The Next Big Thing
Posts about next generation technologies and their effect on business.

Displaying articles for: February 2010

CMMI for Services: New, but necessary?

I read frequently that ITIL (IT Infrastructure Library) is the most widely used set of best practices in our industry today. That is no surprise considering the importance of service management in delivering the full value of IT in support of the business. But while the "infrastructure" side of IT has endorsed ITIL, application engineers and software organizations are just catching on.  Just about this time last year, the Software Engineering Institute (SEI) released a new model in its constellation of CMMI reference models called the CMMI for Services (CMMI-SVC).


The SEI was looking to fill a need for its current users seeking service establishment, management, and delivery best practices. So they kept the project management, process management, and support discipline process areas fairly intact from the CMMI for Development (CMMI-DEV), but replaced the engineering process areas with seven ITIL-like ones: Strategic Service Management, Service System Development, Service System Transition, Service Delivery, Capacity and Availability Management, Incident Resolution and Prevention, and Service Continuity.


Here's my take on it. ITIL V3 was designed to extend IT Service Management into a holistic, end-to-end lifecycle capability for the enterprise.  CMMI-SVC lets current CMMI-DEV users get a "taste of ITIL" and gain some process and performance benefits (including a more relevant CMMI rating if services are what they do). It's a step in the right direction toward getting software and systems engineering organizations and their processes more integrated with operations and infrastructure engineering and the work they do. The CMMI models do integrate well with ITIL, and they provide complementary best practices, beyond the new services practices, that are critical to any IT organization. The depth of detail and guidance that the ITIL V3 books provide still means they should be your first reference for IT service management, however.


My recommendation is that all services organizations continue to implement ITIL V3 practices, taking a full service-lifecycle view.  If you already have CMMI-DEV process credentials, keep them.  Together, these will give you more than sufficient CMMI-SVC "cover" if you need to achieve or maintain a CMMI rating.

When does a fad just become part of reality?

Recently a number of blog posts have been talking (again!) about the death of SOA. As I look around, the death-of-SOA discussions started way back in January of last year. Back when we were starting this blog, I did a number of entries on service-oriented architectures - but that was half a decade ago. Sure, there were some fad-like tendencies around SOA, just as there are around cloud. I'd argue that SOA was a prerequisite for cloud to move beyond IaaS.


It reminds me of a situation at a large non-US government client in the late '90s, where someone asked, "What about that whole object-oriented buzz a few years back? Whatever became of that?" In that case, OO had become so omnipresent that it was indistinguishable from the technology they were using. It was just part of the way things were. I believe SOA has entered that same situation: the techniques are now everywhere, and therefore it is a non-issue.


At least it is not the other situation I've encountered. I used to be part of the AI group that supported GM in the late '80s and early '90s, and there the feeling was always: "If we can do it, it can't be AI."

Feb 23rd 1927 – the FCC was created

Back on February 23rd, 1927, Calvin Coolidge signed the Radio Act of 1927, creating what is now the FCC (then called the Federal Radio Commission, or FRC). The act also recognized broadcasters' right to "free speech" (later modified by the Fairness Doctrine in 1949). That free speech was limited, though, by stating: "No person within the jurisdiction of the United States shall utter any obscene, indecent, or profane language by means of radio communication."


I wonder what they'd say about the material and the various delivery mechanisms available today...

Labels: Communications

Personal Genome Costs Plummeting



IEEE Spectrum had an article on The $100 Genome. Today, for around $48,000, you can have your personal genome sequenced; just a decade ago it took a national effort and 13 years. In the not-too-distant future (less than a decade) any of us could have it done at very low cost, and many are questioning whether its use might some day be mandated. At that price it falls below the cost of many of the tests people regularly undergo.
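For a rough sense of the trend those figures imply, here is a quick back-of-envelope sketch; the nine-year horizon is my assumption for "less than a decade", and only the $48,000 and $100 figures come from the article.

```python
# Rough arithmetic on the cost trend cited above: from ~$48,000 today toward
# a ~$100 genome within "less than a decade" (nine years assumed here).
import math

cost_today = 48_000.0
cost_target = 100.0
years = 9                                   # assumed horizon

fold_reduction = cost_today / cost_target   # ~480x cheaper
halvings = math.log2(fold_reduction)        # ~8.9 halvings
print(f"{fold_reduction:.0f}x cheaper, i.e. the cost must roughly halve every "
      f"{years / halvings:.1f} years to hit the target")   # ~every year
```

In other words, sequencing costs would have to keep halving roughly once a year, a pace at least as fast as Moore's Law.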



This cost has dropped so precipitously largely because of better sensing technologies and computer analysis capabilities.


Using this kind of testing should significantly improve human lifespan by diagnosing conditions early and addressing their likely outcomes (possibly with gene therapy).

Shell and HP agree to High-resolution Seismic Sensing Network

A while back I mentioned the Central Nervous System for the Earth activity out of HP Labs. The fruits of those activities are starting to become more public.

 

"This strategic relationship with Shell is a cornerstone in HP's blueprint for an information ecosystem that empowers people to make better, faster decisions to improve safety, security and environmental sustainability while transforming business economics. Sensing solutions are positioned to provide a new level of awareness through a network of sensors, data storage, and analysis tools that monitor the environment, assets, and health and safety."

What Do Organization Structures Really Look Like?

Traditional organization charts typically depict the management hierarchy and maybe some committees and dotted line reporting relationships.  This is inadequate for today's organizations.  Today there are many more roles and relationships by which business actually gets done.  In general, we don't know what our business organization structures really look like.


I reported in August on a new specification proposed in response to the Organization Structure Metamodel RFP (Request for Proposals) issued by the OMG (Object Management Group).  The RFP seeks proposals for an organization structure modeling language.  This new proposal challenges traditional ways of representing organization structures.  Organization models typically focus on the management hierarchy of "business units" with annotations of some exceptions.  The new specification recognizes that these divisions, departments, groups, teams, etc., represent only one general type of relationship in an enterprise.  The following are additional types of relationships that should be considered.



  • Matrixed product development teams

  • Committees, task forces, work groups

  • Project teams, strategic initiative teams

  • Participants in business processes or choreographies

  • Business partner relationships, liaisons, contracts

  • Ad hoc collaborative relationships

  • Exchanges of materials, work products, business transactions


Each enterprise should consider the types of relationships of interest in an organization model and the amount of update activity this will require to keep the model current.


Each of these relationships has people involved in specific roles.  The relationships and the roles outline how work actually gets done.  Some of these relationships are relatively stable, and some are quite dynamic.  Most of them are not new, but have not been viewed as aspects of an organizational structure.  In a recent BP Trends article, "The Process Managed Org Chart: The End of Management and the Rise of Bioteams," Peter Fingar describes some radically different organizational structures that may involve additional types of relationships.
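To make this more concrete, here is a minimal sketch of an organization model that treats the management hierarchy as just one relationship type among many. The names and relationship types below are purely illustrative; they are not taken from the OMG specification or the proposal discussed above.

```python
"""Sketch of an organization model with typed relationships beyond the org chart."""
from collections import defaultdict


class OrgModel:
    def __init__(self):
        # (relationship_type, group_name) -> {person: role}
        self.relationships = defaultdict(dict)

    def add(self, rel_type, group, person, role):
        """Record that a person plays a role in a group of a given relationship type."""
        self.relationships[(rel_type, group)][person] = role

    def relationships_for(self, person):
        """Everything a person is involved in, across all relationship types."""
        return [(rel_type, group, members[person])
                for (rel_type, group), members in self.relationships.items()
                if person in members]


org = OrgModel()
org.add("management_hierarchy", "IT Services", "Alice", "director")
org.add("project_team", "CRM Upgrade", "Alice", "sponsor")
org.add("committee", "Architecture Review Board", "Alice", "chair")
org.add("business_partner", "Acme Outsourcing Contract", "Bob", "liaison")

print(org.relationships_for("Alice"))
```

The point of the design is that a query such as relationships_for("Alice") returns committee, project, and partner roles alongside the reporting line, rather than only the org-chart box she sits in.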


In the past it was necessary to make announcements or draw diagrams on paper and distribute copies in order to provide this information.  Today this information can be maintained by computer to be readily available to everyone who has a need to know.  We shouldn't need to keep track of who is doing what in our heads or send out email queries to find out from people we think may be informed.  Computers can manage the complexity of a model with many types of relationships, they can perform timely update and distribution of this information, and they can provide the selective display of information to meet particular needs.  A more robust model will provide better definition of responsibilities, and provide a more complete representation of the organization for planning, coordination and business transformation.


A modern enterprise should have an on-line, real-time organization model that is updated as the organization changes and reflects the relationships by which work is getting done.  Some updates may be accomplished by manual input as relationships are established.  Most updates should come from the systems that support these relationships such as business process management systems, project management systems and contract management systems.  This real-time model will not only provide information for people, but should support the operation of systems for reporting, exercise of control and coordination of activities.  Capture of this information in an integrated model will reduce administrative overhead and ensure consistency among different systems.  In addition, such an organization model may retain historical information on roles and relationships to help identify people with insights or experience from past relationships.


Enterprises need this information now.  It will become increasingly important as organizations continually change to meet new business challenges and establish relationships outside the traditional management hierarchy with shared services, outsourcing and virtual enterprises.

Is the power shortage over for sensors?

In a recent IEEE Spectrum there was an article about Wireless Sensors that Live Forever. Sensing is one of the low-hanging fruits of nanotech. There is an ever-increasing ability to gather information and even have it analyzed at the edge of the enterprise, to the point where almost every business can gain value from some form of sensing. One of the issues, though, is how to let such a device communicate what it has gathered, since that takes power -- power on an ongoing and regular basis. Until now, being able to generate or store that power has been a limiting factor, since even the best long-term batteries small enough to embed didn't last more than a year. With new power-harvesting techniques it will be possible to embed sensors for the long term - like in the concrete of bridges.


This article talks about a couple of techniques to generate power over the life of a sensor:



Both use a MEMS-based, cantilevered piezoelectric power generator.


Peter Hartwell, a senior researcher at HP Labs, in Palo Alto, Calif., says the technology is "definitely a step forward."


"Energy harvesting research is important to HP Labs, which is developing sensors for its Central Nervous System for the Earth project, a vision of peppering the world with minuscule sensors. Power is one of the remaining obstacles in making the vision a reality; HP Labs' accelerometers require about 50 mW."


This abundance of low-current power will definitely help enable the age of data abundance.

Hundreds of Thousands of Sensors Make CeNSE for Shell




Hundreds of thousands of sensors, thousands of wireless points, and petabytes of data make CeNSE for Shell


At 0800 GMT today, Shell and HP announced the first major project that will demonstrate the fundamental concepts behind CeNSE.


CeNSE (Central Nervous System for the Earth) was conceived by Stan Williams, Senior HP Fellow and director of HP's Information and Quantum Systems Lab (IQSL), where revolutionary technology is being developed in anticipation of trillions of sensors that will eventually be an integral part of every aspect of our lives, our work, and our earth.


The collaborative project announced by Shell and HP focuses the fundamentals of CeNSE on the practical application of finding and producing petroleum.  Together, the two companies are bringing complementary capabilities together to drive innovation by developing a wireless sensing system to acquire extremely high-resolution seismic data on land.  The result will be a significant leap forward in oil and gas exploration and production.


The system begins with a very small MEMS accelerometer created in the IQSL lab by Pete Hartwell and announced last November.  (Check out the Scientific American article, "World Changing Ideas," in the December 2009 edition, page 58, featuring Pete and his sensor.)  Not only is this sensing device small, rugged, low power, and inexpensive, it is also sensitive - 1,000 times more so than the sensor in the accelerometer in your Wii controller or your car's air bag.  That makes it perfectly suited to measuring very minute vibrations with extreme accuracy, which in turn makes it the perfect sensor upon which to build an entirely new seismic imaging device.


The resolution of a seismic image is greatly affected by the quality and the density of the data retrieved during a seismic survey.  Because of the sensors' MEMS heritage, Shell will be able to deploy hundreds of thousands of sensor nodes (compared to tens of thousands for current systems) within the same weight, cost, and crew-size constraints of current seismic surveys.  That, combined with the superior sensing range and accuracy, will result in subsurface images that are vastly superior (think HDTV compared to a standard TV picture) and will transform Shell's ability to pinpoint abundant new oil and gas reserves.


But just as the CeNSE vision encompasses a system of capabilities, the sensor will be only one part of the total HP-Shell solution.  All of those sensors need to communicate with a state-of-the-art monitoring and control system - and in this next-generation approach the answer is "lose the cables" and "take to the air."  Traditional seismic sensors are connected by cables that snake across the survey area.  The HP-Shell solution being pursued uses wireless communications to tie it all together, creating a solution that is not only more flexible and resilient but also safer for the employees who deploy it (less weight to heft and fewer 'cable trips').


Then there is the data collected.  Hundreds of thousands of sensor nodes will generate orders of magnitude more data than the massive amounts now collected, resulting in petabytes of data, each byte needing to be validated, stored, and then sent to data centers where high-performance computers turn the raw data into better decisions.  Watch this video to understand how these sensing solutions can open our eyes to a new world of possibilities.
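To see how "hundreds of thousands of nodes" turns into petabytes, here is a back-of-envelope estimate; every figure below is an illustrative assumption rather than a number from the announcement.

```python
# Back-of-envelope data-volume estimate for a land seismic survey (assumed figures).
nodes = 300_000            # "hundreds of thousands" of sensor nodes
sample_rate_hz = 500       # assumed samples per second per node
bytes_per_sample = 4       # assumed 32-bit samples
survey_days = 60           # assumed continuous recording period

bytes_total = nodes * sample_rate_hz * bytes_per_sample * 86_400 * survey_days
print(f"{bytes_total / 1e15:.1f} PB")   # ~3.1 PB
```

With assumptions anywhere in this ballpark, a single multi-week survey lands squarely in the petabyte range.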


And this system for land-based seismic imaging won't be effective without the innovations in seismic survey methods and processes being brought to the collaboration by Shell.  It takes a systems view, with a critical rethinking of everything conventional, to create outcomes that are revolutionary, not evolutionary.


CeNSE is a terrific vision of the future.  It knits together technology advancements, emerging personal and business demands, and new skills and thinking to create a vision that is not only plausible but highly probable.  The HP-Shell collaboration to build the next generation of land-based seismic sensing capabilities demonstrates that the CeNSE vision can be translated into a practical solution that will produce superior, high-value business outcomes.  For Shell, this means gaining a competitive advantage in exploring difficult oil and gas reservoirs and fully realizing the potential of Shell's processing and imaging technology in land-based exploration and production.


Welcome to the brighter energy future of sensors and seismic imaging - it all makes good CeNSE.

National Engineers Week – in the US

February 14th-20th is National Engineers Week in the US, as opposed to February 8th-13th in Ireland or the whole month in Ontario. Engineers Week includes Introduce a Girl to Engineering Day on February 18th, with many events around the country as part of a larger effort to help children discover engineering.

Labels: Engineering | Youth

First Wind-cooled Data Center

HP just opened the large 360,000 sq. ft. Wynyard data center. This green data center project was under way at EDS before the HP purchase. It uses the cool, continuously blowing North Sea air and a unique multilevel, low-pressure airflow design to minimize the cost of cooling.


"The air runs through a massive bank of modular filters to remove dust and other contaminants before it circulates in a massive cavity, called a plenum, below its data center halls.


The air is forced up through the floor and runs over the front of server racks before being exhausted. The system keeps the hall at a constant 24C (75.2F). When it is cold outside, some of the exhausted heat is recirculated with the outside air to maintain the right temperature."


The PUE for the data halls themselves is around 1.16. Some of the Green features of the data center can be seen in this video.


"Running at a full load, HP has calculated that the Wynyard facility has a 1.2 PUE, meaning that for every 1.2 watt of electricity used to power IT equipment, 1 watt is used for cooling and other facility needs. That makes it HP's most efficient data center"


PUE is being used by the EPA in the US to determine Energy Star ratings for data centers. Various cloud vendors are using PUE for comparison as well, and HP's facility appears to shape up pretty well in that comparison.


Energy efficiency is not everything when it comes to data centers, though; as in all modern data centers, security is critical:


"Security is tight. Access cards and biometric details are needed to access halls. Server cabinets are locked, and the keys are only released if the particular engineer has permission encoded on an access card. The entry system to the data halls prevents two people from entering at the same time. The data center also has a high perimeter fence, reinforced walls and constant security."

A more organic approach to computer security

I was catching up on my reading the other day and I came across an article on using Swarm Intelligence techniques to identify computer malware, describing research from Wake Forest University and the Pacific Northwest National Laboratory (PNNL). In my predictions for 2010 I listed security as one of the areas where significantly different techniques are going to be required. This article reinforced that perspective.


The article describes a detection approach in which different kinds of assessment agents move around the corporate network looking for anomalies. Once an unusual situation is found, they leave a trail (like an ant's pheromone trail) back to it. Other assessment agents can follow the trail, look at the issue from other perspectives, and develop a better understanding of it. This approach minimizes false positives, since reports of unusual events are more thoroughly analyzed before a threat signal is raised.


"The system comprises a hierarchy of agents that run in specially designed swarm software deployed on all the hosts in a protected network. At the bottom of the hierarchy, the ants are simple programs that look for a particular statistic as they travel from host to host. Each ant has a memory of what it finds to be normal across the previous five hosts it visits.


One level up, a sentinel agent runs on each host. On the basis of information it collects from the ants, the sentinel forms an idea of the host's normal state. When an ant finds something unusual, it reports this to the host sentinel. For example, if the ant reported 8,000 connections per minute, the sentinel might see this as an anomaly. In that case, it would reward the ant by raising its pheromone value. The ant stores this information. As it moves on to other hosts, its high pheromone value attracts other ants and communicates the information about the host that raised its pheromone value. This encourages the other ants to investigate that host as well.


If these additional ants find other anomalies, they would also be rewarded, which would attract ants from other hosts. A certain threshold of messages triggers a threat signal. 


Sergeant ants haven't yet been implemented in the prototype system, but they will sit between the computing ecosystem and human analysts. When a threat signal is triggered, the sergeants will report it to a human for further action. The sergeants also let humans specify what types of behavior the system allows. For example, a system administrator could tell the sergeant not to allow peer-to-peer file sharing, and the sergeant would create agents to disable this on all the hosts."
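To make the mechanics described above concrete, here is a heavily simplified sketch of the pheromone idea. This is not the PNNL/Wake Forest implementation: the classes, the 10x-baseline anomaly rule, the report threshold, and the connections-per-minute statistic are all illustrative assumptions, and the way high-pheromone ants attract others is reduced to a comment.

```python
"""Toy sketch of ant/sentinel anomaly detection (illustrative assumptions only)."""
import random
from dataclasses import dataclass, field

ANOMALY_THRESHOLD = 3   # anomaly reports needed before a threat signal (assumed)
NORMAL_WINDOW = 5       # ants remember the last five hosts (from the article)


@dataclass
class Sentinel:
    """One per host; forms an idea of 'normal' from what the ants report."""
    name: str
    anomaly_reports: int = 0

    def report_anomaly(self, ant):
        ant.pheromone += 1.0        # reward the ant; in the full scheme, a high
        self.anomaly_reports += 1   # pheromone value attracts other ants here
        if self.anomaly_reports == ANOMALY_THRESHOLD:
            print(f"Threat signal raised for {self.name}")


@dataclass
class Ant:
    """Simple agent that tracks one statistic as it moves from host to host."""
    pheromone: float = 1.0
    history: list = field(default_factory=list)   # recent 'normal' readings

    def visit(self, host, stat_value):
        baseline = sum(self.history) / len(self.history) if self.history else None
        if baseline is not None and stat_value > 10 * baseline:
            host.report_anomaly(self)             # unusual reading: tell the sentinel
        else:
            self.history = (self.history + [stat_value])[-NORMAL_WINDOW:]


hosts = [Sentinel(f"host-{i}") for i in range(4)]
ants = [Ant() for _ in range(4)]
for _ in range(20):
    for ant in ants:
        host = random.choice(hosts)
        # hypothetical statistic: connections per minute, spiking on host-2
        stat = 8000 if host.name == "host-2" else random.randint(50, 200)
        ant.visit(host, stat)
```

Run it and the host with the inflated connection count accumulates anomaly reports until a threat signal is raised, which is the essence of the ant-and-sentinel division of labor described in the quote.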


Although it is still a prototype:


"The researchers created four digital ants of the 64 types then eventually want. To test their effectiveness, they set up a bank of computers and released three worms into the ant-infested Linux-based computers. The four digital ants in the computers had never seen the viruses before, yet identified the virus by only monitoring."


 

Touch skinning

Physorg.com had a post on how touch-sensing capabilities can be applied to many types of surfaces. Being able to add a thin layer of touch-sensitive material to a table or other surface could make an interaction environment much more aware. Adding this to the projected multi-touch environment mentioned in a previous post would enable an immersive environment that could be interacted with in multiple dimensions.


It will likely be a while until we see this in our daily lives, but these technologies can definitely transform mundane objects into powerful interface devices.

Amateur Social Engineers

In a previous blog, I addressed the need for Security Awareness Training to recognize and avert social engineering attacks.


A recent event in the U.S. demonstrates a blatant and amateurish attempt at social engineering.  This case involves an independent, conservative investigative reporter (or, depending on your perspective, an activist), James O'Keefe, and three associates.  O'Keefe is known for successfully infiltrating political organizations like ACORN, posing in various undercover roles to expose information, wrongdoing, and the like, using classic social engineering tactics.


The latest event, involving the district office of U.S. Senator Mary Landrieu of Louisiana, has been widely covered in the media, including articles from CNN, Fox News, and many other outlets.  A brief quotation from the CNN article illustrates the social engineering tactics that were used:


The two men were "each dressed in blue denim pants, a blue work shirt, a light green fluorescent vest, a tool belt and a construction-style hard hat when they entered the Hale Boggs Federal Building," the release noted.


After they entered the building, the two men told a staffer in Landrieu's office they were telephone repairmen, according to the release and Rayes' affidavit. They asked for -- and were granted -- access to the reception desk's phone system.


O'Keefe, who had been waiting in the office before the pair arrived, recorded their actions with a cell phone, said the affidavit by Rayes.


Flanagan and Basel later requested access to a telephone closet, claiming they needed to perform work on the main phone system, the release and affidavit stated.


According to Rayes' affidavit, the two men went to a U.S. General Services Administration office on another floor and requested access to the main phone system. A GSA employee then asked for their credentials, and the two men said they left them in their vehicle, the affidavit said.


Whatever the aims of O'Keefe and his associates, they are currently charged with entering a federal office under "false pretenses for the purpose of committing a felony."


However, this story sounds like it might have come straight from Kevin Mitnick's book "The Art of Deception".  The one difference is that these men appear to be amateurs in the field of social engineering.  Why?   They got caught.  And, to me, the reason seems to be that they did not "do their homework" to prepare for unforeseen circumstances (i.e. being asked to show credentials).


The lesson to be learned from this (for those in IT and security) is that the senator's office appears to have done an adequate job of training its staff with the procedures and security awareness needed to spot and avert social engineering attacks.  The staff involved did not fall blindly for the ruse of workmen's clothing and hard hats that made the men look like telephone service personnel.  Rather, the men were sent to the proper office (the General Services Administration), and once there they were asked to show proper credentials.


The result?  BUSTED !!!


Even if the two men posing as telephone repairmen had obtained false credentials, I would hope that the GSA employee would have checked to ensure that "maintenance had been scheduled," or called the telephone service provider to verify the employment and activities of the two men.


And to borrow the subtitle of Kevin Mitnick's book again, that is how you "Control the Human Element of Security". 
