The Next Big Thing
Posts about next generation technologies and their effect on business.

Displaying articles for: January 2009

Case Management for SOA

In a Service Oriented Architecture (SOA), business capabilities are accessed as services by automated exchange of messages across organizational boundaries (see: OASIS SOA Reference Model). Automated business processes are essential components of this exchange, driving the exchange of messages and the associated work to be performed in the service provider and the service consumer. However, the conventional BPMS (Business Process Management System), represented by BPMN and BPEL standards, does not effectively address a class of business process commonly described as case management.

A basic consumer-provider relationship involves a request and a response. However, many service interactions are more involved, continue over some period of time, and may move in different directions depending on the business circumstances of the consumer, the provider, and potentially other entities directly or indirectly involved in the business transaction. Consequently, the internal operations of participants may not conform to a predictable sequence of activities.

Case management involves coordination and control of activities related to a particular subject matter, where the state of the subject as well as other circumstances will determine what should be done next, and when. An example of this is the management of a patient case in a hospital. The "case" is represented by the patient hospital record. The patient will be given various examinations, tests, treatments and services as his or her needs are diagnosed, and as the medical condition changes. In some cases, what is done next is determined by a medical professional, in some cases it might be determined by a rule, it might be scheduled, or it might be in response to a patient request or change in symptoms.

In the context of commercial services, we might consider a machine repair service. A repair request is given to the service provider. The problem is diagnosed and a preliminary solution and cost are proposed for consumer review; there may be alternatives suggested. The consumer may request certain changes or approve the approach. There may be continuing interactions as the repair is undertaken and additional defects are discovered. The provider may engage different supporting services for special skills, delivery of parts, or inspection of the results. This engagement could be hours or weeks, depending on the complexity of the machine and the availability of parts and personnel.
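The repair scenario can be sketched as state-driven activity selection. Here is a minimal illustration (the class and activity names are hypothetical, not drawn from any standard): unlike a fixed-sequence process, the next activity is chosen at runtime by evaluating the current state of the case.

```python
# Minimal case-management sketch: next steps are derived from case state,
# not from a predetermined sequence of activities.

class RepairCase:
    def __init__(self):
        self.state = "received"
        self.approved = False
        self.defects_found = []

    def next_activities(self):
        # Rules inspect the case; they do not encode a fixed flow.
        if self.state == "received":
            return ["diagnose"]
        if self.state == "diagnosed" and not self.approved:
            return ["propose_solution"]
        if self.approved and self.defects_found:
            return ["order_parts", "schedule_repair"]
        return []

case = RepairCase()
print(case.next_activities())   # ['diagnose']
```

As additional defects are discovered or the consumer approves an approach, the same rule evaluation yields different activities - which is the essence of case management as described above.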

Such processes will be common in an enterprise that implements SOA or engages in SOA relationships, in part because SOA will bring automation to conventionally manual service engagement and operation. In most cases, both the consumer and the provider of a service will need to apply a case management approach in order to accommodate the flexibility of interactions. At the same time, it is important to specify the potential activities and interactions so that performance is reasonably reliable and measurable, and consumer expectations can be appropriately set and addressed.

Bruce Silver discussed the need to address case management business processes in his February 2007 blog post, "What is Case Management?". In the Object Management Group Business Modeling and Integration task force, the need to address case management processes was first discussed about a year ago. As a result, the task force developed an RFI (Request for Information) entitled Dynamic Business Activity Models RFI, and responses to the RFI by Cordys and Tibco were reviewed at the September 2008 OMG meeting. Since then, Henk de Man of Cordys has published an excellent article at BPTrends entitled Case Management: A Review of Modeling Approaches.

Work is currently under way to prepare an OMG RFP (Request for Proposals) for case management modeling. While nothing is settled at this point, the solution is likely to be an extension to the BPMN 2.0 specification (currently under development at OMG). A BPMN 2.0 extension should ensure that a choreography (interaction specification) can be specified to complement a case management process, and that case management processes can be seamlessly incorporated, where appropriate, into the business processes of an enterprise.

Enterprise Adoption of Cloud Computing – or Not?

It looks like cloud computing is beginning to enter the "trough of disillusionment" on Gartner's oft-quoted "Hype Cycle". Last summer, Gartner positioned "cloud computing" on the emerging side of their hype cycle graph, nearly two-thirds of the way toward the "peak of inflated expectations" - an accurate representation at the time. In my view, cloud computing has now passed the peak and is headed down the slope, as the initial excitement wears off.

For the past several months, it seems as though every presentation and article starts with an admission of "cloud is over-hyped... but here is our view...". More recently, I'm seeing articles that question why enterprises are not adopting cloud services more quickly. IT-World has a piece called "The Case Against Cloud Computing" that provides a good summary of the issues, challenges and barriers enterprises face in trying to move existing applications from private datacenters to public cloud platforms such as Amazon, Google, or Microsoft's Azure.

"How do we use cloud computing?" shouldn't be the first question. The question should center on "What is the business trying to achieve?" Is it simply to reduce IT costs? Cloud is not the only way to reduce cost - there are existing and proven alternatives, such as consolidation, virtualization and automation. The use and adoption of cloud services is one of several methods for doing more with less - a means to an end, rather than an end in itself.

It is common in the IT industry to view any major new development or theme as a replacement for current systems. For example, the client/server movement was supposed to be the end of mainframes in datacenters, but that didn't happen. Client/server did displace some future mainframe sales at the time, but it was largely additive. There were also early suggestions that the Internet would replace private corporate networks. Instead, the Internet was used to extend, rather than replace, private enterprise networks.

Cloud computing will follow a similar path over time, as enterprises employ new cloud capabilities to add value and extend business functionality, rather than as a straight replacement for today's systems.

Over time, cloud services will replace existing systems to varying degrees. Enterprise IT should focus on how they can add value to the business through the use of new cloud services and applications.

Labels: IT Services

Is The Next Big Thing a fault-less system?

I was fortunate to attend a Software Reliability lecture presented by Dr. Samuel Keene, past president of the IEEE Reliability Society. The lecture reinforced many of the basic principles we learned as systems engineers over forty years ago. One "Path to Failure" lecture graphic jumped out at me as extremely important, and I hope I've faithfully reproduced it below:

To set the stage and to gain our attention, Dr. Keene recalled several notorious system failures and labeled each with an assignable cause:

  • Massive southeast power outage (2008) - administrative and power control system co-located

  • Mars Climate Orbiter (1998) - mix of metric and imperial units

  • Patriot missile misfire (1991) - operational profile change

  • DSC communications failure (1991) - 4 bits changed in 13 LOC, not regression tested

  • Alleged F-15 equator navigation system error - operational environment change

  • Jupiter flyby - power supply switch programmed to activate if loss of communications exceeded 7 days

Note: Utility companies, space agencies, and military units are normally not forthcoming with the detailed failure reporting required for deep analysis.

The implementation oversight that starts the path to system failure can precede the fault activation by the widest possible range of times - anywhere from nanoseconds to infinity.
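The Patriot misfire in the list above illustrates that latency well. Per the public GAO report on the incident (figures from the report, not from the lecture), the system clock counted tenths of a second in a 24-bit fixed-point register, and 0.1 cannot be represented exactly in binary - a fault that lay dormant until the system had been running for roughly 100 hours:

```python
# Patriot range-gate drift: a latent fault activated only after long uptime.
# The 24-bit register held the chopped value 209715/2097152 instead of 0.1 s.
stored = 209715 / 2097152        # register's approximation of 0.1 s
err_per_tick = 0.1 - stored      # ~0.000000095 s lost on every clock tick
ticks = 100 * 3600 * 10          # clock ticks in 100 hours of operation
drift = err_per_tick * ticks     # ~0.34 s of accumulated clock error
print(round(drift, 2))           # a Scud at ~1676 m/s covers >500 m in that time
```

The oversight was made at implementation time; the fault activated only under an operational profile (continuous multi-day operation) the designers had not tested.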

This statement made me think about all the systems I have designed, programmed, tested, installed, and changed over the years.

Is it possible that I have never created a fault-less system?

What can I do to create a fault-less system next time?

Or, can I only be expected to create a system with faults that behaves in predictable, safe ways when those faults are activated?

ICD-10: Is this a bigger issue than Y2K for healthcare?

SB628 mandates that the U.S. move from ICD-9 to ICD-10, a much more complex scheme for classifying diseases. The Centers for Medicare & Medicaid Services (CMS) has published the final rule adopting ICD-10 with a compliance date of October 1, 2013. As a result, U.S. healthcare organizations - including government and commercial payers and providers - must prepare to meet the demands of ICD-10. The change expands diagnosis codes from 5 to 7 characters, but it doesn't stop there: procedure codes also expand to 7 characters, and the content goes from numeric to alphanumeric. The following table illustrates the impact relative to ICD-9, the outgoing coding scheme:

Code Set    Diagnosis Code Structure      Procedure Code Structure
ICD-9       3-5 character alphanumeric    3-4 character numeric
ICD-10      3-7 character alphanumeric    7 character alphanumeric
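The structural shift can be made concrete with simplified shape checks. This is only a sketch: these patterns test code format, not membership in the actual code sets, and they cover only the common ICD-9 diagnosis forms (plain numeric plus V and E prefixes).

```python
import re

# Simplified format checks - structure only, not clinical validity.
ICD9_DX = re.compile(r'^(\d{3}|V\d{2}|E\d{3})(\.\d{1,2})?$')       # 3-5 chars
ICD10_DX = re.compile(r'^[A-Z]\d[A-Z0-9](\.[A-Z0-9]{1,4})?$')      # 3-7 chars

print(bool(ICD9_DX.match("250.00")))     # True  - ICD-9 style, numeric
print(bool(ICD10_DX.match("S52.521A")))  # True  - ICD-10 style, alphanumeric
print(bool(ICD9_DX.match("S52.521A")))   # False - new format breaks old edits
```

The last line is the crux of the migration problem: every field edit, screen, and database column sized and typed for the old format has to change.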

Y2K only added a two-character century indicator (18, 19, or 20)! As with Y2K, this set of changes will reach deep into payer systems and require substantial programming and testing activity for minimum compliance. Unlike Y2K, ICD-10 will require - or enable - far more substantial changes in business operations and procedures.

So, is ICD-10 really a bigger issue than Y2K? Let's hear from you.

Labels: Government

Engineering and development curricula - are they ready for the future?

I've written a couple of times (here and here) about changes to engineering organizations' curricula.

I came across an interview with Bjarne Stroustrup on some similar concerns.

When I studied engineering, the professor always assumed his class knew the language (no matter how obscure). We were never really taught programming, style, or even basic design. It was survival of the fittest. If you couldn't handle the basics on your own, you quickly dropped out.

Hopefully things have progressed since the dark ages. The nice thing about that approach was that we gained a very solid understanding of the underlying structures of the computer, since we were doing much of our work at a fairly low level, with a higher-level specialty language thrown in as needed. Now, with all the higher-level language use, libraries, and reusable capabilities, students may never be forced to understand what is actually going on. Having clear and clean interfaces and documentation is more important than ever. I also looked at the list of the 25 most dangerous programming errors and wondered how this information will be used by universities.

Standards for modeling interfaces, workflow, and other interface definition techniques will be more important than ever as these people enter the workforce, and yet many organizations do not teach the advances in these areas. It may be that the curriculum will bifurcate, with one concentration on lower-level approaches and another around higher-level industry standards and solution integration. Some view this as the difference between computer science and a trade school. I, for one, do not. Automation is the strategic solution to the current monopoly of the IT budget by maintenance and operations.

Labels: Education
About the Author(s)
  • Steve Simske is an HP Fellow and Director in the Printing and Content Delivery Lab in Hewlett-Packard Labs, and is the Director and Chief Technologist for the HP Labs Security Printing and Imaging program.