The Next Big Thing
Posts about next generation technologies and their effect on business.

What's this thing called DevOps?

During a conversation last week with a Forrester senior analyst (Dave West) from the UK, I learned about a new term and a new industry "movement" - DevOps.  He said that there has been recent interest in this topic, which combines, both literally and figuratively, the terms "developer" and "operations".  So of course when our call ended, I did some searching on the term and learned that there is indeed a fledgling "DevOps" community of developers and sysadmins who have recognized - as many of us have - that applications should be built with full knowledge of the operational environment in which they will run. This group of primarily UK/Northern European IT professionals has collaborated on recommendations to remove some of the silos between those who build and those who run applications. This represents a cultural change as much as a need for integrated teams and processes. Two years ago in this space I spoke of the need for developers to use ITIL practices "to understand the real impact of new applications on the run-time environment and how they need to be designed differently for the complex, distributed environments we have today".  Perhaps not much has really changed, but at least the conversation has broadened.


The DevOps community sees agile and lean teams as better suited to collaborate and drive DevOps thinking because of their matrixed approach, but I don't see why two-way communication, which is both the bridge that is needed and the current barrier, should be limited to small teams. We need to see information pushed and pulled equally by both developers and operations.  Silos within the IT organization, or those that arise when traditional outsourcing and multi-vendor partners are engaged, are challenges, but they are not insurmountable.


Just as one example, when developing a web application I really do need to understand the capacity (network, servers, storage, database) of the current run-time environment as well as the security levels, and I need to work with both sysadmins and service management personnel so that they understand the service level, security and run-time requirements, work out conflicts with them, and also engage them as necessary for integration and other testing. EDS had the right idea with an initiative called Designed for Run™ that has continued to evolve towards what I believe the DevOps folks envisage, but there is still work to be done.


I really do agree with the recommendation of the DevOps gang that a multi-disciplinary approach must be taken across developers, testers, change and release management, and operations engineering groups such as system administrators and those who manage system capacity and availability. They suggest that team members might gain expertise across the divide.  Understanding the best practices from both CMMI and ITIL would help, as would having the IT skills and experience to go along with them.  I've studied this "CMMI & ITIL" process integration area fairly deeply and have spoken at several conferences on the need for integrated application development, service management and infrastructure engineering processes. Drop me a line if you would like to discuss further or pick up a presentation.

Data Virtualization: Essential but Approach with Caution

Data Virtualization is the current marketing banner for Enterprise Information Integration (EII). It is an important adjunct to SOA, but must be undertaken with caution.


David Linthicum, Linthicum Group, and Bradley Wright, Progress DataDirect, recently presented an ebizQ webinar on "Putting Your Data to Work for Your Cloud, BPM, MDM and SOA Project."  The thrust of this presentation was that data virtualization will provide consistent, cross-enterprise access to data in heterogeneous data stores.  This is not a new capability, but was introduced as EII a number of years ago.


When asked the difference between data virtualization and EII, Bradley Wright indicated that data virtualization includes the capability to perform updates.  While not all EII products supported updates, some did, so it appears the primary difference is marketing.  At the same time, as I will discuss below, data virtualization should not be used for updates.


The fundamental concept of data virtualization and EII is that data is accessed from multiple, heterogeneous databases through a virtual database that provides an integrated, consistent view of data from these multiple sources.  Queries are expressed in terms of the virtual database schema and translated as required, and data from multiple sources is transformed and integrated to provide a response that is consistent with the virtual database schema.
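
To make the mechanics concrete, here is a minimal sketch in Python of a query against a virtual schema being translated into per-source queries and transformed into one consistent view. The two in-memory SQLite databases, the plant and production schemas, and the unit conversion are stand-ins of my own, not any particular product's approach.

# A minimal sketch of the data virtualization idea: a query against a virtual
# schema is translated into source-specific queries and the results are
# transformed into one consistent view. Two in-memory SQLite databases stand
# in for heterogeneous production stores; all names here are illustrative.
import sqlite3

# Source 1: a European plant storing output in metric tonnes.
eu = sqlite3.connect(":memory:")
eu.execute("CREATE TABLE output (plant TEXT, week INTEGER, tonnes REAL)")
eu.executemany("INSERT INTO output VALUES (?,?,?)",
               [("Lyon", 1, 120.0), ("Lyon", 2, 130.5)])

# Source 2: a US plant storing output in short tons under a different schema.
us = sqlite3.connect(":memory:")
us.execute("CREATE TABLE prod (site TEXT, wk INTEGER, tons REAL)")
us.executemany("INSERT INTO prod VALUES (?,?,?)",
               [("Akron", 1, 95.0), ("Akron", 2, 101.2)])

# Virtual schema: production(plant, week, tonnes). Each source contributes a
# query plus a transformation into the common representation.
SOURCES = [
    (eu, "SELECT plant, week, tonnes FROM output", lambda r: r),
    (us, "SELECT site, wk, tons FROM prod",
         lambda r: (r[0], r[1], round(r[2] * 0.907185, 1))),  # short tons -> tonnes
]

def query_production():
    """Answer 'SELECT plant, week, tonnes FROM production' on the virtual schema."""
    rows = []
    for conn, sql, transform in SOURCES:
        rows.extend(transform(r) for r in conn.execute(sql))
    return rows

for row in query_production():
    print(row)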


SOA increases the importance of data virtualization because SOA is likely to increase the number and diversity of databases.  I discussed this in my blog last year entitled, "Data Management for SOA."  A service should be loosely coupled and its data stores should be hidden from the service users to maintain flexibility in the implementation of the service.  This conflicts with needs for cross-enterprise views of data for planning and decision-making.  Data virtualization can provide such visibility; however, there are certain realities that must be understood when using data virtualization.  Loraine Lawson touched on some limitations in an interview with Peter Tran and Bob Reary of Composite Software two years ago entitled, "When Data Virtualization Works - And When It Doesn't," but there are additional concerns.


The following paragraphs outline key limitations of data virtualization that must be considered when setting expectations and when using data virtualization to obtain composite views.


Data inconsistencies
A data virtualization product can perform data conversions (e.g., feet to meters), but it can't create data that isn't stored.  For example, if one organization maintains weekly production figures and another maintains monthly figures, these two different measures cannot be reconciled.  If one organization tracks numbers of defects in one set of categories, and another uses a different set of categories, the figures cannot be compared or added.
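
A small illustration of the granularity problem, using invented figures: converting units is mechanical, but splitting a stored weekly total across a month boundary requires a daily breakdown the source never recorded, so any split is an assumption rather than data.

# Unit conversion is easy for a virtualization layer; creating unstored detail is not.
feet = 120.0
metres = feet * 0.3048          # fine: a pure conversion of stored data
print(f"{feet} ft = {metres:.2f} m")

# Week of 29 Jan - 4 Feb, stored only as a weekly total of 700 units.
weekly_total = 700
# How much belongs to January and how much to February? Any split (e.g. the
# 3/7 - 4/7 pro-rating below) is an assumption, not data the source holds.
january_share = weekly_total * 3 / 7
february_share = weekly_total * 4 / 7
print(january_share, february_share)   # 300.0 400.0 -- an estimate, not a fact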


Such problems are fundamental to the business, and if it is important to examine such data across the enterprise, then there must be a transformation initiative to make the data collection and storage consistent with a common scheme.


Process inconsistencies
Some enterprises will have similar business operations that are in different geographies or produce different categories of products or services.  What they do may be similar, but they may have business processes that cannot be compared.  There may be different stages of production or service delivery that are of interest to top management.  The different operations may use the same terminology for phases, but the terms are not applied consistently to the business processes.  This may lead to top management comparing apples to oranges.  Such discrepancies might extend to inconsistent metrics such as the definition of rework, and inconsistencies between sales and the cost of goods sold.


Timing inconsistencies
An enterprise does not operate instantaneously and in lock step.  The orders being received are not the same as the orders being filled and the orders being shipped.  The engineering change issued by the engineering department may be delayed until current inventories are consumed.  Payment is not due on orders shipped but not yet delivered.  A query that combines data from different operations will not represent a consistent view of the enterprise.  That requires the definition of cut-off points and the time for various activities and transactions to reach and record consistent points in their operations.  This is why financial information is not immediately available at the end of a period.
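
A tiny sketch of the timing problem, using invented order records: counts of orders received and orders shipped taken at the same instant legitimately disagree, which is why agreed cut-off points are needed before the figures can be combined.

# Orders already received but not yet shipped make the two totals disagree,
# not because either source is wrong but because the pipeline has not drained.
received = [("A-100", "2011-03-30"), ("A-101", "2011-03-31"), ("A-102", "2011-03-31")]
shipped  = [("A-100", "2011-03-31")]   # A-101 and A-102 are still in flight

print("orders received:", len(received))   # 3
print("orders shipped: ", len(shipped))    # 1 -- not an error, just timing

# A period-end report needs an agreed cut-off and time for shipments to post.
cutoff = "2011-03-31"
shipped_ids = {order_id for order_id, _ in shipped}
backlog = [order_id for order_id, date in received
           if date <= cutoff and order_id not in shipped_ids]
print("received by cut-off but not yet shipped:", backlog)   # ['A-101', 'A-102']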


It is not practical to eliminate all such inconsistencies or wait to accumulate consistent results.  Users of data virtualization must understand such limitations when using the composite data.


Resource overload
A data virtualization service will access data from various production databases.  These databases are not necessarily configured to handle an increased volume of queries.  Some queries may add unexpected workload to a database, or the workload from many potential users may be quite unpredictable.  This ad hoc resource demand could interfere with mainstream business application performance.  In cloud computing, the resource may be available on demand, but there could be unacceptable increases in costs.


Update errors
If data virtualization is used to update databases, it will bypass the applications designed to validate, control and coordinate the updates.  The updates may also be inconsistent with the current state of the production operations.  Furthermore, updates normally performed by associated applications may require coordination and propagation to related operations and applications.  It is very dangerous to bypass the responsible organizations and their applications to update their databases; it should not happen.  Any update should go through the appropriate processes for validation, authorization, control and coordination that are the responsibility of those business operations and their applications.
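
As a sketch of why such bypassing is dangerous, the hypothetical OrderService below stands in for the application that owns a data store; writing to its data directly skips the validation and coordination the owning application would have applied.

# The OrderService class and its rules are invented for illustration only.
class OrderService:
    """Stand-in for the application that owns the orders data store."""
    def __init__(self):
        self._orders = {"A-100": {"status": "shipped", "qty": 10}}

    def change_quantity(self, order_id, qty):
        order = self._orders[order_id]
        # Validation and coordination the owning application is responsible for.
        if order["status"] == "shipped":
            raise ValueError("cannot change quantity after shipment")
        if qty <= 0:
            raise ValueError("quantity must be positive")
        order["qty"] = qty
        # ...and propagation to invoicing, inventory, partners, etc. would follow.

svc = OrderService()

# A direct write through a virtualization layer silently skips those checks:
svc._orders["A-100"]["qty"] = 0          # "succeeds", but corrupts the business state

# Going through the responsible application surfaces the problem instead:
try:
    svc.change_quantity("A-100", 0)
except ValueError as e:
    print("rejected:", e)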


I think data virtualization (a.k.a., enterprise information integration) is an important technology that should be part of a SOA strategy, but users must adopt it with their eyes wide open.  It's a long-term investment, and it is likely there will always be the need to understand and allow for inconsistencies in the data.

Managing Enterprise Attributes and Events using a New SOA Provisioning and Synchronisation Pattern

Introduction
We normally associate Service Oriented Architecture (SOA) with a very logical framework of architectural concepts, policies and methods that allow the interaction of loosely coupled services.  These services are generally defined as having a granularity that has meaningful business value and consequently a natural re-use within the Business Process Layer of the architecture (Figure 1-C).  For this interaction model to work there must be some shared view of the relevant entity between the services, with encapsulation hiding the messy, lower-level attributes.  The degree of separation of the attribute from the interfaces at the service level is much less than that which exists within a high-level application.  Even so, there is still a major maintenance and interoperability headache in managing entity relationships between applications, hence the rise of ERP systems.  ERP systems have a very high degree of entity encapsulation and attempt to be self-sufficient.  However, not everything can be encapsulated within an ERP system, and there is still a need for corporate business processes to respond to external events. For example, a person's risk profile may have a dramatic impact on the way a business process will behave if this fact is known.  So how does this information get shared within the organisation's IT systems?  This misunderstood interaction between special-cause events and a perceived need for rigid business processes is often the reason many BPM initiatives fail.

 

In essence the solution comes down to being able to:

 


  • Externalise Business Process decision points

  • Support events at all layers within the architecture

  • Synchronise event causality across the architecture

 

Externalise Business Process decision points
As already mentioned, facts can have a profound impact on business outcomes, and the way the architecture has been designed to respond influences the manner in which Business Processes are written and developed.  If the decision points are farmed off into a Business Rules Engine, they are not only infinitely easier to change and manage but are also decoupled from Business Processes, which are more static and less frequently changed.  Through this separation we have achieved two important conceptual outcomes. The first is the ability to dynamically alter business outcomes without modifying the components in the business process. Secondly, we have opened the architecture to respond to events at the BPM layer.
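
A minimal sketch of this separation, with rule names and facts invented purely for illustration: the process logic stays static while the rule it consults at the decision point can be changed independently.

# The business process consults an externalised rule at its decision point.
RULES = {
    # rule name        -> predicate over the facts known about the case
    "requires_review": lambda facts: facts.get("risk_profile") == "high"
                                     or facts.get("order_value", 0) > 50_000,
}

def evaluate(rule_name, facts):
    """Stand-in for a call to an external business rules engine."""
    return RULES[rule_name](facts)

def fulfil_order(facts):
    """The business process: its steps do not change when the rules do."""
    if evaluate("requires_review", facts):     # externalised decision point
        return "routed to manual review"
    return "auto-approved"

print(fulfil_order({"risk_profile": "low", "order_value": 1_200}))    # auto-approved
print(fulfil_order({"risk_profile": "high", "order_value": 1_200}))   # manual review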

 

Support events at all layers within the architecture
Events should occur at all levels within the architecture.  At the BPM level they are responsible for meaningful transitions between business phases.  At a lower level they can be responsible for service invocation, and at an even lower level they are associated with attribute changes, such as role, activity-based privileges, job code, date of birth, telephone number, address and so on.  Events associated with moving between business process phases are normally very deterministic.

 

Attributes are at the lowest level of the conceptual architecture and have a very low level of architectural granularity.  This does not sit well with the philosophy of an SOA where everything is service based.  Indeed, many of these attributes reside as fields within databases and may not be exposed in any explicit form, but they still influence the outcome of a service call.  Exposing these attributes and the events they will cause is the goal of enhanced or advanced SOA, where we are seeing the development of technologies such as Event Based Architectures (EBA) and Complex Event Stream Processing (CEP) that augment the SOA framework. One development pattern being used to support an EBA is the Inversion of Communications pattern, a method of signalling events that reverses the normal approach in which service consumers call services to obtain information.  From the discussion above, it is now clear that services need a way of communicating facts to the enterprise, and this is what the pattern achieves. If we extend this pattern to use synchronisation and the concept of provisioning, there will be a further major enhancement in the ability of SOA to perform event handling.
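
The sketch below illustrates the Inversion of Communications idea in a few lines of Python: rather than consumers calling a service to ask about an attribute, the attribute's owner publishes a change event and subscribers react. The topic name and payload shape are illustrative only.

# A minimal publish/subscribe sketch of the Inversion of Communications pattern.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in subscribers[topic]:
        handler(event)

# Consumers register interest once, instead of polling the producer.
subscribe("person.risk_profile.changed",
          lambda e: print(f"fraud screening re-run for {e['person_id']}"))
subscribe("person.risk_profile.changed",
          lambda e: print(f"credit limit recalculated for {e['person_id']}"))

# The producer (the service owning the attribute) signals the fact outward.
publish("person.risk_profile.changed",
        {"person_id": "P-1001", "old": "low", "new": "high"})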

 

Synchronise event causality across the architecture
To be able to synchronise a change in attribute value, and the consequent events, to all consumers/subscribers affected by the change, we need to introduce the concept of provisioning.  This means that owners of a delegate attribute will need to perform the role of producers in notifying all consumers when an attribute's value changes. The assumption is that consumers (services) will have been provisioned with those policies, facts or attributes that affect a business outcome in which they participate.  This has the added benefit of allowing secondary changes to other attributes to be applied, based on the outcome of applying rules within a given context - all unknown to the producer of the original attribute.  So, consumers of an attribute are provisioned through agreements with producers, and producers are required, through similar agreements, to publish any change in the value of an attribute. Standards now play an important part in the provisioning and synchronisation process.  In Figure 1-D,E the need for standards is generically identified through the term "Canonical Mark-up Language".
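
The following sketch shows how provisioning agreements and synchronisation might fit together; the attribute names, consumers and rule are assumptions made purely for illustration. Note how the rule derives a secondary attribute change that ripples out to its own consumers without the original producer's involvement.

# Provisioning agreements record which consumers receive which attributes.
provisioning_agreements = {          # attribute -> consumers provisioned with it
    "job_code":     ["payroll_service", "access_control_service"],
    "access_level": ["access_control_service"],
}

def apply_rules(attribute, value, context):
    """Stand-in for the rules engine; yields derived (secondary) attribute changes."""
    if attribute == "job_code" and value == "EXEC":
        yield ("access_level", "restricted-areas")

def synchronise(attribute, value, context):
    # Notify every consumer provisioned with this attribute.
    for consumer in provisioning_agreements.get(attribute, []):
        print(f"sync {attribute}={value!r} -> {consumer}")
    # Rules applied in context may trigger secondary changes, unknown to the producer.
    for derived_attr, derived_value in apply_rules(attribute, value, context):
        synchronise(derived_attr, derived_value, context)

# The producer publishes a single change; the secondary change ripples out.
synchronise("job_code", "EXEC", context={"person_id": "P-1001"})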

 

An Example
A real-world example of a standard is the Service Provisioning Mark-up Language (SPML). By adding an SPML gateway to our architecture, a producer is now able to submit SPML modify requests against those attributes that need to be interpreted and synchronised with consumers - this means the enterprise has a single interface, with no LDAP, stored procedures and the like.  The primary purpose of the Provisioning Service Gateway BPEL Service (Figure 1-D) is therefore to provide a standards-based interface and to invoke the BPEL-based event mediator. The mediator will interact with the business rules engine to determine the rules associated with the attribute being modified.  The rule or rule set will return the BPEL Web Service as a Dynamic Link, which will be invoked to perform the appropriate event processing.  The ultimate outcome will be to update the attribute table within the synchronising engine, which will trigger synchronisation to the affected consumers using the very language that was used to pass the initial change event, e.g. SPML.
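
To show how the pieces might hang together end to end, here is a rough sketch of the gateway flow: an SPML-style modify request arrives, the mediator asks the rules engine for the handler (the dynamic link), the handler performs the event processing, the attribute table is updated, and synchronisation is triggered. The XML and every function name here are assumptions for illustration; the request is not claimed to be schema-valid SPML.

# An illustrative SPML-style modify request (not schema-validated).
import xml.etree.ElementTree as ET

REQUEST = """
<modifyRequest>
  <psoID ID="person:P-1001"/>
  <modification name="risk_profile" operation="replace">
    <value>high</value>
  </modification>
</modifyRequest>
"""

attribute_table = {}                                   # synchronising engine's store

def rules_engine(attribute):
    """Return the handler (dynamic link) registered for this attribute -- hypothetical."""
    return {"risk_profile": handle_risk_profile_change}.get(attribute, lambda *a: None)

def handle_risk_profile_change(target, value):
    print(f"event processing: re-evaluate exposure for {target}")

def synchronise(target, attribute, value):
    print(f"sync {attribute}={value!r} for {target} to provisioned consumers (e.g. via SPML)")

def gateway(xml_text):
    root = ET.fromstring(xml_text)
    target = root.find("psoID").get("ID")
    mod = root.find("modification")
    attribute, value = mod.get("name"), mod.find("value").text
    rules_engine(attribute)(target, value)             # mediator invokes the handler
    attribute_table[(target, attribute)] = value       # update the attribute table
    synchronise(target, attribute, value)              # trigger synchronisation

gateway(REQUEST)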

 

Summary
The purpose of this architecture pattern is to achieve a high level of decoupling between all the EBA components, to propagate a common view of all business events, and to allow event processing to remain within the Business Process layer of the architecture.

 

 

 
