The Next Big Thing
Posts about next-generation technologies and their effect on business.

Are legacy systems a drag on your recovery?

In a recent HP survey of 700 IT and application leaders, more than two-thirds said that their company’s IT technology strategy is constrained by their current application portfolio.


This is a case of drowning in one’s own success, since all the applications in the portfolio added value back when they were created. But businesses and technologies change, and the value once generated may no longer exist. There are just too many apps and too little value. After a while it can be difficult to tell whether an application still adds more value than it costs, and even more difficult to “pull the plug” without an external mandate.

A few things have happened in recent years that should have made organizations reach to yank the cord, like the economic downturn. But some of the responses to that downturn, such as shedding “excess personnel,” have left organizations fragile and stuck in “if it ain’t broke, don’t fix it” mode.

Instant-On pressures are developing within organizations as they look at cloud and other flexible business models. The time to “leave it alone” is coming to an end. Interfaces between systems are now an expectation, and master data management approaches will require organizations to look at:

  • the kind of data being held in systems,
  • how the data is being maintained,
  • what decisions are being made from the data, and by whom.
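
One way to make those three questions concrete is to keep an inventory entry per master data element. The sketch below is purely illustrative; the field names and values are hypothetical, not from the survey or any HP product:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class MasterDataEntry:
        """One row of a hypothetical master data inventory, covering the three
        questions above: what data, how it is maintained, who decides with it."""
        entity: str                # the kind of data, e.g. "customer address"
        system_of_record: str      # the one system allowed to update it
        update_process: str        # how the data is maintained
        last_validated: date       # when a steward last checked it
        decisions: dict = field(default_factory=dict)  # decision -> decision maker

    entry = MasterDataEntry(
        entity="customer address",
        system_of_record="CRM",
        update_process="nightly sync from order entry; steward reviews exceptions",
        last_validated=date(2011, 1, 15),
        decisions={"shipping routes": "logistics", "tax jurisdiction": "finance"},
    )

Even a list this simple forces the conversation about which system owns which data.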

The slow recovery is going to require more flexible and integrated systems; after all, technology has not stopped advancing during the downturn. There are new customers, and they have different expectations. There are also new automation techniques available that increase quality while lowering costs.

Technology Strategies Every Enterprise Should Consider

Two of the other HP Fellows contributed to an article about Technology Strategies Every Enterprise Needs. In it they focus on:


  • On-line testing
  • Master Data Management
  • Cloud Computing Security

When I think of these three areas, I am surprised at how often they are overlooked and how little the new opportunities get discussed. In the testing space, most organizations have a fully occupied testing organization and may not realize the extent of testing that needs to occur when moving to the cloud. Even a move to an IaaS service requires performance and functional testing, let alone actually taking advantage of the cloud’s parallel-processing capabilities to perform functions more quickly. Many times the in-house organization will need to supplement its testing capabilities during the transition period. These extra resources allow for higher-quality testing and can help build understanding of the new environment as well.
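
As a small, hypothetical illustration of what a first performance smoke test against a newly migrated IaaS endpoint might look like (the URL and request count are placeholders, not a recommended methodology):

    import concurrent.futures
    import time
    import urllib.request

    # Placeholder endpoint for the application after its move to IaaS.
    ENDPOINT = "https://app.example.com/health"

    def timed_request(_):
        """Return the latency of a single request, in seconds."""
        start = time.monotonic()
        with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start

    # Fire 50 concurrent requests to approximate parallel load.
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        latencies = sorted(pool.map(timed_request, range(50)))

    print(f"median={latencies[len(latencies) // 2]:.3f}s")
    print(f"p95={latencies[int(len(latencies) * 0.95)]:.3f}s")

A real engagement would add functional assertions and sustained load on top of this; the point is that even the simplest version takes time and people the in-house team may not have to spare.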


The MDM space has always been an issue organizations need to address. Having systems of record and ensuring consistency between systems takes unnecessary confusion out of the organization, at a minimum. If an enterprise is moving to higher-level cloud capabilities like Software as a Service or even BPO, this linkage need can easily be overlooked in the planning process. Live links with external systems will be difficult to maintain, but that is the price of access to the SaaS provider’s intellectual property. If it doesn’t look like you can maintain the links, you’ll likely need to rethink your strategy, eventually. This kind of enterprise architecture activity is more important in the cloud than ever.
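
To illustrate what a “live link” actually costs to maintain, here is a deliberately naive reconciliation sketch between an in-house system of record and a SaaS copy. Everything here, the endpoint, the payload shape, and the field names, is hypothetical:

    import json
    import urllib.request

    # Hypothetical SaaS billing API that holds a copy of customer addresses;
    # the in-house CRM remains the system of record.
    SAAS_URL = "https://billing.example.com/api/customers"

    def reconcile_address(customer_id, crm_record):
        """Push the system-of-record address out whenever the SaaS copy drifts."""
        with urllib.request.urlopen(f"{SAAS_URL}/{customer_id}") as resp:
            saas_record = json.load(resp)
        if saas_record.get("address") != crm_record["address"]:
            body = json.dumps({"address": crm_record["address"]}).encode()
            req = urllib.request.Request(
                f"{SAAS_URL}/{customer_id}", data=body, method="PUT",
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

Multiply this by every shared entity, every retry and failure mode, and every vendor API change, and the planning gap becomes obvious.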


Cloud security is the one area that really worries organizations. In many cases that is because they have relied on the physical structure of the compute center to provide a (false) feeling of security. Although security is an important issue for everyone, some industries have their own sets of rules and regulations (e.g., HIPAA, PCI). Understanding those rules and what they are trying to address will strengthen everyone’s security understanding. Security thinking needs to extend to disaster recovery and business continuity as well. Just because a cloud provider has 99.99% availability within its data center doesn’t mean your service has that level of availability end-to-end.
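
The end-to-end point is easy to quantify: availabilities of serial components multiply. With made-up but plausible numbers (these are not from the article), a 99.99% data center behind a 99.9% network path and a 99.5% application tier delivers noticeably less than four nines:

    # Illustrative, made-up component availabilities for one request path.
    data_center = 0.9999
    network = 0.999
    app_tier = 0.995

    end_to_end = data_center * network * app_tier
    print(f"end-to-end availability: {end_to_end:.4%}")                   # ~99.3906%
    print(f"expected downtime: {(1 - end_to_end) * 8760:.0f} hours/year")  # ~53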


The one area that I don’t think is getting adequate coverage in cloud discussions is the need for user-interface consistency. We can’t put a hodgepodge of in-house and vendor-provided interfaces in front of the user community and expect high productivity. There are cases where the cost of consistency is too high to pay, but I rarely hear organizations even plan for it as an issue.


Although cloud activities may have a great deal in common with the current IT environment, numerous active decisions will need to be made; don’t expect a passive approach to cut it.

About the Author(s)
  • Steve Simske is an HP Fellow and Director in the Printing and Content Delivery Lab in Hewlett-Packard Labs, and is the Director and Chief Technologist for the HP Labs Security Printing and Imaging program.
The opinions expressed above are the personal opinions of the authors, not of HP.