The Next Big Thing
Posts about next generation technologies and their effect on business.

Data, the lifeblood of the enterprise

Even though object-oriented techniques and analytics have been around since the last century, today they are being applied and thought about in whole new ways. Technologies are enabling objects to interact with monitoring, analytics, and control systems over a diverse range of networks and on a plethora of devices. Computers are embedded in so many devices that most people rarely think of those devices as computers at all.

 

This more connected and action-oriented approach will expand the reach and impact of information technology systems, affecting business value generation, application expectations, and use cases where IT has not previously been focused effectively.

 

One of the exciting aspects of this intelligent edge approach to the business use of IT is that the software will enable greater control of the physical world, not just the digital one. This means less latency and more efficient use of resources (including human attention). For many, this started in the consumer space and is only now being embraced within business.

 

The importance of this information and its integration into the business means that the focus on security will need to increase, protecting the data as well as the control streams. This flow will become like the blood flow of the human body: if it is interrupted or somehow contaminated, bad things happen.

 

With gamification techniques, this information flow can be used to adjust the behavior of humans as well as machines. How organizations think about and deal with data is already changing.

 

Everyone needs to get comfortable with:

  1. The data sets we’re working with today will look trivial in the relatively near future. Storage technology will continue to get larger and cheaper.
  2. We’ll keep the data longer and continue to generate new value from the data in use today. Data is a corporate asset, and we need to treat it as such.
  3. Data scientists will be in high demand, and business schools will branch into this area in a big way, if they haven’t already.
  4. The conflict between real-time access to information and its security implications will continue to be a concern.
  5. The use of cloud techniques will mean that organizations will need to get comfortable with moving the computing to the data more often than the data to the computing (see the sketch after this list). The pipes are big, but not that big.
  6. The diversity of devices used to access the information, and the locations they are accessed from, will continue to increase. BYOD is not really about the devices.
  7. Master data and metadata management are critical skills for getting the most out of big data efforts. Even if they can’t be synchronized, they need to be understood.
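
A minimal sketch of point 5 in TypeScript, assuming a hypothetical DataStore client for a remote database (the table and column names are invented for illustration):

```typescript
// Hypothetical client interface for a remote data store.
interface DataStore {
  query(sql: string): Promise<Array<Record<string, unknown>>>;
}

async function countFailures(db: DataStore): Promise<number> {
  // Anti-pattern: move the data to the computing -- pull the whole
  // table across the wire, then filter locally.
  //   const rows = await db.query("SELECT * FROM events");
  //   return rows.filter((r) => r["status"] === "FAIL").length;

  // Move the computing to the data: the filter and aggregation run
  // where the rows live, and only a single number crosses the pipe.
  const [row] = await db.query(
    "SELECT COUNT(*) AS n FROM events WHERE status = 'FAIL'"
  );
  return Number(row["n"]);
}
```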

 

We have the computing and bandwidth capabilities; it is only our imagination about how to use them that limits us.

Start thinking about HTTP 2.0 early

One of the changes on the horizon that I’ve not paid much attention to, but that will impact the services space, is HTTP 2.0. Most organizations today are using HTTP 1.1, and since that standard dates back to 1999, it is getting rather long in the tooth.

 

Some of the areas this update tries to address are performance and security. There are efforts underway today (like SPDY) to improve the existing HTTP, and these are only recently being supported by some of the mainstream browsers, but the foundation they defined is being used to move the standard forward.

 

If the standards effort progresses as defined, HTTP 2.0 will be faster, safer, and more efficient than HTTP 1.1. Most of these changes will take place behind the scenes: users will simply upgrade their browsers to get HTTP 2.0 capabilities, and then wait for the servers to provide the improved functionality.

For companies, though, capabilities like server push, which enables HTTP servers to send multiple responses (in parallel) for a single client request, make significant improvements possible.

 

 

“An average page requires dozens of additional assets, such as JavaScript, CSS, and images, and references to all of these assets are embedded in the very HTML that the server is producing!”

 

So for HTTP interfaces, instead of waiting for the client to discover references to the resources it needs, the server could send all of them as soon as it knows they’ll be needed. Server push can eliminate entire round trips of unnecessary network latency. With user interface responsiveness being an important satisfaction criterion, this could be a differentiator for a service, especially if it is tuned to the network bandwidth available.
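
To make the idea concrete, here is a minimal sketch of server push using Node.js’s built-in http2 module; the certificate files, paths, and content are placeholders:

```typescript
import * as http2 from "node:http2";
import * as fs from "node:fs";

// HTTP/2 in browsers requires TLS; the key/cert files are placeholders.
const server = http2.createSecureServer({
  key: fs.readFileSync("server-key.pem"),
  cert: fs.readFileSync("server-cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/") {
    // Push the stylesheet before the client discovers the reference
    // to it in the HTML -- eliminating a full round trip.
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (err) return; // the client may have disabled push
      pushStream.respond({ ":status": 200, "content-type": "text/css" });
      pushStream.end("body { font-family: sans-serif; }");
    });
    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>Hello</h1>');
  } else {
    stream.respond({ ":status": 404 });
    stream.end();
  }
});

server.listen(8443);
```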

 

For businesses, there is a bit of work to do, since porting environments between HTTP servers requires a great deal of testing, even if you have not yet architected a solution around the newer functionality. Microsoft and others have put out new server software so organizations can get their feet wet now, while the standards are still solidifying.

Metrics and Enterprise Architecture

I was talking with some individuals recently about the changing role of architects. Although there are still some organizations focused exclusively on cost cutting (doing more with less), there are also those that view the main role of the architect as strategic: improving agility and IT effectiveness. This has raised the importance of business architecture, which looks at business process effectiveness, not just the underlying technical environment. It also means looking at how the available IT is being applied, and it may even mean doing more with more. Even in this age of cloud techniques, there is more need for Enterprise Architecture than ever.

 

Enterprise architects will need to deal with a new set of goals and measures, since the perception of their value and impact will be changing. Unfortunately, since many of these activities are longer term in nature, they will be harder to measure, and dynamic course correction will have greater latency.

 

The impact of efforts like application portfolio management can be measured easily but takes a long time to become visible. Improved time to market is a bit more nebulous and can take even longer to become apparent. What’s clear is that the metrics need to be defined specifically for the situation; it is not a one-size-fits-all approach. I could be mistaken, so if you think there are standard EA metrics, please let me know.
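
For what it’s worth, here is one illustrative way to make situation-specific EA metrics concrete; every name and number below is invented, not a standard:

```typescript
// Illustrative only: there is no standard EA metric set.
interface EaMetric {
  name: string;
  unit: string;
  baseline: number;        // value when the initiative started
  target: number;          // value the architecture effort commits to
  reviewEveryDays: number; // long-latency efforts still need a cadence
}

const portfolioMetrics: EaMetric[] = [
  { name: "applications in portfolio", unit: "apps", baseline: 420, target: 300, reviewEveryDays: 90 },
  { name: "median time to market",     unit: "days", baseline: 180, target: 120, reviewEveryDays: 90 },
];
```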

What is agility in business and IT really about??

I was flying out to LA to talk about the future of mobility, and while trying to catch up on my magazines I came across an article titled Find Out What Agility Really Means. It starts out saying:

“Business and IT executives think about agility differently. For IT, agility is technology-focused, while non-IT leaders look for broad organization and leadership qualities that are difficult to implement.” and “Business agility is the quality that allows an enterprise to embrace market and operational changes as a matter of routine.”

 

This separation of perspectives is part of what is wrong with how organizations tackle the dynamic nature of business today. IT agility has much less to do with technology than with taking latency out of the enterprise’s response to situations. IT should be able to provide systems of action to make this happen.

 

There are many ways to do this.

What is clear is that for an organization to be agile, time to action has to be an important measure.
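
One hedged sketch of what treating time to action as a first-class measure could look like; the label and the wrapped action are hypothetical:

```typescript
// Wrap whatever a "system of action" does when a situation is detected,
// and record the latency between detection and the completed response.
async function withTimeToAction<T>(
  label: string,
  act: () => Promise<T>
): Promise<T> {
  const detectedAt = Date.now();
  const result = await act();
  console.log(`${label}: time to action ${Date.now() - detectedAt} ms`);
  return result;
}

// Hypothetical usage: respond to a low-stock event.
void withTimeToAction("reorder-stock", async () => {
  // ...call the fulfillment service here...
});
```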

 

I also found it interesting that the intersection of agility and mobility was one of the first topics I ever posted about, since that is what I am about to go talk about.

Solutions of the future – systems of action?

I was in a discussion today where someone asked: “Won’t applications in the future be like the ones that are delivered today?” I had to pause for a moment, since the thought behind the question itself made me second-guess my assumptions.

 

When I think back on the way we wrote applications when I started in this industry, there are definitely some similarities with the solutions of today, but there are also some radical differences. We tried to write very focused, standalone solutions, generally in third-generation languages, on systems constrained by memory, storage, and access to data.

 

The use of automation tools (CASE) to perform the grunt coding has definitely not progressed as quickly as I’d hoped. I personally blame the whole dot-com era for slowing down the industry’s progress on that one, since the focus moved back to hand-crafted third-generation coding techniques, probably because most of those tools are free.

 

On the other hand, the depth and breadth of application capabilities, and the integration required for Internet-based applications to add value, are very different from the applications created previously, to say nothing of the diversity of coding languages and techniques applied to meet the changing environment where the enterprise and consumers live.

 

As I think about the future and the scarcity that shaped how applications were written, it seems that changes are in the offing. Data, computational power, and access to storage are virtually unlimited compared to the past. With the analytical capabilities available, we can recognize the intent or likelihood that a decision is being made and act upon it, rather than just validating the decision after the fact. This by itself will shift how we think of value generation. Our expectations can shift to a negative response time (acting before the request even arrives) rather than a goal of near-zero response time to events.
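
As a sketch of that idea (the signal shape and threshold are hypothetical, not a recommendation):

```typescript
// A "system of action" in miniature: score the likelihood that a
// request is coming and act on it before it arrives.
interface IntentSignal {
  user: string;
  predictedRequest: string;
  likelihood: number; // 0..1, from whatever analytics are available
}

function onSignal(
  signal: IntentSignal,
  act: (request: string) => void
): void {
  const THRESHOLD = 0.8; // illustrative cut-off only
  if (signal.likelihood >= THRESHOLD) {
    // Acting ahead of the request is what makes the "negative
    // response time" above possible.
    act(signal.predictedRequest);
  }
}
```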

Another issue is that “applications” are turning into aggregations of functionality. The standalone application that has generated value for so long is likely dead. Today we pull data from one spot, process it with services from another, and display it with yet another set of services. This is very different from the applications of the past.
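
A minimal sketch of such an aggregation, with hypothetical example.com endpoints standing in for the three sets of services:

```typescript
// Data from one spot, processing from another, display from a third.
async function renderDashboard(): Promise<string> {
  const raw = await fetch("https://data.example.com/orders")
    .then((r) => r.json());

  const enriched = await fetch("https://analytics.example.com/score", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(raw),
  }).then((r) => r.json());

  const page = await fetch("https://render.example.com/chart", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(enriched),
  }).then((r) => r.text());

  return page; // the "application" is just the aggregation
}
```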

Having said that, new devices (like smartphones) have revitalized the small standalone application. Even if each one doesn’t generate much value, there are many mobile applications, and they are personalized, so they don’t really target the masses in the same way. This approach to application development and design is actually closer to the way applications were developed in the past than to the integrated, higher-volume solutions used in the enterprise (e.g., ERP and the various mobile enterprise approaches).

 

There has been quite a bit of discussion about systems of record and systems of engagement on this blog and many others, but with more intelligent software and automation techniques, a whole new range of “systems of action” is developing for the enterprise, with the goal of taking latency out of the enterprise response and focusing people where their skills are really needed. Do you see this shift too? How do you see the market or development efforts responding??
