The Next Big Thing
Posts about next-generation technologies and their effect on business.

The need for automated environmental validation in IT

I was recently reading the post When disaster strikes: How IT process automation helps you recover fast, and it got me thinking about the need for automated environmental validation. Recovering fast may not be good enough if the recovery destination environment has changed.


In the software space, you can use JUnit or NUnit to codify the limits of the code and make sure it works, and fails, as defined. This can be a very useful component of a test-first unit testing approach.


I was wondering if infrastructure automation efforts should include a similar capability, so that we can automatically test an environment to ensure its characteristics are up to snuff before and after we make a change. These tests could be run periodically or as part of a promotion-to-production process.
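To make the idea concrete, here is a minimal sketch of what such checks might look like as ordinary JUnit tests, in the spirit of the JUnit/NUnit approach above. The host names, port, path, and disk threshold are hypothetical placeholders, not a recommended baseline:

    import static org.junit.Assert.assertTrue;

    import java.io.File;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    import org.junit.Test;

    // Environment validation expressed as plain JUnit tests, so the same
    // tooling that gates code promotion can gate environment promotion.
    public class EnvironmentValidationTest {

        @Test
        public void databaseHostResolvesAndResponds() throws Exception {
            // Fails if DNS in the target environment has drifted.
            // (isReachable may need ICMP/echo permitted to succeed.)
            assertTrue(InetAddress.getByName("db.example.internal").isReachable(2000));
        }

        @Test
        public void appServicePortIsOpen() throws Exception {
            // Verifies a dependent service is listening where we expect it.
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("app.example.internal", 8080), 2000);
            }
        }

        @Test
        public void enoughDiskForRecovery() {
            // Recovery destinations drift; codify the minimum we need (10 GB here).
            long usableBytes = new File("/var/data").getUsableSpace();
            assertTrue("insufficient disk space", usableBytes > 10L * 1024 * 1024 * 1024);
        }
    }

Run periodically or wired into the promotion pipeline, a failing test becomes the automated signal that the environment is no longer what the application was validated against.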


Automating this validation would remove the human element from this tedious step. It seems like this would be a useful and possibly necessary step for cloud deployments, since those environments are dynamic and beyond the scope (and understanding) of the person who wrote the original programs. Maybe this is commonly done, but I've not talked to many who attack this issue proactively.

IoT standards war begins

I seem to have done quite a number of blog posts related to the Internet of Things in the last month. I just noticed that there have been numerous announcements about standards efforts, which may have spurred me on.


There are a number of them, but the three I’ve seen the most about are:

  • The AllSeen Alliance, which supports the open source project AllJoyn, providing “a universal software framework and core set of system services that enable interoperability among connected products and software applications across manufacturers to create dynamic proximal networks.”
  • The Open Interconnect Consortium, with “the goal of defining the connectivity requirements and ensuring interoperability of the billions of devices that will make up the emerging Internet of Things.”
  • And Google (not to be left out) has defined Thread, whose goal is “to create the very best way to connect and control products in the home.” Thread devices all run over IEEE 802.15.4.

The IEEE has its own set of IoT standards efforts, but those haven’t been getting the press that the recently announced efforts above have.


It is clear that IoT needs standards, but if the effort is too fragmented there will effectively be no standard at all.


Hopefully this will shake out soon, since standards will enable the services and software that actually provide the value for the end consumer.


Other views about starting small but thinking big

Last week, I did a post titled Start Small but think big, when transforming. Fairly quickly, I got a note from Erik van Busschbach of HP Software saying he’d made some similar statements related to cloud adoption. In fact, he even had a video about his perspective.


[Image: Think big, start small]


Next week at HP Discover, I hope to track down Erik (who is the Chief Technologist, Worldwide Strategy & Solutions for HP Software) and talk about the nuances of our perspectives. He also wrote a post on an HP Software blog, Why the IT Value Chain is your blueprint for strategically regaining control of IT, that also contains the start small but think big concept.


Even if we’re coming at the problem from different perspectives, the fact that much of what we’re talking about ends up in the same place is reaffirming.

Will the Internet of Things lead to passive oversharing?


Last week there was a Twitter chat by CIO magazine and the Enterprise CIO Forum on ‘the Internet of Things and the effect on the CIO’. During this discussion someone asked, “Are there security issues (particularly for the consumer)?” Everyone can probably agree that there are significant concerns that people need to be aware of as they strap on more and more devices.


One of these concerns relates to a story from a few years back, when there was quite a bit of discussion about Super Cookies. This technique uniquely identified computers by their software versions, installed software… the kind of thing that can be gathered via JavaScript. Nothing had to be stored on the computer itself, as with a normal cookie.


A similar technique can be applied to uniquely identify a consumer: what devices are they carrying? Essentially, it is tracking people by the emissions they emanate or consume. Like the Super Cookie, this technique can track and record user behavior across multiple sites. Devices like cell phones are always transmitting “here I am” information, and Bluetooth and Wi-Fi radios can also be set to respond to external probes.
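As an illustration of how little is needed, here is a sketch of how such a passive fingerprint could be assembled from whatever identifiers a person’s devices broadcast. The identifiers below are made up, and a real tracker would collect them from radio probes rather than hard-coded strings:

    import java.security.MessageDigest;
    import java.util.Set;
    import java.util.TreeSet;

    // Illustrative only: hashes the set of device identifiers observed near a
    // person into a stable identifier, much like a Super Cookie for the body.
    public class PassiveFingerprint {

        public static String fingerprint(Set<String> observedDeviceIds) throws Exception {
            // Sort the identifiers so the same set of devices always hashes the same.
            StringBuilder combined = new StringBuilder();
            for (String id : new TreeSet<String>(observedDeviceIds)) {
                combined.append(id).append('|');
            }
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(combined.toString().getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical Wi-Fi and Bluetooth identifiers seen as someone walks by.
            Set<String> seen = new TreeSet<String>();
            seen.add("wifi:3c:a9:f4:00:11:22");
            seen.add("bt:fitness-band-7731");
            System.out.println(fingerprint(seen));
        }
    }

No cooperation from the person is required; the devices volunteer everything the hash needs.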


Once you can track individuals’ movements and interests, you can use that to predict future behavior and act upon it, much like what was demonstrated by the site pleaserobme.com, which used individuals’ social site activity to understand when they were away from home. In this case, though, the concern is passive oversharing by our IoT devices. Right now people view this as just a retail experience enabler, so they are not freaking out.


But this passive surveillance is one area that will likely be scrutinized very closely in the coming years. Those who create devices need to be very aware of what is shared, and they should use as many of the available security capabilities as possible to keep passive sharing to a minimum.


It is not just about recognizing people who come into a retail area. Those of us who own devices need to be aware of what they emit, when they emit it, and what controls are available to limit them. If it is possible to drive down a street and know which houses are occupied and which are not just from their IoT emissions, there are definitely people who will take advantage.


Rethinking future services and the application portfolio

Areas changing within business and IT include the movement away from dedicated hardware for applications, as well as away from the concept of dedicated applications themselves. For these changes to be truly successful, a number of factors need to be addressed.


Today there is a wealth of software providers supplying intellectual property to address business problems (e.g., ERP solutions). Although some support more flexible access methods (e.g., SaaS), they are still rigid in what they make available to the business itself. The problems are viewed as IT problems, not as what the business needs. For these service providers to address the specific needs of an organization, greater service integration flexibility is required. This allows for real integration of business processes, meeting the business’s unique needs. The IT that supports those business processes may come from many different sources.


This flexibility will require greater data transport capabilities and analytics, turning generic processing into business differentiation. This movement of data outside the control of a service provider is the bane of most as-a-service solutions, yet when you think about it, whose data is it?


To meet the needs of system users, greater platform-independent support is required. This will allow the integration of generic business processes into a context-specific solution that the various business roles can use to make better business decisions. Since the mobile interface is the enterprise interface going forward, placing information in the context of the user, on the device the user is actually using, is critical. And where the response is well understood, the solution should facilitate the systems of action needed to predict and respond to business events.


This also means that custom application configuration capabilities will be critical. Rather than having programmers handcraft new behaviors into the system in a third-generation language, standards and tools for customization will be required. Application configuration capabilities will improve time to market and reduce maintenance costs, relying on business-oriented graphical modeling to aggregate functionality from across the portfolio of capabilities. Social capabilities and gamification support will be built into these customization capabilities. This mass-customized, contextual portfolio approach is the antithesis of what leveraged service providers enable today.
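A minimal sketch of the difference, assuming a hypothetical pricing-rules.properties file: the discount rule below lives in configuration that a business-oriented tool could edit, rather than being handcrafted in code:

    import java.io.FileReader;
    import java.util.Properties;

    // Configuration-driven behavior: the rule is data, not handcrafted code.
    // The file name and keys (discount.threshold, discount.rate) are made up.
    public class ConfiguredPricing {

        public static void main(String[] args) throws Exception {
            Properties rules = new Properties();
            rules.load(new FileReader("pricing-rules.properties"));

            // e.g., discount.threshold=1000 and discount.rate=0.05
            double threshold = Double.parseDouble(rules.getProperty("discount.threshold"));
            double rate = Double.parseDouble(rules.getProperty("discount.rate"));

            double orderTotal = 1250.00;
            double price = orderTotal >= threshold ? orderTotal * (1 - rate) : orderTotal;
            System.out.println("Price after configured rules: " + price);
        }
    }

Changing the threshold becomes an edit to a file (or a click in a modeling tool), not a development cycle.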


One of the biggest detriments (at least from my perspective) of the dot-com era was the set of views that everyone can code; that they can do so in a third-generation language like Java (or JavaScript, for that matter); and that coders actually understand user interface design, business process automation design, and security. I don’t think we can afford to put up with these views any longer. The changes in how computing works and is delivered, as well as the complex possibilities enabled by the abundance of IT capabilities, don’t allow it. There has been work over the years to leverage experts and hide complexity, yet most organizations take advantage of very little of it. It’s time that we move on.
