The Next Big Thing
Posts about next generation technologies and their effect on business.

Is it time for a Chief Automation Officer?

Over the last few years, there has been quite a bit of discussion about the race against the machines (or the race with the machines), based on the abundance of computing available. When I think about the IoT and its implications for business, it may be that information is just a side effect of an entirely different corporate strategic effort.

 

Maybe there is more need for a Chief Automation Officer than a Chief Information Officer going forward: someone who looks at the business implications of and opportunities for cognitive computing, sensing, robotics and other automation techniques.

 

Or is automation just assumed to be part of all future strategic planning activities? As I began thinking about it, it became clear that others have thought about this CAO role as well, although mostly from an IT perspective instead of one based on business need. It could also be viewed as a role for the CTO or even the enterprise architect.

Where did the IoT come from?

I was talking with some folks about the Internet of Things the other day and they showed me some analysis that made the IoT look like a relatively recent development.

 


 

My view is that its foundations go back a long way. I worked on Supervisory Control and Data Acquisition (SCADA) systems back in the 80s, which were gathering data off the factory floor, analyzing it and performing predictive analytics, even way back then.


In the 70s, passive RFID came into being, and one of the first places it was used was tracking cows for the Department of Agriculture to ensure they were given the right dosage of medicine and hormones, since cows couldn't speak for themselves.

 

In the late 70s and early 80s, barcodes became widely used to identify objects, allowing greater tracking of manufacturing lines as well as consumer purchases in stores.

 

In the 90s, higher speeds and greater range allowed toll tags to be placed on cars, making identification easier, but there was still very little use of sensors to collect additional information.

 

At the turn of the century, the military and Walmart required the use of RFID to track products, and that caused a significant increase in adoption. About the same time, low-powered sensing capabilities were developed. Since RFID only provided identification and the scanner provided location, people began to look at other information that could be collected, like temperature and humidity, as well as ways to gather information remotely, like smart metering in the utilities space (although even that started much earlier).

 

Most technology adoption follows an S curve for investment and value generation. We're just now entering the steep part of that curve, where the real business models and excitement are generated. The IoT is not really all that new; it is just that the capabilities have caught up with demand, and that is making us think about everything differently (and proactively).
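
As a side note for the more quantitative readers, the S curve is usually modeled as a logistic function. Here is a minimal sketch (the midpoint year and steepness are just illustrative numbers I picked) showing how adoption stays nearly flat for years and then climbs quickly once the inflection point is reached:

    import math

    def adoption(year, midpoint=2015, steepness=0.6, ceiling=1.0):
        """Logistic S curve: cumulative adoption as a fraction of the ceiling.
        midpoint is the inflection year; steepness controls how fast the
        steep part of the curve rises. All parameters are illustrative."""
        return ceiling / (1 + math.exp(-steepness * (year - midpoint)))

    # Years of slow build-up, then the steep middle of the S curve.
    for year in range(1995, 2031, 5):
        print(year, round(adoption(year), 3))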

The shifting world of business continuity

I was in an exchange this week with an individual talking about business continuity. The view that emerged was this:

An organization's business continuity approach needs to be reassessed in a world of high levels of automation, contracting for services and reduced latency. The very definitions of foundational terms like 'work location', 'services' and 'support' are changing. Diversity of perspective is likely to be a critical component of any kind of timely, situational response.

 

“The management of business continuity falls largely within the sphere of risk management, with some cross-over into related fields such as governance, information security and compliance. Risk is a core consideration since business continuity is primarily concerned with those business functions, operations, supplies, systems, relationships etc. that are critically important to achieve the organization's operational objectives. Business Impact Analysis is the generally accepted risk management term for the process of determining the relative importance or criticality of those elements, and in turn drives the priorities, planning, preparations and other business continuity management activities.”
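
To make the business impact analysis idea concrete, here is a minimal sketch of how criticality scoring might drive recovery priorities. The functions, downtime tolerances and costs are hypothetical numbers made up for illustration:

    # Hypothetical business impact analysis: rank functions by how quickly an
    # outage starts to hurt and how much it costs per hour of downtime.
    functions = [
        # (name, max tolerable downtime in hours, estimated cost per hour of outage)
        ("Order processing",         2,  50_000),
        ("Payroll",                 72,  10_000),
        ("Customer support portal",  8,  15_000),
        ("Internal wiki",          168,     500),
    ]

    def criticality(max_downtime_hours, cost_per_hour):
        """Simple illustrative score: expensive outages with little tolerance
        for downtime rise to the top of the recovery priority list."""
        return cost_per_hour / max_downtime_hours

    for name, mtd, cost in sorted(functions, key=lambda f: -criticality(f[1], f[2])):
        print(f"{name:25s} priority score: {criticality(mtd, cost):10.0f}")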

 

In today’s environment, business impact analysis is becoming ever more technical, and the interconnections between environmental factors more complex. We have seen recent situations in program trading where an entire financial institution was placed at risk when its automated trading responded in an unforeseen fashion or its governance broke down. We’ll be seeing similar techniques applied throughout organizational processes.

 

The response to almost any situation can be enabled by techniques like VOIP and other approaches that allow additional levels of abstraction. Simulations can be used to understand the implications of various scenarios as part of business continuity planning.
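
As a rough illustration of the simulation point, the sketch below runs a toy Monte Carlo over a few disruption scenarios to estimate annual downtime cost. The scenarios, probabilities and costs are invented for the example; the point is the kind of what-if exploration that feeds continuity planning:

    import random

    # Invented disruption scenarios: (name, probability per year, outage hours as (low, high))
    scenarios = [
        ("Data center power loss",  0.05, (4, 48)),
        ("Key supplier failure",    0.10, (24, 240)),
        ("Regional network outage", 0.15, (1, 12)),
    ]

    COST_PER_HOUR = 20_000   # illustrative cost of an hour of downtime
    TRIALS = 100_000

    def simulate_year():
        """One simulated year: each scenario may or may not occur, and if it
        does, we draw an outage duration from its range."""
        hours = 0.0
        for _, probability, (low, high) in scenarios:
            if random.random() < probability:
                hours += random.uniform(low, high)
        return hours * COST_PER_HOUR

    costs = [simulate_year() for _ in range(TRIALS)]
    print("Expected annual downtime cost:", round(sum(costs) / TRIALS))
    print("Worst case in this run:", round(max(costs)))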

 

As I mentioned back in March:

Having an effective, robust approach to business continuity is part of management, security and many other roles within an organization.  It is important to remember that there is a cost for being unable to respond to an incident.

Metrics usage in an agile approach

A couple of months ago, I did a post on the supply and demand issues of governance, including issues that cause organizations to be blindsided by events.

 

Lately, I’ve been thinking about this a bit more, but from the metrics side: defining and collecting the leading and lagging indicators of change associated with governance. There is quite a bit of material on this concept, though most definitions of leading indicators focus on economic indicators. The concepts for business processes are similar.

 

Leading indicators show progress; lagging indicators confirm completion (examples of this perspective made me dig up a post I did in 2009 about measuring cloud adoption). Most organizations’ processes only have lagging indicators: metrics that identify that we’ve hit milestones. This can allow efforts to get fairly far down a path before course corrections can be made. More predictive approaches are possible, and they are needed to adapt to this changing approach to business.
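
A simple way to see the difference: in the sketch below, the lagging indicator only tells you whether a milestone has been hit, while the leading indicator (the recent rate of progress) flags trouble while there is still time to course correct. The data and thresholds are invented for the example:

    # Hypothetical weekly progress data for a project (percent of scope completed).
    weekly_progress = [5, 12, 20, 24, 26, 27, 28]   # invented numbers
    MILESTONE = 50    # lagging indicator: have we hit 50% of scope?
    MIN_VELOCITY = 5  # leading indicator: minimum acceptable progress per week

    def lagging_indicator(progress):
        """Confirms completion after the fact: milestone reached or not."""
        return progress[-1] >= MILESTONE

    def leading_indicator(progress):
        """Signals trouble early: is the recent rate of progress falling
        below what is needed to stay on track?"""
        recent_velocity = progress[-1] - progress[-2]
        return recent_velocity >= MIN_VELOCITY

    print("Milestone reached (lagging):", lagging_indicator(weekly_progress))
    print("On pace (leading):", leading_indicator(weekly_progress))

With the numbers above, the lagging check just says the milestone hasn't been reached yet, while the leading check shows that progress has effectively stalled, which is the earlier signal you'd want for course correction.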

 

When I look at applying gamification, I usually come up with numerous leading indicators since gamification is about influencing the work in progress. When approaching change, look for items that show improvement or change and not just validation of achievement.

IoT and IT’s ability to foresee unintended consequences

I was recently talking with someone about an Internet of Things study that is coming out, and it really made me wonder. HP has been doing work in the IoT for decades and gets relatively little credit for the effort. In fact, where I started work back in the 80s was writing statistical analysis tools for plant floor (SCADA) data collection – essentially the high-value, big data space of its time, back when a 1 MIPS minicomputer was a high $$ investment.

 

The issues we deal with today are a far cry from that era; now we’re as likely to analyze data in the field about wellhead performance or robotics, but many of the fundamentals remain the same. I’ve mentioned the issue of passive oversharing in the past, and addressing that issue needs to be at the foundation of today’s IoT efforts, along with value optimization.

 

I was in a discussion about vehicle-to-vehicle communications requirements a few months back, and the whole issue of privacy looms much larger than the first thoughts about preventing accidents. I think everyone would agree that having the affected vehicles put on the brakes is a good idea. Should stop lights recognize bad behavior and visually send a signal to other drivers? There is a wide range of innovations possible with a network like this.
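
To see why privacy looms so large, consider what even a minimal vehicle-to-vehicle brake alert would need to carry. The message layout below is purely hypothetical (not any actual V2V standard), but it shows how much identifying information rides along with the safety payload:

    from dataclasses import dataclass, asdict
    import json, time

    @dataclass
    class BrakeAlert:
        """Hypothetical vehicle-to-vehicle hard-braking broadcast.
        Even the 'safety' fields double as a record of who was where,
        when, and how fast they were going."""
        vehicle_id: str        # persistent identifier -- the main privacy concern
        latitude: float
        longitude: float
        speed_kph: float
        deceleration_g: float
        timestamp: float

    alert = BrakeAlert("VIN-1234567", 30.2672, -97.7431, 112.0, 0.8, time.time())
    print(json.dumps(asdict(alert), indent=2))   # what nearby cars (and anyone listening) would see

Every field that makes the safety use case work (who, where, how fast, when) is exactly the data that raises the questions below.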

 

There are also negative possibilities to be considered:

  • Is passing this driver performance information along to the police a good idea? What about insurance companies?
  • What about the simple fact that your car knows it is speeding? Is that something that others should know?
  • Or what about the information on where you’re driving, now that your car is sharing it with other cars and infrastructure (cell phones already do this, by the way)?
  • What if a driver can ‘socially engineer’ the limits of the system to maximize performance for themselves? An example might be pushing the system so that yellow lights stay yellow a bit longer because you’re accelerating into the intersection – is that OK?

Some unintended consequences are going to happen. We should be able to see many of them coming, if we think creatively. IT organizations will need to develop their skills at assessing implications, for social as well as business effects. The IT team should have a better comprehension of the analysis and data sharing that has happened elsewhere, and its implications, regardless of the business or industry, and be able to advise accordingly. They need to reach out early and often.
