I've been asked to prepare a presentation on innovation in the high-tech industry and spent the last couple of days doing some research on the subject. What I found was so fascinating that I decided to share it with you in a couple of blog entries. So, let's start today with the fundamental question: what is innovation? And let me start with a quote from a French writer, Marcel Proust, who wrote: "The real act of discovery consists not in finding new lands, but in seeing with new eyes". That really made me think. I believe we often confuse innovation with invention.
Indeed, innovation and invention do not have the same meaning. One definition of innovation I found a long time ago goes like this: "New ways of leveraging existing ideas, technologies, processes etc. to create value". This includes two important elements: first, innovation may consist of the new use of existing ideas, so not everything needs to be new; and second, innovation is related to the creation of value, in other words to the notion of doing business. Invention, on the other hand, is: "The creation of new ideas which may or may not prove to have value." So, invention inherently includes the concept of coming up with something new. Whether that something new is valuable or not remains to be seen. Obviously, invention and innovation can go hand in hand. When Edison developed the light bulb, not only did he make a great invention, but he did it in such a way that it became economically viable. He innovated through the creation of an entire integrated electric system of which the light bulb was one component.
Enterprises today can innovate in a variety of spaces. The best-known one is obviously product/services innovation, but it is not the only one. I count five spaces I'd like to discuss briefly here:
- Business Innovation: finding new business models, new processes to reach the customer or consumer, new efficiencies in the approach taken, new channels of distribution etc. This type of innovation is often left to the line-of-business managers. Marketing will innovate in how to approach customers and make the brand visible and desirable. Services will find new ways to help customers and resolve potential problems during and after warranty. And I could go on like this.
- Ecosystem Innovation: finding new approaches to use the strength of the supply chain and distribution partners to delight the customer. This includes supply chain integration, visibility, exchange of information across the ecosystem, improvements in operations etc. It is all about tying the enterprise more closely to its partners upstream and downstream. Functions such as Supply Chain and Distribution focus on this type of innovation.
- IT Innovation: finding new ways for IT to better support the business. This can range from the development of a private cloud to absorb changes in workload throughout the year, to the development of platforms that support interactions with suppliers, distribution partners and customers.
- Product/Services Innovation: I put products and services together as the lines between the two are increasingly blurring. How can new technologies improve the experience of an existing product? Can the integration of the product with some services deliver a new user experience? Can new things be invented by the creative combination of existing products/services with new technologies, channels, approaches etc.? In a world where new things appear all the time, where software takes an increasing role in products and where the internet becomes ubiquitous, many new things can be dreamed up. This is the role of business units in general and the product development team in particular.
- And then there is a last type of innovation that supports all of the above: Technology Innovation. Indeed, identifying new technologies and making them available to improve the business approach, the ecosystem integration, IT or product/service combinations is critical for companies that want to stay at the forefront of innovation. This is often how inventions enter the innovation cycle. Having a technology watchdog that reviews what's new out there is critical. Is this the reason why the role of CTO (Chief Technology Officer) is becoming so popular?
The key thing for enterprises is to choose their bets carefully. This requires curiosity and creativity: curiosity to seek out new ideas that might be of interest, and creativity to see how those could best be used in the context of the enterprise. In a future entry, I will take the example of the electronics industry, look at key trends and how such trends could lead us to innovation as described above, so stay tuned.
Many things have been said about the HP IT Transformation and how we turned the business upside down; Randy Mott even described the three big mistakes made during this journey, and HP has been said to lead IT transformation by example. And I could go on like this. However, while the transformation of corporate IT receives a lot of the limelight, there is another far-reaching transformation ongoing, and it concerns R&D IT.
Like in many companies, our R&D engineers used to have their servers under their desks, managed by themselves and completely outside the control of the central IT department. When you started a new project, you found the hardware, decided what software you would use and went to work. This resulted in a limited ability, and little incentive, to leverage work from team to team and from project to project. If by chance you used the same software, you would have different versions, or at least heavy customization, making it difficult to exchange information. We discovered this had the following implications:
- Engineers were losing a fair amount of time addressing IT issues rather than performing R&D (estimated to represent up to 20% of their time)
- The lack of information sharing reduced the re-use of components and subassemblies
- License costs were extremely high as software was acquired by the engineers themselves for the purpose of each project
We decided to take advantage of the data center transformation performed for corporate IT to transfer the running and management of the R&D IT infrastructure to IT, while standardizing the tools. The objective was to have the engineers spend 100% of their time doing research and development, to drastically increase re-use and to reduce license costs.
This was obviously not an easy journey. At the start we got a lot of pushback from the engineers themselves; they felt they would lose control. We wanted them to be very creative, but at the same time we were forcing them to use specific tools, and specific processes too, as we realized we had to redesign and standardize processes along the way.
Learning from the three big mistakes made in the IT transformation, we asked the engineers to register their applications at the start of the journey. A number of them did; others, as was to be expected, did not. When we sunset the first applications and migrated them into the data center, a number of homemade interfaces suddenly stopped working, resulting in furious calls to the IT department. Well, guess what: these were interfaces to applications that had not been registered. Once it became clear this would keep happening, the engineers participated much more readily.
Beyond the hiccups, it was the close collaboration between teams of engineers and IT that allowed the definition of standards for the organization. The combined team made decisions as to which applications would be used moving forward.
One of the issues the team ran into was response latency. Indeed, HP's three double data-centers are all in the US, while our R&D departments are scattered all over the world. Particularly in software and PCB design, having all applications running in the data-centers proved to be too slow. So caching servers were established around the world. The principle of having the master data residing in the data-centers remained, but the applications ended up being run out of remotely managed caching servers, allowing centralized management/backup etc. while providing the engineers with satisfactory response times.
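The caching-server setup described above follows the classic read-through caching pattern: reads are served from a regional server close to the engineers, while the master data remains in the central data centers. Here is a minimal sketch of that pattern; the class, names and the use of plain dicts as stores are my own illustrative assumptions, not HP's actual implementation:

```python
# Illustrative read-through cache: a regional caching server serves reads
# locally and falls back to the central (master) store on a miss.
# Both stores are modeled as plain dicts for the sketch.

class ReadThroughCache:
    def __init__(self, master):
        self.master = master   # central data-center store, the source of truth
        self.local = {}        # regional cache close to the engineers

    def read(self, key):
        # Serve from the regional cache when possible (low latency)
        if key in self.local:
            return self.local[key]
        # Cache miss: fetch from the master and keep a local copy
        value = self.master[key]
        self.local[key] = value
        return value

    def write(self, key, value):
        # Writes go to the master first, then update the local copy,
        # so the data centers remain the single source of truth.
        self.master[key] = value
        self.local[key] = value

master_store = {"design_v1": "PCB layout A"}
cache = ReadThroughCache(master_store)
print(cache.read("design_v1"))   # first read is fetched from the master
cache.write("design_v2", "PCB layout B")
print(master_store["design_v2"]) # the write landed in the master store
```

The design choice is the one described in the text: engineers get local response times for repeated reads, while centralized management and backup still apply because the master copy never leaves the data centers.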
IT got standardized, common tools, reducing complexity and costs and enabling common processes. Addressing the communication issues removed the need for "shadow IT". The engineers no longer needed to spend time managing their IT environment, freeing up 10-20% of their time and improving the productivity of the groups. Standardization of tools and processes helps focus innovation on the customer.
HP announced its quarterly earnings this Wednesday. During the earnings call, Mark Hurd was asked why we are spending less on R&D in 2010 than in 2000, when we were a company half the size. His answer included the following: "...we're in a mode to look for processes that we can standardize. Simple things like testing, QA, how many development tools we've got. All of these have been, because of acquisitions, very random and very unique, and very, if you will, siloed.
So our ability to get standardized on those processes gives us an opportunity to take out cost. And the way we look at R&D is we don't look to Cathie's point, we look at R&D in the context of overhead, maintenance and innovation. What we are trying to do is get the innovation dollars up. So when you look at the total R&D spend, it's down, and yet the yield to the product road map is up. So, when you look at the number of introductions of products that have had meaningful change in share positions, that number is actually up for us. And that's what we want to measure. We want to get more innovative products into the market sooner."
Even if the R&D IT transformation was not the only reason, it certainly contributed to achieving the objectives described by Mark. What about you? Do you recognize yourself in what I described at the start of this entry? We are happy to share our experience, so don't hesitate to contact us.
A couple of days ago, I received a call from a friend asking me a very simple question: "What criteria should I use to decide which applications I should move to the cloud?" Not having a direct answer myself, I searched the web, and to my amazement, I could not find a clear and simple answer to the question. So, we are all making a lot of noise about cloud computing, but seem unable to articulate simply what we should put in the cloud. I started approaching things from the opposite direction and remembered that a cloud environment typically consists of a large number of small standard servers that are virtualized and typically run multi-tenant applications. This got me going, as it brought the criteria back to a simple series of questions:
- Does the application benefit from a parallelized environment?
- Is the application suitable to run in a virtual environment?
- Is the application written using a multi-tenant approach?
Knowing that cloud environments consist of large datacenters of low-cost x86 servers running in a virtualized environment, I believe the following criteria should at least be taken into account:
- No direct interaction between the application and the hardware (otherwise the application cannot run in a virtual environment)
- Many tasks can be performed in parallel and each task can run at satisfactory speed on the type of server used by cloud service providers (standardized x86 servers)
- Applications that are performance-intensive and require high processing loads, heavy I/O or large amounts of memory are better suited to a scale-up approach on a dedicated server.
Besides those key criteria, other aspects to take into account include:
- If the application is used by many users, a multi-tenant approach will reduce the footprint required, as it allows running multiple users on the same code base. In this case, one has to ensure that any customization of the application can happen outside of the common code base.
- In multi-tenant applications, data can be stored in separate databases or in a shared database environment (with separate or shared schemas). The fundamental question to keep in mind here is the criticality of the data and whether the proposed security levels protect the data satisfactorily.
- How much data needs to be transferred to and from the application? If the volumes are large, calculate the true bandwidth available and assess both the cost and the time needed to get the data in the right place.
- And then, there are the security aspects that need to be taken into account. Security in the cloud is addressed differently than for traditional applications, meaning an application rewrite might be necessary.
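The questions above can be condensed into a simple triage sketch. The criterion names, their phrasing as booleans and the messages are my own illustrative assumptions, not a formal methodology:

```python
# Illustrative cloud-suitability triage based on the criteria discussed above.
# Each answer is a plain boolean; the function returns the blocking or
# cautionary issues found, so an empty list means "good first candidate".

def cloud_triage(app):
    issues = []
    if app["talks_directly_to_hardware"]:
        issues.append("direct hardware interaction: cannot run virtualized")
    if not app["parallelizable"]:
        issues.append("scale-up workload: better on a dedicated server")
    if app["highly_confidential_data"]:
        issues.append("confidential data: verify provider security levels first")
    if app["large_data_transfers"]:
        issues.append("large transfers: check bandwidth, cost and load time")
    return issues

# A non-business-critical app with modest data needs: a good first candidate
candidate = {
    "talks_directly_to_hardware": False,
    "parallelizable": True,
    "highly_confidential_data": False,
    "large_data_transfers": False,
}
print(cloud_triage(candidate))  # → [] (no blocking issues)
```

A real assessment would of course weigh these criteria rather than treat them as simple yes/no gates, but even this crude checklist makes the decision discussable.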
All this being said, the basic question remains: which application should we move to the cloud? I would suggest starting with a non-business-critical application that does not use highly confidential data. To take full advantage of the cloud, the application should be architected with SOA principles in mind, and, depending on the number of users, a multi-tenant approach may be taken.
The benefits of moving an application to the cloud include, besides the cost aspects, the capability to respond quickly to varying demand, using the elasticity available in the cloud. According to the cloud definition proposed by NIST, elasticity means:
"Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time."
The drawbacks include the lower level of control available, as the environment is managed by an independent third party and the ecosystem used to provision the service is not transparent. Cloud-based services have availability levels that match the best datacenters, but whereas in the case of a traditional datacenter outage contact with the management team can easily be established, that option is not available in a cloud environment. It is the lack of information on the outage, its causes, its duration etc. that makes users nervous, more than the level of service provided. Here again, the question has to be asked as to whether one is ready for this uncertainty.
I am sure this list of criteria is far from complete, and I am looking forward to your comments to improve it. I believe, however, that we need to equip IT departments with a clear and simple criteria list for them to decide whether to move to the cloud and, if they do, which applications to start with.
There is currently a lot of attention focused on Cloud Computing, but what does this mean in practical terms? HP defines Cloud Computing in terms of globally available, scalable and flexible services that can be subscribed to on a pay-per-use basis. Given this definition, the Cloud is just one source of IT services among a range of others, either insourced or outsourced. The key issue here is the move towards multi-sourced IT environments, where some services are provided in-house, some from a range of infrastructure, application and business process suppliers, and some from the cloud as service availability grows.
So, how should enterprises prepare for the transition to such a model? The first step is to review the IT strategy in the light of business needs: fundamentally, which capabilities need to be kept in-house as business critical, which are better suited to specialist providers, and which lend themselves to the pay-per-use Cloud treatment? These considerations should be made in conjunction with an exercise to consolidate and simplify the IT landscape, to reduce operational costs and free up investment for business-focused initiatives. Such an approach requires a transition to a model of IT provision based upon services, with a governance model, IT management processes and architectural standards that support the management of service lifecycles from multiple providers. ITIL is an established source for such service management processes, whereas an enterprise architecture based upon service-oriented principles is key for guiding technical decisions in support of business needs (see TOGAF, The Open Group Architecture Framework).
A key challenge in a multi-sourced environment is monitoring and managing the performance of services across service provider boundaries. One possible approach is to use HP's Cloud Assure services, which are themselves based on a range of Software as a Service IT management offerings. Cloud Assure can be used in conjunction with service providers to monitor key performance parameters via a dashboard on a pay-per-use basis.
Considerations of trust and security are also critical to the acceptance of multi-sourced service provision. Cloud Assure includes the capability to analyse security vulnerabilities associated with IT services from whatever source. However, other issues around compliance with data protection and privacy legislation, such as ascertaining the physical location of data and the associated legal jurisdiction, will require specific guarantees from service providers.