Many things have been said about the HP IT Transformation and how we turned the business upside down. Randy Mott even described the three big mistakes made during this journey. HP leads IT transformation by example. And I could go on like this. However, while the transformation of corporate IT receives a lot of the limelight, there is another far-reaching transformation under way, and that one concerns R&D IT.
As in many companies, our R&D engineers used to have their servers under their desks, managed by themselves and completely outside the control of the central IT department. When you started a new project, you found the hardware, decided what software you would use and went to work. This resulted in a limited ability, and little incentive, to leverage work from team to team and from project to project. If by chance two teams used the same software, they would have different versions, or at least heavy customizations, making it difficult to exchange information. We discovered this had the following implications:
- Engineers were losing a fair amount of time addressing IT issues rather than performing R&D (estimated to represent up to 20% of their time)
- The lack of information sharing reduced the re-use of components and subassemblies
- License costs were extremely high as software was acquired by the engineers themselves for the purpose of each project
We decided to take advantage of the data center transformation performed for corporate IT and transfer the running and management of the R&D IT infrastructure to IT, while standardizing the tools. The objective was to have engineers spend 100% of their time on research and development, to drastically increase re-use and to reduce license costs.
This was obviously not an easy journey. At the start we got a lot of pushback from the engineers themselves; they felt they would lose control. We wanted them to be very creative, yet at the same time we were forcing them to use specific tools and processes, as we realized we had to redesign and standardize processes along the way.
Learning from the three big mistakes made in the IT transformation, we asked the engineers to register their applications at the start of the journey. A number of them did; others, as could be expected, did not. When we sunset and migrated the first applications into the data center, a number of homemade interfaces suddenly stopped working, resulting in furious calls to the IT department. Well, guess what: these were interfaces to applications that had not been registered. Once it became clear this would keep happening, the engineers participated far more actively.
Beyond the hiccups, it was the close collaboration between teams of engineers and IT that allowed the definition of standards for the organization. The combined team made decisions as to which applications would be used moving forward.
One of the issues the team ran into was response latency. HP's three pairs of data centers are all in the US, while our R&D departments are scattered all over the world. Particularly in software and PCB design, having all applications running in the data centers proved too slow. So caching servers were established around the world. The principle of keeping the master data in the data centers remained, but the applications ended up running on remotely managed caching servers, allowing centralized management, backup and so on, while providing the engineers with satisfactory response times.
R&D got standardized, common tools, reducing complexity and costs and enabling common processes. Addressing the communication issues removed the need for "shadow IT". The engineers no longer needed to spend time managing their IT environment, freeing up 10-20% of their own time and improving the productivity of their groups. Standardization of tools and processes helps focus innovation on the customer.
HP announced its quarterly earnings this Wednesday. During the earnings call, Mark Hurd was asked why we are spending less on R&D in 2010 than in 2000, when we were a company half the size. His answer included the following: "...we're in a mode to look for processes that we can standardize. Simple things like testing, QA, how many development tools we've got. All of these have been, because of acquisitions, very random and very unique, and very, if you will, siloed.
So our ability to get standardized on those processes gives us an opportunity to take out cost. And the way we look at R&D is we don't look to Cathie's point, we look at R&D in the context of overhead, maintenance and innovation. What we are trying to do is get the innovation dollars up. So when you look at the total R&D spend, it's down, and yet the yield to the product road map is up. So, when you look at the number of introductions of products that have had meaningful change in share positions, that number is actually up for us. And that's what we want to measure. We want to get more innovative products into the market sooner."
Even if the R&D IT transformation was not the only reason, it certainly contributed to achieving the objectives Mark described. What about you? Do you recognize yourself in what I described at the start of this entry? We are happy to share our experience, so don't hesitate to contact us.
We now have visibility on what happens in our ecosystem, and we can react quickly. But can we do something to avoid the same problem happening again? This is where the time dimension plays a role.
A number of years ago I had dinner with a colleague in London. Halfway through the main course he suddenly told me ERP was killing the business. When I asked him why, he pointed out that it maintains only one piece of information: the current one. By having multiple views of the same data, and by understanding how that data evolves over time, he said, I can understand the dynamics at play in the system I'm looking at. That's when the time dimension really hit me.
Indeed, knowing the inventory level of a product at any given moment is important for deciding whether to trigger replenishment. However, it does not tell you anything about whether the replenishment level is the correct one. If I want to identify what the minimal safety buffer should be, I need to look at how the inventory evolves over time, what the variability in demand is, how often we have been out of stock, and so on. The operational environment described in last week's post does not address this. We need to understand how things have evolved over time.
By maintaining past shared values centrally, or by gaining access to the supplier's business intelligence environment, we are able to reconstruct history, identify patterns and correlate multiple data points. Doing this helps us in multiple ways. It allows us to understand what caused a specific event to happen. By looking for similar patterns we may be able to avoid a similar event in the future, or at least react early. Dashboards are used to highlight such trends, using colours, for example, to identify borderline situations. The real benefit of such an environment is to trigger reactions earlier, making the supply chain more robust (fewer events will happen) and more responsive (providing faster reactions if an event happens anyway).
The data itself provides interesting information, such as minimum and maximum monthly stock levels, while other data is derived from this base information. For example, the largest daily inventory change and the variability (maximum stock minus minimum stock, divided by minimum stock) help identify the optimal safety stock levels.
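To make the derived metrics concrete, here is a minimal Python sketch (the daily inventory figures are invented for illustration; this is not HP data) computing the minimum and maximum stock, the largest daily change, and the variability ratio defined above:

```python
# Illustrative only: daily inventory levels for one product over a period.
daily_stock = [120, 115, 98, 140, 135, 90, 88, 102, 110, 95]

min_stock = min(daily_stock)
max_stock = max(daily_stock)

# Largest absolute day-to-day inventory change.
largest_daily_change = max(
    abs(today - yesterday)
    for yesterday, today in zip(daily_stock, daily_stock[1:])
)

# Variability as defined above: (max stock - min stock) / min stock.
variability = (max_stock - min_stock) / min_stock

print(f"min={min_stock}, max={max_stock}")
print(f"largest daily change={largest_daily_change}")
print(f"variability={variability:.2f}")
```

A high variability ratio or a large daily swing suggests the safety buffer needs to absorb bigger shocks, which is exactly the kind of insight a single current-inventory snapshot cannot give.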
Key performance indicators (KPIs) can be calculated using this data. They visualize how well the company and its ecosystem are doing. Where continuous improvement programs exist, the evolution of those KPIs over time represents an excellent indication of progress.
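As a simple sketch of tracking one such KPI over time (the monthly figures here are invented; the stockout-rate KPI is just one plausible choice), a steadily falling curve would indicate the improvement program is working:

```python
# Illustrative only: days out of stock per month for a hypothetical product,
# before and during an improvement program.
days_out_of_stock = {"Jan": 5, "Feb": 4, "Mar": 4, "Apr": 2, "May": 1}
days_in_month = {"Jan": 31, "Feb": 28, "Mar": 31, "Apr": 30, "May": 31}

# KPI: fraction of days the product was out of stock each month.
stockout_rate = {
    month: days_out_of_stock[month] / days_in_month[month]
    for month in days_out_of_stock
}

for month, rate in stockout_rate.items():
    print(f"{month}: {rate:.1%}")
```

Plotted month over month, this is the kind of trend a dashboard would surface, with colours flagging months where the rate crosses an agreed threshold.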
Identifying patterns and correlation of multiple data points also helps understand the interactions between the ecosystem parameters. We will talk about how best to use this information in our next blog entry when we look at the strategic aspects.
Understanding the time dimension and identifying why the ecosystem reacts the way it does open up capabilities to improve operations. We often talk about a lean supply chain, but we cannot walk along our supply chain to spot waste. Building operational visibility and understanding how things evolve over time provide the information required to start the lean six sigma journey. The operation of the supply chain can then be improved systematically through a thorough analysis of the data.
The operational dimension allowed us to react more quickly to events; the tactical dimension provides us with the capability to identify an event earlier and potentially avoid it. The third dimension, the strategic one, will allow us to review how we do business, helping us eliminate the occurrence of an event altogether. But that's for the next blog entry, so stay tuned.
On Monday, Newsweek released its inaugural Green Rankings, and interestingly, HP finished at the top. In an article titled "The Greenest Big Companies in America" they explain that they decided to publish this list to recognize the efforts of companies, and how they ranked companies in industries as diverse as high-tech and mining.
Some are critical of the way this was done. Rose Gordon in PRWeek, for example, points out that "ranks are fraught with subjectivity, or incomplete and self-reported data", but recognizes this is "a best first effort".
HP's position is due to its long-term commitment to reducing the environmental footprint of all its operations, and not based on carbon offsets or any other substitution program. The environment is an emotional subject, but reading through some of the blog comments, I realize many HP programs are not known or visible to most. Indeed, our objective has not been to market green, but rather to become greener. HP is mainly working in three spaces:
- Reducing the environmental impact of a product throughout its whole lifecycle, from design to recycling
- Reducing the environmental impact of HP's own operations and facilities
- Helping HP employees reduce their own environmental impact
Each of those subjects is worth a whole dissertation, and it is not my plan in this entry to review all measures taken and describe every internal policy; that would take us way too far. However, I would like to highlight some very practical examples that may help you understand what we are trying to do.
Let me first highlight a program, called Design for Environment, we embarked on several years ago. The objective of this program is to include the environmental aspects right from the early design stages of the product. It addresses the impact of the product during manufacturing, during usage and at recycling, so, all the way through the lifecycle of the product. Aspects such as packaging, materials, energy consumption, supply chain impacts, ease of reuse/recycling are all taken into account. Obviously tradeoffs have to be made to ensure quality and ease of use of the product, as our customers do not expect HP to lower their standards.
Let me give you some examples. About a year ago, HP designed a notebook for Walmart, shipped in a stylish bag made of 100% recycled material, reducing packaging by 97% and winning Walmart's Design Challenge along the way. Another example is the increased use of recycled material in the production of inkjet cartridges. These are just two examples in a series; I will come back to the DfE program in a future entry.
And let me take a minute to urge you to recycle your cartridges and HP products. You can find information on the program in your country here.
To reduce its environmental impact, HP is addressing multiple aspects at the same time, and this is where a partial view does not allow a real understanding of what is happening. For example, when HP announced the doubling of its green energy use in late 2008, we received negative comments from some of our competitors, and that's fine. In the meantime, we are well on the way to reducing the total greenhouse gas (GHG) emissions from our facilities to 16 percent below 2005 levels by the end of 2010. Our current performance can be found here. We are consolidating buildings, increasing the use of renewable energy, decreasing emissions per unit of floor space, and so on. But we are also ensuring our company car fleet uses more efficient cars (a reduction of 8% in GHG emissions from 2005 levels, and 24% from 2006 levels, in Europe alone), we are reducing travel, and we have implemented Halo telepresence studios in all major facilities. We have also consolidated our IT data centers, resulting in drastic reductions in energy consumption, and are working on consolidating our printing environments.
In many countries, programs exist to help employees reduce their environmental impact. In my own country, for example, a program for acquiring rooftop solar panels has been implemented. Environmental tips and tricks are available on the intranet, to give another example. And HP goes even further, trying to incent its customers to reduce their carbon footprint. The "Power to Change" program, launched last June, is one example. To address the widest possible audience, the program is on Facebook and Twitter.
Becoming "greener" (I use this term rather than "green", as I strongly believe companies can always reduce their environmental impact further) is not a big bang announcement, but an orchestration of many small steps that each add to a common goal. Caring for the environment has been in HP's DNA for a long time: our first recycling program was started in 1966, to recycle punched cards. Most of you probably do not even remember what those look like.
We are committed to continue our efforts to reduce our environmental impact. If you are interested in reviewing how we progress, check our environment pages.
An increasing number of cloud-based supply chain services, offered as SaaS, are appearing on the market. But that is not the subject of this blog entry. Actually, the question is way simpler: are you aware of all the players involved in providing the latest SaaS service you have bought?
Let me take an example. Amitive is a SaaS provider delivering community supply chain management services, or as they call it, "SCM in the CloudTM". I don't know the company; it just happened to be one that appeared in Bob Trebilcock's review of SCM software in the cloud.
Going to Amitive's web page, I tried to understand whether they run their software suite in their own environment, or whether they depend on another provider to do so. Finally, on page 7 of their white paper "Amitive SCM in the CloudTM: Catalyst for the Great Leap Forward in SCM", I found the following phrase: "Amitive manages a customer account via our multi-tenant servers in the secure datacenter of our world class infrastructure partner". No information about who this partner is. Later, back-up, scalability, high availability and disaster recovery are highlighted. Are all these provided by the same infrastructure partner, or are others involved? No information is provided.
Again, I just picked this company as an example. What I want to highlight is that most SaaS providers use a supply chain to provide their services, while their customers most often do not even ask questions about the partners involved, nor assess the risks associated with them.
As in physical supply chains, something going wrong at a supplier may result in major disruptions to the whole supply chain. One such example happened on August 8th, 2008, when a cloud backup service called The Linkup ceased operations. The company actually used the services of another company, called Nirvanix. Customer data got lost along the way, resulting in the closure of the service. I found a good description of the whole story in a Network World blog entry from August 11th, 2008. Again, my objective here is not to argue about this particular case, but to illustrate the importance of understanding the supply chain one subscribes to when using a cloud-based service.
You could argue that, as in the "real" world, there are alternatives. That is indeed the case, but the lack of standards in the cloud today means that moving from one infrastructure, platform or service to another is not an easy task.
It is critical for companies wanting to use the cloud in general, and Software as a Service in particular, to understand the players in the service provider's supply chain, in the same way as they would for their physical goods supply chain. Similar tools and techniques should be used during that evaluation. As CIOs and IT teams are less familiar with this than procurement departments, they may want to borrow expertise from there for such evaluations.
The topic of RFID has been approached a few times in this blog. RFID, being a technology, supports a number of usage scenarios, and its maturity depends on the area we are looking at; access control and retail are certainly not at the same stage. Still, the middleware space has changed significantly over the last year, with the Oracle/BEA deal, steady progress from Microsoft and SAP, and the Checkpoint/OAT deal more recently. I found this article, which tries to look at the future of middleware, quite interesting: http://www.rfidupdate.com/news/07082008.html#editorsNote. While the functions provided by the middleware are certainly going to stay in some form, it will be interesting to see if some of the larger players are able to use this as a differentiator versus a "need to have" set of features.
Why isn't a 60-year-old technology more mainstream?