Does the cost of cloud computing limit the usability of a community cloud to support a supply chain? This is really the question I'd like to address today.
Last April, McKinsey, and last September, IDC, each went through the exercise of costing the difference between running applications in the cloud and running them in a datacenter. The McKinsey study drew very strong reactions from bloggers, who pointed out that "McKinsey's Cloud Computing Report is Partly Clouded", to take one example. The IDC presentation, given at the Cloud Computing Summit, got much less exposure. Computerworld UK and Linux.com commented on it, surprisingly, in a rather positive tone. Why is that?
Well, where McKinsey was reasonably vague about its comparison, IDC very clearly pointed out that it compared running applications in a next-generation datacenter versus running them in the cloud. In current terminology, you could argue that a next-generation datacenter is nothing other than a private cloud, so the debate becomes irrelevant.
Highly virtualized datacenters used by large enterprises can achieve efficiencies that come very close to those reached in public clouds. Since cloud companies need to make a profit, it is logical that such private clouds will be cheaper in the long run. Fundamentally, this is what IDC demonstrates. The whole argument for the cloud is elasticity, and the case study of Animoto is often quoted: they managed to scale from 50 to 5,000 servers over a weekend. This is really great, but let's be frank for a minute, how many large enterprises are confronted with such issues? What is the true demand variability?
In all the discussions above, we are talking about hosting existing functionality in the datacenter or in the cloud. But what should we do with new functionality? In several blog entries, I have talked about the development of community or ecosystem clouds to support improved management and collaboration in the supply chain. These functionalities do not exist in most enterprises, and the question should really be whether such functionality needs to reside in the datacenter or in the cloud. Today, I would argue that it should be in the cloud, for multiple reasons:
- First, building the required infrastructure in a datacenter costs money; it is typically CAPEX, which is not highly regarded by most financial people these days. Putting the functionality in the cloud limits or eliminates the start-up costs and turns the ongoing cost into OPEX, a move that is regarded more positively by finance.
- Second, establishing the visibility and collaboration function in the cloud can be seen as a natural evolution of existing B2B exchanges. Supply chain partners already connected to an exchange might be able to use similar connections to the community cloud. This makes the development of such a cloud acceptable to the supplier base.
- Third, using the cloud establishes a neutral party that maintains the service. It builds trust and may facilitate acceptance within the supplier community.
- Lastly, using cloud technologies, the shared data can remain under the control of the data owner, as the technology allows access to distributed data. Suppliers can decide which information to share, while avoiding the proliferation of that data across the internet.
Obviously, security is an important topic to address, but progress is slowly but surely being made. Governance, and how the service evolves over time, is another aspect that needs to be addressed. Some large OEMs may take the lead and direct their suppliers to participate, but in most situations we expect a trusted third party to run the service. Obviously, in that case, having a team looking after the evolution of the service, deciding what new functionality is put into service and when, becomes important. Building a community around the service will make the members feel part of a team, which is exactly what you are looking for to establish a successful service.
Last week, I was in Singapore at SCM Logistics World 2009. My presentation was about how to build a weatherproof supply chain by increasing visibility across the ecosystem. Having talked about that subject previously on this blog, I'd like to focus this time on a debate that took place amongst the participants: how to look at the end-to-end supply chain holistically and identify how its performance could be measured.
Over the last 12 to 18 months, companies have cut costs as never before. The smart ones have done this in such a way that their supply chains have become leaner and meaner, resulting in real savings that benefit the end consumer. In doing so, they have lowered the inventory buffers that shielded portions of their supply chain from variability and uncertainty in others, and as such have increased the level of risk across the ecosystem.
So, understanding the supply chain, its dynamics, how it behaves, and implementing the management processes and governance required becomes critical. Many people will agree with the statement I just made, but it is the how that holds people back. Actually, browsing the internet on this subject provides little useful information. Yes, there are a couple of software packages and the odd presentation (to be paid for), but nothing else. Interesting.
Companies have been focused on their supply chains for years, but they do not seem to have thought through how to measure them end-to-end. I believe there is no need to make things more complicated than they are. The industry has been using the SCOR™ (Supply Chain Operations Reference) model for years. This model proposes a series of KPIs for each node in the supply chain. Could we use those for an end-to-end measurement? That was the debate in the corridors of the conference. Let's look at it in a little more detail.
Let's look at the level 1 KPIs. The first one, Perfect Order Fulfillment, is really a supply chain KPI, as it relies on all partners in the supply chain delivering their elements to ensure the final product is complete, meets the specifications and addresses the expectations of the customer. With the buffer inventories disappearing, the ecosystem is no longer compartmentalized, and using this measure as an end-to-end one makes sense.
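To make the measure concrete, here is a minimal sketch of how Perfect Order Fulfillment could be computed once partners agree to share order-level data; the field names and figures are illustrative and not prescribed by SCOR:

```python
# Minimal sketch of Perfect Order Fulfillment across an end-to-end chain.
# Field names (on_time, in_full, damage_free, docs_accurate) are illustrative;
# each partner in the chain would contribute its own order-level data.
from dataclasses import dataclass

@dataclass
class Order:
    on_time: bool        # delivered by the customer-requested date
    in_full: bool        # complete and to specification
    damage_free: bool    # no damage on receipt
    docs_accurate: bool  # documentation complete and accurate

def perfect_order_fulfillment(orders: list[Order]) -> float:
    """Percentage of orders that are perfect on every dimension."""
    if not orders:
        return 0.0
    perfect = sum(
        o.on_time and o.in_full and o.damage_free and o.docs_accurate
        for o in orders
    )
    return 100.0 * perfect / len(orders)

orders = [
    Order(True, True, True, True),
    Order(True, False, True, True),   # short shipment: not a perfect order
    Order(True, True, True, True),
    Order(False, True, True, True),   # late delivery: not a perfect order
]
print(f"Perfect Order Fulfillment: {perfect_order_fulfillment(orders):.1f}%")  # 50.0%
```

The calculation itself is trivial; the hard part, and the point of the visibility discussion above, is getting every partner to contribute its order records.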
The second one, Order Fulfillment Cycle Time, is an interesting one. Indeed, if the supply chain manufactures to order, it is clearly a measure of the supply chain's performance. On the other hand, if the customer is served from stock, only a portion of the supply chain affects this measure. This demonstrates that not all KPIs apply in the same way to the supply chain. However, the exercise remains interesting.
The next three, Upside Supply Chain Flexibility, Upside Supply Chain Adaptability (the maximum sustainable percentage increase in quantity delivered that can be achieved in 30 days) and Downside Supply Chain Adaptability (the reduction in quantities ordered sustainable at 30 days prior to delivery with no inventory or cost penalties), are by nature supply chain measures, so they do fit our approach here. The same applies to Supply Chain Management Costs.
Cost of Goods Sold (the cost associated with buying raw materials and producing finished goods, including direct costs such as labor and materials, and indirect costs such as overhead) is one that is difficult to calculate across a supply chain, as not all partners are willing to work "open books", which is what you would need to calculate this metric correctly at an end-to-end supply chain level. So, this is probably not a good metric to go with. The same applies to two other metrics, Return on Supply Chain Fixed Assets and Return on Working Capital.
The last KPI, Cash-to-Cash Cycle Time, is applicable for supply chains that work in a deliver-to-order model, but not for the ones that deliver from stock.
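For completeness, the arithmetic behind this KPI is simple: the days during which working capital is tied up between paying suppliers and being paid by customers. A small sketch, with purely illustrative figures:

```python
# Sketch of Cash-to-Cash Cycle Time:
#   C2C = inventory days of supply + days sales outstanding - days payables outstanding
# The input figures below are purely illustrative.

def cash_to_cash(inventory_days_of_supply: float,
                 days_sales_outstanding: float,
                 days_payables_outstanding: float) -> float:
    """Days between paying suppliers and being paid by customers."""
    return inventory_days_of_supply + days_sales_outstanding - days_payables_outstanding

# Example: 60 days of inventory, customers pay in 45 days, suppliers are paid in 30 days
print(cash_to_cash(60, 45, 30))  # 75 days of working capital tied up in the chain
```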
All in all, a number of KPIs are useful and could be applied if the appropriate data can be gathered across the supply chain. From experience, I know that using even a small number of KPIs already makes a large difference. So, while we may want to think about creating a couple more, using the ones we already have would help companies focus on optimizing their ecosystems and, in turn, improve their delivery capabilities in addressing the new opportunities that appear on the horizon.
Lately we have heard a lot about Cloud Computing and seen a number of announcements in this space. Cloud Computing is really all about the simple provisioning of computing power, storage space and services. James Governor nicely characterizes it in his blog entry "15 ways to tell its not cloud computing". According to Gartner's Hype Cycle for Emerging Technologies, 2008, Cloud Computing is entering the "Peak of Inflated Expectations", while Pew/Internet points out in a report from last September that 69% of Internet users have either stored data online or used a web-based software application, but mainly for personal use.
Today, Cloud Computing is not risk-free for companies, as pointed out by Gartner. They identify seven security risks and urge companies to pay attention to the following areas:
1. Privileged user access
2. Regulatory compliance
3. Data location
4. Data segregation
5. Recovery
6. Investigative support
7. Long-term viability
This means, as SDTimes describes in an article entitled "What is the state of Cloud Computing?", that it may not be ready yet, but should definitely not be discarded. So, the question manufacturing companies should ask themselves is how best to prepare for this new paradigm.
To our mind, two things can happen right away. First, companies should work on developing a Service Oriented Architecture (SOA) based environment, taking a service-oriented approach to addressing the needs of the business. Second, companies should document and standardize their business processes. Combining both approaches will provide side benefits, including greater responsiveness to changes in the marketplace, which could be quite useful in the current environment. But let's discuss both elements in a little more detail.
By separating business processes and services (or elementary transactions, if you prefer), one creates an environment where individual services are orchestrated into business processes. Using graphical tools (similar to workflow tools), the processes can be changed quickly and efficiently. At a later stage, when Cloud Computing has achieved more maturity, certain services may be migrated to the cloud. Because of the separation of services and business processes, and as a result of implementing an appropriate SOA management environment, the migration of such services will be transparent to the rest of the environment.
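To illustrate the idea, here is a simplified sketch (not HP's implementation, and the service and process names are hypothetical): the business process depends only on a service contract, so a locally hosted implementation can later be swapped for a cloud-hosted one without touching the process logic.

```python
# Illustrative sketch of separating a business process from the services it
# orchestrates. The process only knows the service interface, so a service can
# later be re-hosted in the cloud without changing the process.
from abc import ABC, abstractmethod

class InventoryService(ABC):
    """Service contract the business process depends on."""
    @abstractmethod
    def available_quantity(self, sku: str) -> int: ...

class LocalInventoryService(InventoryService):
    """Runs against the in-house datacenter today."""
    def available_quantity(self, sku: str) -> int:
        return 120  # stubbed local lookup

class CloudInventoryService(InventoryService):
    """Same contract, hosted in the community cloud tomorrow."""
    def available_quantity(self, sku: str) -> int:
        return 120  # stubbed remote call

def order_promise_process(inventory: InventoryService, sku: str, qty: int) -> str:
    """The orchestrated business process: unchanged whichever service is plugged in."""
    return "promise" if inventory.available_quantity(sku) >= qty else "backorder"

print(order_promise_process(LocalInventoryService(), "SKU-42", 100))  # promise
print(order_promise_process(CloudInventoryService(), "SKU-42", 100))  # promise
```

The same principle holds whatever orchestration tooling is used; what matters is that the process references only the service interface, never a particular hosting location.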
This approach requires the development of a certain discipline, both in the business process area and in the infrastructure space. Governance needs to be established to ensure proper management of the service lifecycle. If you are interested in understanding more about SOA and HP's approach, you may want to take a look here.
Let me finish off with a couple of words regarding business processes. At HP, we use the Supply Chain Operations Reference (SCOR) model extensively as a base to design key processes. Not only does it provide a good basis, but through its key performance indicators it allows the benchmarking of activities between companies. And this serves as a base for continuous improvement.
When will Cloud Computing become mainstream? Nobody knows. It may be in a couple of years or in a decade. However, it is important for companies to start planning for it today, and implementing an SOA should become the foundation of any IT environment that may one day migrate to the cloud. By standardizing and documenting business processes, one not only completes the transformation but also increases the responsiveness of the company and its ecosystem, while facilitating the improvement of operations.