High Performance Computing is used more and more in Product Development as digitization increases. Companies perform a growing share of their testing digitally, and simulation is becoming mainstream. For a number of years grid computing has been used to address the computational needs of such algorithms. Over the last couple of years, people have started using cloud computing, and some have run grid-type workloads on it.
So, are these two different ways of addressing similar problems? Actually not. The first thing to point out is that they are optimized for different types of activities.
Cloud computing (and I am talking here about IaaS, as this is where the comparison makes sense) focuses on making virtual machines (VMs) available to run a specific program. Typically, it's one virtual machine for one program. In most instances, the program will execute in the VM for a certain period of time. Cloud is all about provisioning a large number of VMs to many customers, helping them achieve a variety of tasks.
Grid computing, on the other hand, implies parallelization: the execution of a single task across many computers. This is typically used for operational research (OR) and other computation-intensive tasks. Operational Research lends itself very well to this, as you search for an optimum through approximation. Calculating the value of a particular objective in a number of situations with slightly varying parameters, in order to find a maximum or minimum, is what OR is all about. It is used increasingly for simulation, planning and scheduling, and other compute-intensive tasks. By executing the same model with different variables on a large number of machines, and then identifying the maximum or minimum, one can speed up the process drastically. That's what parallelization is all about.
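To make the idea concrete, here is a minimal sketch of such an embarrassingly parallel parameter sweep in Python. The objective function and parameter grid are invented for the example, and a real grid would distribute the work across many machines rather than local processes, but the pattern is the same: run the same model many times with slightly different inputs, then keep the best result.

```python
# Minimal sketch of an embarrassingly parallel parameter sweep (hypothetical model).
from multiprocessing import Pool

def evaluate(params):
    """Run the model once for a given parameter set and return (objective, params)."""
    price, volume = params
    # Placeholder objective: revenue with a diminishing-returns penalty on volume.
    objective = price * volume - 0.01 * volume ** 2
    return objective, params

if __name__ == "__main__":
    # Slightly varying parameter combinations, one model evaluation per combination.
    grid = [(price, volume) for price in range(10, 21) for volume in range(100, 1001, 50)]

    with Pool() as pool:                        # one worker per available CPU core
        results = pool.map(evaluate, grid)      # the same model, run many times in parallel

    best_objective, best_params = max(results)  # keep the maximum of the objective
    print(f"best objective {best_objective:.1f} at parameters {best_params}")
```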
So, is IaaS a good platform to run grid computing tasks on? Here opinions differ. In an interesting podcast, Mike Vizard, the CEO of Parabon Technologies, a grid software and service provider, explains the differences, but at the same time points out that they are very complementary. In an article titled "Grid Computing and the Future of Cloud Computing", Jennifer Schiff describes the differences and points out that no single technology will take over.
My take on this is that it depends. As Grid Computing is all about computation-intensive tasks, the less overhead in the execution of the task, the better. Running compute-intensive tasks on a cloud reduces the CPU bandwidth available for actual computation, as some cycles are used for managing the virtualized environment; this delays the computational task and, as a result, slows down the overall problem solving. So, if you have a regular need for such compute-intensive tasks and idle computer time in your datacenter, a grid is probably better. You may even think about an HPC cluster; it will give you the best efficiency.
On the other hand, if you are only an occasional user of such environments and do not want to manage and provision a separate software stack, you might want to run such compute-intensive tasks in a cloud environment. But you should realize you are running them in a sub-optimal way.
Simulations, operational research, planning and other computationally intensive tasks are increasingly used in organizations. Parallelization is becoming easier and more effective. So, companies are starting to look at environments where such workloads can be executed.
So, should cloud service providers also provide specific environments for grid computing, or will we see separate grid service providers appear on the market? Time will tell, but it is clear companies will be on the look-out for high performance computing types of environments. Due to the importance of such computation workloads in manufacturing, manufacturers will be amongst the first ones looking.
In the last two entries, we talked about operations and the time dimension. Everything we described was reactive: we have been taking decisions based on past performance. Could we become pro-active? Is there a way to predict how the system would react to a decision, and if necessary adapt that decision before we have even implemented it? Too good to be true?
Actually not. Operational Research and other applied mathematics tools allow us to model the environment and to submit scenarios to it. If the model is correct, it will react exactly as the ecosystem would in the same circumstances.
That, however, requires a model. And that's the difficult piece. Using the information collected while performing the operational and tactical activities, the parameters of the model can be determined. This requires specific modelling skills, which are not always easy to find. The payback is excellent though, as it helps optimize operations and supports decision-making.
To make this work, we need three components: simulation software, a model and scenarios. Several packages exist in the marketplace, some more focused on inventory optimization, others on supply chain networks, and others still on logistics. They all address the other aspects to some extent, but it is important to make a good choice. We have mainly used Optiant because, due to the dynamics of our industry, inventory levels should be kept low to avoid aging costs.
Simple models can also be built using Excel and macros. This is an option that is often forgotten, but it provides good results. Professor Jeremy F. Shapiro has done a lot of work in this space.
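To show how little is needed, here is a comparable toy model sketched in Python rather than a spreadsheet: a single-echelon, order-up-to inventory policy evaluated against a year of invented weekly demand. The base-stock level, lead time and demand range are purely illustrative, not a recommendation.

```python
# Toy single-echelon, order-up-to inventory model (all numbers are illustrative).
import random

random.seed(42)
BASE_STOCK = 250                  # order-up-to level being tested
LEAD_TIME = 2                     # periods between placing and receiving an order

demand = [random.randint(60, 110) for _ in range(52)]    # one year of weekly demand
on_hand = BASE_STOCK
pipeline = [0] * LEAD_TIME                               # orders already in transit
filled = total = inventory_sum = 0

for d in demand:
    on_hand += pipeline.pop(0)                           # receive the oldest outstanding order
    filled += min(d, on_hand)                            # ship whatever is available
    total += d
    on_hand = max(on_hand - d, 0)
    pipeline.append(BASE_STOCK - on_hand - sum(pipeline))  # reorder up to the base stock
    inventory_sum += on_hand

print(f"fill rate {filled / total:.1%}, average on-hand {inventory_sum / len(demand):.0f}")
```

Re-running the same loop with different base-stock levels is exactly the kind of quick inventory-versus-service trade-off analysis a spreadsheet macro would do.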
How do we get started? It typically involves trial and error. Build a model using the information gathered in the operational environment and the understanding gained during the tactical analysis. When it looks OK, choose a couple of events that have happened over the last few weeks or months, enter them into the model and look at how the model reacts. Analyse how close that reaction comes to what really happened; this tests the validity of the model. If you are not happy, tune the parameters of the model and test again. The important thing is to have a number of different scenarios, to ensure the model mimics the dynamics of the actual ecosystem in multiple cases. The immediate benefit of this exercise is that you gain an increased understanding of the dynamics of your supply chain. Once your model behaves like your ecosystem, it becomes usable.
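A minimal sketch of that validation loop might look as follows, assuming a deliberately over-simplified one-parameter model and a handful of made-up historical events; the point is only the back-test-and-tune mechanic, not the model itself.

```python
# Back-test a (deliberately simple) one-parameter model against historical events
# and keep the parameter value that reproduces reality best. All data is made up.

def model(demand_shift, elasticity):
    """Predict the change in shipped units for a given shift in demand."""
    return elasticity * demand_shift

# (observed demand shift, observed change in shipped units) from recent weeks
history = [(+120, +95), (-80, -70), (+40, +30), (-150, -120)]

best_elasticity, best_error = None, float("inf")
for elasticity in (0.6, 0.7, 0.8, 0.9, 1.0):              # candidate parameter values
    error = sum(abs(model(shift, elasticity) - actual)     # total prediction error
                for shift, actual in history)
    if error < best_error:
        best_elasticity, best_error = elasticity, error

print(f"retained elasticity {best_elasticity}, total error {best_error:.0f} units")
```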
Now you have a what-if analysis tool. You develop a possible scenario: a component or logistics cost increase, a sudden demand increase or reduction, a supplier that cannot deliver (strike, fire, bankruptcy, you choose), or the unavailability of a shipping lane. Running the model, you can see what should happen if the scenario materializes. Now you can ask yourself the next question: this scenario resulted in lost sales (for example), so how could I avoid that? You create a second model with a changed ecosystem (for example new shipping routes, additional suppliers, intermediate inventory buffers, etc.) and submit the same scenario to the new model. How does this revised ecosystem react? What is the probability of such a scenario happening, what is the cost of changing the ecosystem, and what are the benefits? In other words, is it worth strengthening our ecosystem?
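As a sketch of this comparison, here is a toy Python example that runs the same supplier-outage scenario against the current ecosystem and a strengthened one, then weighs the expected benefit against the cost of the change. All numbers are invented for illustration.

```python
# Run the same disruption scenario against the current ecosystem and a strengthened
# one, then weigh the expected benefit against the cost of the change. Toy numbers.

def lost_sales(weeks_of_outage, weekly_demand, buffer_weeks):
    """Units lost when a supplier is down, given the inventory buffer in place."""
    uncovered_weeks = max(weeks_of_outage - buffer_weeks, 0)
    return uncovered_weeks * weekly_demand

scenario = {"weeks_of_outage": 6, "weekly_demand": 10_000}   # e.g. a supplier fire
margin_per_unit = 12.0
extra_buffer_cost_per_year = 60_000                          # carrying four extra weeks of stock
probability_per_year = 0.10                                  # how likely is the scenario?

baseline_loss = lost_sales(buffer_weeks=2, **scenario) * margin_per_unit
hardened_loss = lost_sales(buffer_weeks=6, **scenario) * margin_per_unit

expected_benefit = probability_per_year * (baseline_loss - hardened_loss)
print(f"baseline loss ${baseline_loss:,.0f}, hardened loss ${hardened_loss:,.0f}")
print(f"expected yearly benefit ${expected_benefit:,.0f} vs extra cost ${extra_buffer_cost_per_year:,.0f}")
```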
Such a tool gives you two major advantages. From an operational perspective, you can quickly check the implications of decisions you want to take, avoiding unpleasant surprises. And from a tactical perspective, you have a mechanism to check how you can improve the robustness and agility of your supply chain to avoid problems in the first place. Not bad, is it?
In early March, Industry Week published an article titled "The Death of the Supply Chain", arguing that the supply chain as it has traditionally been defined is no longer feasible. Gone are the days when supply chains were linear, static, in-country and tightly coupled to the brand owner or OEM's internal manufacturing.
Having spent time studying the HP supply chain, I fully agree with that statement. Supply chains such as ours have become global, outsourced and demand-driven. Balancing demand and supply is critical to setting up and managing lean supply chains these days. In the article, Andrew Salzman describes five requirements that need to be addressed to achieve this:
- B2B Integration
- Business Process Management
- Exception and Event Management
- Business Intelligence
- Operations Management
I believe he misses a major point. The approach he promotes is an operational one, which only covers part of the need. Yes, OEMs and brand owners need to be able to react quickly when something goes wrong; yes, data needs to be normalized and aggregated, and stakeholders need to be connected. However, that only provides information on what is happening now and how potential problems can be resolved; it does not use the present situation to improve the future.
What we need is to "close the loop", in other words to combine information on what has happened and what is happening so that operations can be improved both in real time and over the long run. What does this mean? First, there is a need to integrate suppliers and channel partners alike. Yes, this requires B2B integration, but more importantly, it can only be achieved if collaborative relationships are built between partners. Developing those is a whole discussion in its own right, and I will defer it to another entry; building the electronic data transfer mechanisms is reasonably easy, building trustworthy relationships is more difficult. Data needs to be put in a common format and will be used, through an operational data store, for exception and event management. However, the time dimension is not taken into account here. It is by understanding trends, what has happened prior to specific events and so on, that the dynamics of the supply chain are understood. Business Intelligence can be used here: consolidated data is stored in a data warehouse for extended periods of time, and analysis tools can then be used to understand what is happening and why.
This understanding allows the development of modeling tools that mathematically simulate how the eco-system will behave. This is where simulation comes in and where the difference lies. Simulation allows the analysis of multiple scenarios and the identification of how the supply chain should be transformed so that it can react better to fluctuations in demand, absorb events, and significantly reduce the risks of running a global supply chain. It is surprising to see how few companies are using simulation these days. It has been proven extremely valuable over and over again, but many companies seem to be bogged down in operational management, forgetting the importance of planning things right. Yes, when a fire burns it needs to be extinguished, but doesn't avoiding the fire make more sense? This is why we, at HP, are focusing so much on business intelligence and intelligent decision making. Over and over again it has proven beneficial.
A couple of weeks ago my attention was drawn to two different events that happened within a week of each other. The first was the collision of two satellites in space: an obsolete Russian military satellite crashed into one of Iridium's communication satellites. Both satellites burst into pieces, making it even more difficult for future ones to avoid collisions. When asked about prevention, Iridium told reporters they were dependent on NASA for their information and had not received any warning.
About a week later, another article reported that two nuclear submarines, a French and a UK one, had hit each other during a military exercise. This probably made the news because both vessels were armed with 16 nuclear warheads each, which sounds a little frightening. Although very little information was provided on the circumstances of the incident, it was mentioned that both vessels had their sonar turned off.
You may want to ask yourself why I am bringing this up in a blog devoted to the manufacturing industry. Well, aren't we often in the same situation as Iridium and the two submarines? Do we have visibility of what is coming our way? Do we have our sonar turned on? Particularly in the current crisis, when there is a lot of variability in the system, are we spotting potential problems early, allowing us to make the required course corrections? I would dare to answer that in many cases we are in exactly the same situation as the submarines: we don't know what is coming our way.
Building our supply chain "sonar" requires working with the other players in the eco-system and exchanging information on what happens. This can only be done if all parties understand the advantage of working together and building the visibility we talked about. Actually, the benefits are even greater than just avoiding trouble. By understanding the behavior of the supply chain, the following aspects can be addressed:
- Obviously, spot problems early and react to ensure customer service levels are maintained
- Understand the "waste" in the supply chain and use lean/six sigma techniques to reduce it
- Address potential risks, which can be identified once the real operation of the supply chain is understood
- Develop simulation models that allow experimenting with alternative supply chain designs, and ultimately optimize the supply chain for resilience and flexibility.
If you happen to be a member of SCMWorld.org, you may want to listen to the webinar I recorded on the use of lean and six sigma in the supply chain.
Two prerequisites are necessary to establish such an approach successfully. First, trust needs to be built between the partners. This requires a different attitude towards suppliers, one established on a win-win basis rather than an adversarial "get me the lowest cost" one. I would actually argue that this last approach often leads to extra costs somewhere else in the supply chain, and as such misses its objective altogether. Second, an information backbone in which partners can share information securely needs to be established. This often ends up being a barrier, as the shared information is managed by a central team and partners fear leakage.
The use of emerging technologies such as Cloud Computing may actually address this. In the current environment, information is centralized so it can be analyzed through a single database. The advantage of these new technologies is that they provide the capability to present distributed information as if it were located in a central database, providing the best of both worlds: information can stay under the responsibility of its owner, who in turn can control who accesses it.
Let's make sure our companies are not submarines without sonar; in the current crisis, maximum visibility is required to anticipate issues, take market share away from the competition and remain financially sound.
Last week I was fortunate to listen to the inauguration webcast of SCMWorld.org, a website devoted to sharing supply chain experience. The approach taken is actually quite interesting, as it combines a 90-minute webinar with a two- to three-day forum discussion. The inaugural webinar consisted of a discussion between Martin Christopher (Cranfield), Hau Lee (Stanford), Yossi Sheffi (MIT), Robert Blackburn (SVP SCM, BASF) and Reuben Slone (EVP SCM, OfficeMax) and focused on better understanding the supply chain mega-trends that will impact your business in 2009. Both the discussion and the subsequent forum turned out to be extremely interesting. I would like to share with you my thoughts from listening in and participating in the forum.
Today, companies are focused on short-term cost reductions, but it turns out different approaches are taken. While some companies cut wherever they can without taking long-term implications into account, others try to balance the short term and the long term. It was interesting to hear Robert Blackburn talk about how BASF tries to do so, and how they see closer collaboration with suppliers as a way to achieve this. There is indeed a natural tendency to try to reduce the cost of supply by having the procurement department squeeze suppliers a little more. However, this may have two implications: first, it can put availability of supply in danger, and second, it can push the supplier into bankruptcy. On the latter front, both BASF and OfficeMax have increased contact with their suppliers to better assess their financial situation and whether there is a danger of bankruptcy or not.
How to increase this visibility is really the question. It requires gaining information from throughout the supply chain. As this takes time to implement, you should start with the key partners, the ones that are essential for getting your product out on time because they are either on the critical path or cannot be replaced easily. Building closer relationships with suppliers, establishing trust, going for a win-win situation, and sharing information all help gain access to the key data that will provide this visibility. Companies should start by documenting the end-to-end processes across the supply chain and use that to list the key data required and define the key performance indicators that will establish the health of the eco-system at any moment in time.
By displaying the KPIs in real-time dashboards, potential issues can be spotted quickly. But more importantly, by analyzing the data over time, trends can be spotted and addressed early.
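As a small illustration, with an invented on-time-delivery series, spotting such a drift early can be as simple as comparing a recent average with a longer-term baseline:

```python
# Flag a KPI whose recent average drifts away from its longer-term baseline.
# The on-time-delivery series and tolerance below are invented for illustration.

on_time_delivery = [0.97, 0.96, 0.97, 0.96, 0.95, 0.96, 0.94, 0.93, 0.93, 0.92]

def mean(values):
    return sum(values) / len(values)

baseline = mean(on_time_delivery[:-4])     # longer-term behaviour
recent = mean(on_time_delivery[-4:])       # the last four reporting periods

if baseline - recent > 0.02:               # tolerance before raising a flag
    print(f"ALERT: on-time delivery drifting: {recent:.1%} vs baseline {baseline:.1%}")
else:
    print("on-time delivery within tolerance")
```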
This data can also serve another purpose. Wouldn't it be interesting if you could simulate future scenarios and decisions ahead of time to understand how the eco-system would behave? Using simulation packages such as Optiant, the collected data can be used to build models, and those models can then be used to work out the implications of decisions, trends, problems and so on. This dramatically helps companies understand implications, plan changes carefully and identify problem areas early. All of this can be done while testing out appropriate inventory and service levels, reducing costs dramatically. By doing just that, HP has been able to save millions of dollars over the years. Combining visibility and simulation, closing the decision loop, is a very strong approach to surviving the current crisis, managing the eco-system's volatility effectively and improving resilience drastically.