In a world where Product Development & Engineering (PD&E) and Supply Chain are integrated, the main interface is the BOM, the Bill of Materials. Or is it, as Robin Saitz points out, a bomb?
Actually, we first have to define which BOM we are talking about. There is the e-BOM, or engineering BOM, typically maintained in the PLM environment and describing how the product is designed. Second, there is the m-BOM, the manufacturing bill of materials, focused on the components or ingredients needed to build the product; it is maintained in the ERP system and is a critical part of Master Data Management. Third, there is the s-BOM, or service BOM, focused on supporting the product in the field. All three share a common base, but each carries information specific to the function it serves.
The obvious question is which of the three is the master. As long as the product is not released, this should not be an issue: it clearly makes sense for the e-BOM to lead the way. But what happens after NPI (New Product Introduction)? At NPI, the other two BOMs are created or updated, and from that point onwards all three should stay synchronized.
However, here we come to another question. When an engineering change is initiated, is this done by PD&E, or is there a separate team within production that takes care of engineering once the product is in production? If the change is initiated by PD&E, the e-BOM is updated and the information should cascade again to the m-BOM and the s-BOM. In the second scenario, things are trickier. Will the engineers allow those production people to access their PLM environment, or do they want to keep it to themselves? In that case, the engineering changes are entered in the m-BOM and s-BOM directly, which builds discrepancies between the three BOMs.
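To make this concrete, here is a minimal sketch, in Python, of how a change cascaded from the e-BOM keeps the three views aligned, and how direct edits to the m-BOM create the discrepancies described above. The part numbers, field names and structures are illustrative assumptions, not any particular PLM or ERP data model.

```python
from dataclasses import dataclass, field

@dataclass
class BomLine:
    part_number: str                                # the common base shared by all three BOMs
    quantity: float
    attributes: dict = field(default_factory=dict)  # view-specific data

@dataclass
class Bom:
    name: str                                       # "e-BOM", "m-BOM" or "s-BOM"
    lines: dict = field(default_factory=dict)       # part_number -> BomLine

def cascade_change(ebom, mbom, sbom, part_number, new_qty):
    """A change initiated by PD&E updates the e-BOM first,
    then cascades to the manufacturing and service views."""
    for bom in (ebom, mbom, sbom):
        bom.lines[part_number].quantity = new_qty

def discrepancies(ebom, mbom):
    """Parts whose quantities have drifted apart -- the result of
    entering changes in the m-BOM directly."""
    return [pn for pn, line in ebom.lines.items()
            if pn in mbom.lines and mbom.lines[pn].quantity != line.quantity]
```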
The closer the BOMs are, the easier it is to maintain consistency. Design for Manufacturability (DfM), Design for Supply Chain (DfSC) and Design for Serviceability (DfS) allow the teams to work together on the development and roll-out of a new product. They not only foster relationships between engineering, manufacturing and services, but often result in higher-quality products that can be manufactured and serviced at lower cost, as Tom Shoemaker points out in a blog entry titled BOMs and better development.
DfS is probably the most obscure of the above acronyms, and my "google meter" shows there is not a lot written on the subject. It's all about thinking, during product design, about how the product will be maintained once in operation. You may ask yourself why this is important. Well, facilitating the work of service engineers reduces warranty costs and improves customer service. There is another reason these days, and it has to do with the growing trend toward "pay per use". The other day I ran into an aerospace company that told me they have started selling (or renting might be a better term) their engines by the flying hour. Now every hour the engine is kept on the ground reduces revenue. The manufacturer is now responsible for keeping the engine flying, and suddenly serviceability becomes a hot topic. The electronics industry has been looking at this for a while, which is why they approached us to understand what we do in that space.
Joe Barkai from IDC Manufacturing Insights points out that the small upfront investment of thinking about serviceability while designing the product is largely offset by cost reductions in the field. This has been our experience too. But he also makes the point that design engineers neglect long-term, life-cycle costs. Measurement and reward systems may have to be adapted to ensure the appropriate behavior gets ingrained in the engineering culture.
DfM/DfSC has already made its way into the culture somewhat, as we have been at it for quite a while now; I remember hearing about it in the late '80s. However, there are still companies with a large gap between engineering and manufacturing. Demonstrating the importance of synchronizing the e-BOM and the m-BOM can surely help bridge that gap. People, processes and systems are not three independent components of the organization. They interact extensively, and by improving one, we may hope the others improve as well. Your thoughts on the subject?
In most companies today, product development & engineering (PD&E) and supply chain are two different worlds, each with its own rules, its own way of operating and its own approaches. I remember a meeting with a client a couple of years ago where I probed how product development and manufacturing communicate to optimize a product's design for manufacturing, and was told in no uncertain terms that the company had a product development, an engineering and a manufacturing department, and that each was independent.
Beyond the organizational debates, it is critical for organizations to have PD&E and Supply Chain working closer together. As Patil, Bath and Ragsdell point out in an article titled "Accelerated Product Development and Supply Chain Management", the three key components influencing the profitability of manufacturing organizations are quality, cost and delivery. It is by taking a holistic view that these objectives can be addressed.
PD&E should lead the product lifecycle, from concept development to end-of-life, responsible not only for bringing the concept to market (New Product Introduction) but also for managing all engineering changes along the product's lifecycle. By gaining feedback from services (both for products under warranty and beyond), the PD&E team can improve the quality of the product, and with it the customer's experience.
Supply Chain, on the other hand, is responsible for the cash-to-cash cycle, from the order to the delivery of the product. It handles manufacturing (outsourced or not), the supply of components/ingredients, delivery through the chosen distribution channels, and services/reverse logistics.
Not only does information need to flow between these two key business processes, but decisions taken in one have a major impact on the other. Many years ago, companies started to focus on "design for manufacturing" (DfM); now a newer term, "design for supply chain", is in vogue. But in practice, how much is implemented? That's the real question.
PD&E teams are creative and focused on innovation, while Supply Chain teams focus mainly on operations, ensuring the predictability of their ecosystem. A different type of individual is found in each organization, which results in misunderstandings. Conflicts are difficult to settle, and final decisions may have to be taken high up the hierarchy.
Developing an understanding of how the other party operates is critical to a more integrated approach. Years ago, when implementing its DfM program, HP used to establish combined teams (product development & manufacturing) that took the product from concept to market. That team then led manufacturing for the first six months of production, gaining an understanding of the implications of decisions taken during the design process. In those days, this helped us foster the link between the two processes.
But the human aspect is not the only one. The product lifecycle process is mainly articulated around a PLM system, while the cash-to-cash process runs on an ERP system. The two systems are different in nature, and integrating them is difficult. Often the introduction of the new product is the breakpoint: that is where the BOM and associated information is transferred from the PLM system to the ERP system. But that does not resolve post-NPI engineering changes, so building a bridge between PLM and ERP is a real need. SAP has done this with its SAP PLM module, but that module is not popular in the discrete manufacturing world. So, the problem remains.
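As an illustration of what that bridge has to do at NPI (a sketch under assumed field names, not SAP's or any other vendor's actual interface), the handoff essentially flattens the engineering view into the part/quantity records the ERP system expects:

```python
# Minimal sketch of the NPI handoff: flatten the engineering BOM
# (a tree of assemblies) into the flat part/quantity records an ERP
# system expects. All part numbers and field names are illustrative.

def flatten_ebom(node, qty=1, level=0, out=None):
    """Walk the e-BOM tree, emitting one m-BOM record per part and
    multiplying quantities down through the assembly levels."""
    if out is None:
        out = []
    out.append({"part_number": node["part_number"],
                "quantity": qty,
                "level": level})
    for child in node.get("children", []):
        flatten_ebom(child, qty * child.get("qty_per", 1), level + 1, out)
    return out

ebom = {"part_number": "PROD-001", "children": [
    {"part_number": "ASSY-010", "qty_per": 1, "children": [
        {"part_number": "COMP-100", "qty_per": 4, "children": []}]},
    {"part_number": "COMP-200", "qty_per": 2, "children": []}]}

for record in flatten_ebom(ebom):
    print(record)  # the records that would be loaded into the ERP system
```

The hard part, of course, is not the flattening but keeping the two sides synchronized once engineering changes start flowing after NPI.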
One additional level of complexity is the increased involvement of suppliers in both the product lifecycle and cash-to-cash processes. Whether due to manufacturing outsourcing or to the desire to optimize the use of specific components or technologies, one is now confronted not only with the integration of PLM and ERP, but also with linking the supplier (and their systems) into the information flow. We believe that, moving forward, the latter will be addressed through community clouds, as described elsewhere on this blog.
There is space for a true PLM/ERP integration and, to my knowledge, this has not yet been solved in a satisfactory way. There is still room for improvement.
Many things have been said about the HP IT Transformation and how we turned the business upside down. Randy Mott even described the three big mistakes made during this journey. HP leads IT transformation by example. And I could go on like this. However, while the transformation of corporate IT receives a lot of the limelight, there is another far-reaching transformation ongoing, and that one concerns R&D IT.
As in many companies, our R&D engineers used to have their servers under their desks, managed by themselves and completely outside the control of the central IT department. When you started a new project, you found the hardware, decided what software you would use and went to work. This resulted in limited ability and incentive to leverage work from team to team and from project to project. If by accident you used the same software, you would have different versions, or at least heavy customization, making it difficult to exchange information. We discovered this had the following implications:
- Engineers were losing a fair amount of time addressing IT issues rather than performing R&D (estimated to represent up to 20% of their time)
- The lack of information sharing reduced the re-use of components and subassemblies
- License costs were extremely high as software was acquired by the engineers themselves for the purpose of each project
We decided to take advantage of the data center transformation performed for corporate IT to transfer the running and management of the R&D IT infrastructure to IT, while standardizing the tools. The objective was to have the engineers spend 100% of their time doing research and development, to drastically increase re-use and to reduce license costs.
This was obviously not an easy journey. At the start we got a lot of pushback from the engineers themselves; they felt they would lose control. We wanted them to be very creative, but at the same time we were forcing them to use specific tools, and processes too, as we realized we had to redesign and standardize processes along the way.
Learning from the three big mistakes made in the IT transformation, we asked the engineers to register their applications at the start of the journey. A number of them did. Others, as would be expected, did not. When we sunset and migrated the first applications into the data center, a number of homemade interfaces suddenly stopped working, resulting in furious calls to the IT department. Well, guess what: these were interfaces to applications that had not been registered. It quickly became clear what would happen to unregistered applications, and from then on the engineers participated much better.
Beyond the hiccups, it was the close collaboration between teams of engineers and IT that allowed the definition of standards for the organization. The combined team made decisions as to which applications would be used moving forward.
One of the issues the team ran into was response latency. HP's three pairs of data centers are all in the US, while our R&D departments are scattered all over the world. Particularly in software and PCB design, having all applications run in the data centers proved too slow. So caching servers were established around the world. The principle of keeping the master data in the data centers remained, but the applications ended up running on remotely managed caching servers, allowing centralized management, backup and so on while providing the engineers with satisfactory response times.
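The pattern amounts to a read-through, write-through cache: applications hit the nearby caching server, which serves repeat reads locally and passes writes back to the master in the data center. A minimal sketch, with hypothetical interfaces rather than HP's actual implementation:

```python
class MasterStore:
    """Stands in for the master data held in the US data centers."""
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data.get(key)
    def write(self, key, value):
        self._data[key] = value

class CachingServer:
    """Regional caching server: repeat reads are served locally,
    writes go through to the master so it stays authoritative and
    centrally backed up."""
    def __init__(self, master):
        self.master = master
        self.cache = {}
    def read(self, key):
        if key not in self.cache:                    # miss: one slow round-trip
            self.cache[key] = self.master.read(key)
        return self.cache[key]                       # hit: served locally, fast
    def write(self, key, value):
        self.master.write(key, value)                # write-through to the master
        self.cache[key] = value
```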
IT standardized on common tools, reducing complexity and costs and enabling common processes. Addressing the communication issues removed the need for "shadow IT". The engineers no longer needed to spend time managing their IT environment, freeing up 10-20% of their time and improving the productivity of the groups. Standardizing tools and processes helps focus innovation on the customer.
HP announced its quarterly earnings this Wednesday. During the earnings call, Mark Hurd was asked why we are spending less on R&D in 2010 than in 2000, when we were a company half the size. His answer included the following: "...we're in a mode to look for processes that we can standardize. Simple things like testing, QA, how many development tools we've got. All of these have been, because of acquisitions, very random and very unique, and very, if you will, siloed.
So our ability to get standardized on those processes gives us an opportunity to take out cost. And the way we look at R&D is we don't look to Cathie's point, we look at R&D in the context of overhead, maintenance and innovation. What we are trying to do is get the innovation dollars up. So when you look at the total R&D spend, it's down, and yet the yield to the product road map is up. So, when you look at the number of introductions of products that have had meaningful change in share positions, that number is actually up for us. And that's what we want to measure. We want to get more innovative products into the market sooner."
Even if the R&D IT transformation was not the only reason, it surely contributed to achieving the objectives Mark described. What about you? Do you recognize yourself in what I described at the start of this entry? We are happy to share our experience, so don't hesitate to contact us.
In a series of previous posts, I have been addressing the concept of "closing the loop" and how, using data provided by the ecosystem, we could improve operations. The whole issue is getting access to the appropriate data, and here is where things get complex. Two things are top of mind when suppliers are asked to share information with their customers: they want to make sure the information provided will not be used against them during the next procurement negotiation, and that the data does not get into the hands of their competitors.
These are the main reasons why suppliers often share data across a supply chain only reluctantly. In the previous posts related to "closing the loop", I demonstrated how sharing information can benefit all parties. The traditional hub approach poses potential threats by storing all data in a single place. So, the question arises whether newer approaches, such as cloud, could address these concerns.
A new approach, called "community cloud", may address some of these issues. Actually, like many cloud terms, "community cloud" really covers two independent concepts. On the one hand, as described in the article "Digital Ecosystems in the Clouds: Towards Community Cloud Computing", it federates the underutilized resources of user machines to form a cloud, hence avoiding the need to expose the information in a public cloud. On the other hand, as depicted in the NIST definition of cloud computing, it may consist of an infrastructure shared by several organizations and supporting a specific community with shared concerns. It may be managed by the organizations or a third party and may exist on premises or off premises. In other words, it could be described as a private cloud for a community.
This is a first step. It provides us with an infrastructure we can use to share information without having to invest upfront in a hub. However, it does not address the data concerns. We can obviously argue that a community cloud could be safer than the public cloud. The safest place to maintain data, however, is in the company's own environment.
Let's take this idea a step further. Could we execute the collaboration processes in the cloud while maintaining the data under the control of its owners? This is precisely the type of community cloud I suggest using to "close the loop". The approach works very well for the supply chain, but may be more complex in the PD&E (Product Development & Engineering) space due to the size of the information that needs to be transferred.
How would this work? The first time a partner enters the community, they indicate what data they agree to share and where that data can be found: either they provide direct access to tables in their operational databases, or they build a staging database holding the data they agree to share. The cloud then downloads a small, digitally signed agent to their site that links directly with the logic located in the cloud. This agent can trigger the cloud logic or be triggered by it. In both cases, it identifies the information required to execute the process or generate the report. The data is then transferred to the cloud, but kept there only to perform the one task required. Once the task is performed, the data is purged from the cloud systems. If the same data is required a second time, it is fetched again from the partner's servers.
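A sketch of that fetch-use-delete cycle, with all names and interfaces invented for illustration:

```python
# Sketch of the fetch-use-delete cycle described above.
# All names and interfaces are invented for illustration.

class PartnerAgent:
    """The small, digitally signed agent running at the partner's site.
    It exposes only the data the partner has agreed to share."""
    def __init__(self, shared_tables):
        self.shared_tables = shared_tables        # the sharing agreement

    def fetch(self, table):
        if table not in self.shared_tables:
            raise PermissionError(f"{table} is not shared")
        return self.shared_tables[table]          # read from the staging
                                                  # database or exposed tables

def run_cloud_task(agent, table, task):
    """Run one collaboration task in the cloud while the partner
    keeps ownership of the underlying data."""
    data = agent.fetch(table)                     # transferred for this task only
    try:
        return task(data)                         # execute the process/report
    finally:
        data = None                               # purge the cloud's copy; a
                                                  # repeat run fetches it again

# Usage: a partner shares on-hand inventory; the cloud totals it.
agent = PartnerAgent({"inventory": [120, 45, 300]})
print(run_cloud_task(agent, "inventory", sum))    # -> 465
```

The design choice is deliberate: the cloud holds the logic, the partner holds the data, and nothing persists in the cloud between tasks.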
You may argue that such an approach increases data uploads, and that is correct. With today's internet communication facilities, this should not be an issue as long as the amount of data to be transferred is not too large (hence my comment on PD&E).
I have not found a name for such an approach (suggestions are welcome), but I believe that, as long as cloud security concerns are not fully resolved, it is the best approach to take.
Coming out of the current recession, with growing but volatile demand, companies that have worked at making their supply chains leaner should seriously consider such an approach to gain tighter control over their ecosystem and ensure customer service.
For many years now, people have been arguing that PLM applications could benefit the supply chain. As far as I can see, it has not happened (yet?). "PLM-Supply Chain Integration helps manufacturing firms see the big picture," says Alan Earls. So, why is there not more focus on that integration? And why would now be a good time to look into this in more detail?
You would have to have just landed from Mars not to know we went through a tough recession; some will even argue it is not over yet. Companies have used all available cost-cutting recipes to balance revenues and costs, trying to stay out of the red. But how many have focused on improving their product margins by improving designs for cheaper manufacturing and service? I have not heard many discussing such efforts. The benefits, however, can be great.
To understand the relative cost of each step in manufacturing, one may want to go back to Activity Based Costing (ABC) techniques. The objective is to identify the specific cost of each product feature and then work out how to reduce cost without harming the features of the product. You may remember that, a number of months ago, HP won the Franz Edelman Award, among other things for its complexity ROI calculator. Although that tool looks at whether an additional platform or feature should be developed (in other words, whether the additional cost will be offset by additional revenues, increasing overall profit), a similar approach can be used to identify what is most costly in the manufacturing of a new product, and whether that cost can be reduced.
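A toy illustration of the ABC idea, with all figures invented: cost each feature by the activities it drives, and the candidates for redesign fall out of the numbers.

```python
# Toy activity-based costing example; all figures are invented.
activity_rates = {"stamping": 1.50, "assembly": 0.80, "testing": 2.00}  # $/step

# How many steps of each activity each product feature drives.
features = {
    "base_unit":   {"stamping": 6, "assembly": 10, "testing": 2},
    "chrome_trim": {"stamping": 3, "assembly": 2,  "testing": 0},
    "self_test":   {"stamping": 0, "assembly": 1,  "testing": 4},
}

for feature, steps in features.items():
    cost = sum(activity_rates[a] * n for a, n in steps.items())
    print(f"{feature}: ${cost:.2f} per unit")
# A redesign removing one stamping step from chrome_trim saves $1.50
# per unit -- exactly the material a DfM conversation needs.
```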
This, however, requires close collaboration between product development and supply chain, and here is where the issues often start. Product development engineers are typically creative and like to develop something new and exciting. Supply chain people are all about operational excellence and simplifying how things are done. So, right from the start we have different characters who may not work well together. On top of that, we need reliable information on what the costs could be, making things even more difficult.
To get the integration going and benefit from the results, one of two things needs to happen: either the company is in a desperate situation, needing to reduce the cost (and price) of its products to stay in business (the "burning platform" syndrome), or a strong top-management push is in place. If both are there, it's obviously even better.
A couple of years ago, I met with the engineering department of an automotive OEM. I asked them whether, when defining the stamping process for a hood, for example, they could go back to the design engineers and point out that a small change in the design could dramatically reduce the number of steps. They looked at me as if I came from another planet.
Looked at from an IT perspective, there are great similarities between the purposes of PLM and ERP systems: the objective of both is to provide a community with the most up-to-date information and keep it abreast of changes. However, the nature of the data is different. Where in PLM it's all about features, functions, specifications and engineering changes, in ERP it's about part numbers, orders, quantities, schedules and so on. But as mentioned above, both need to interact closely to minimize product costs. This requires the integration of two worlds, and that, in my mind, is why it has not been addressed yet in many companies.
The benefits range from the ability to leverage volume pricing to faster production ramp-up. These are no small benefits, yet they are often overlooked or only partially addressed. In the current search for cost reduction, this may be a good time to address the issue and take full advantage of the potential savings. So why wait?