In this article the author discusses the human side of Dr. Taguchi’s Robust Engineering (RE) and shares his lessons learned about the success and failure factors for implementing RE in companies. Hints are given for avoiding pitfalls and increasing the success ratio, both in the engineers’ activities and management’s role in RE implementation.
Over 23 years of actively studying and applying dozens of tools for problem solving and quality improvement, the author can very confidently state that Taguchi’s RE is among the top two (the other being Altshuller’s TRIZ). Some years ago GM Corporate conducted an appraisal of 39 quality tools used by engineers in 200 of its plants worldwide, and RE took an outstanding first place [Kawasaki, M., pers. comm.]. RE is so powerful, so practical, and yet so misunderstood and underestimated by so many companies! The author has seen several cases where stunning improvements were made and substantial savings were reaped from individual RE applications, yet somehow top management did not embrace the challenge of company-wide RE implementation. Why on earth should a method that can prevent even unknown future problems, reduce cost, increase quality, shorten time-to-market and boost customer satisfaction not receive top priority? The root causes behind that paradox are cultural, not technical, and very hard to understand. While the author humbly realizes that he still does not hold the key to that enigma, he believes that some of the clues given here may increase the chance of success for those who are already committed to RE. The following sections discuss some of the managerial and human aspects of RE implementation. The intent is not to be exhaustive, but rather to point out the main lessons the author has learned. So:
3. If you are an executive or manager...
3.1 Understand the uniqueness of RE
- (a) Have you ever heard of a problem-solving method that can solve a problem without attacking its causes, so that you don’t need to spend money controlling them?
- (b) Do you know of any problem-solving method that can eliminate unknown problems of new technologies and products, well before they happen for the first time?
That’s Taguchi’s unique Robust Engineering! But in order to understand the above statements, we need a little conceptual discussion. OK, you are an executive, and executives are not supposed to deal with technical stuff. But please bear with me, for the good of your company.
With respect to (a) above, Dr. Taguchi had the cornerstone idea of using cost of change to distinguish noise factors from control factors. Noise factors are those variables that we know affect the product function, but are either costly or impossible to change. Control factors are those design parameters that we can change without major impact on unit manufacturing cost. Noise factors abound in the manufacturing and use environments and are known causes of many problems in product performance. Traditionally, we spend a lot of money trying to control or eliminate those causes by adding countermeasure “features” to the product design, plus more inspection and testing during production. Radically different, Taguchi’s Parameter Design approach is to constrain the solution to the control factors only and search for the robust condition, i.e., the particular combination of control factor settings that is least sensitive to the damage induced by the noise factors. That way, most problems can be solved without directly attacking the causes (noise factors) and without spending money!
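To make the parameter-design idea concrete, here is a minimal sketch in Python. The response model, factor names and levels below are invented for illustration only (they do not come from any real product); the point is the search logic: evaluate every control-factor combination under deliberately introduced noise levels, and keep the combination whose output spreads least.

```python
import itertools
import statistics

# Toy response model (an assumption for illustration, not real product physics).
# y depends on two cheap-to-change control factors (a, b) and one noise factor n.
def response(a, b, n):
    # The a*n interaction means sensitivity to noise depends on the setting of a.
    return 10 + b + (2.0 - a) * n

control_a = [0.5, 1.0, 2.0]        # design parameters we are free to choose
control_b = [0.0, 1.0, 2.0]
noise_levels = [-1.0, 0.0, 1.0]    # costly or impossible to control in the field

best = None
for a, b in itertools.product(control_a, control_b):
    ys = [response(a, b, n) for n in noise_levels]
    spread = statistics.pstdev(ys)  # sensitivity to noise at this design point
    if best is None or spread < best[0]:
        best = (spread, a, b)

spread, a, b = best
print(f"most robust setting: a={a}, b={b}, spread={spread:.3f}")
# -> most robust setting: a=2.0, b=0.0, spread=0.000
```

Note that the noise factor is never controlled, only swept through during the evaluation; at a=2.0 its influence cancels out entirely, which is exactly the “solve the problem without attacking its cause” effect described above.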
As to the intriguing statement (b) above, Dr. Taguchi realized that most performance problems of a product/process (which we’ll call an “engineering system”) are nothing more than symptoms resulting from variation in the performance of the system’s basic function. Let’s call this functional variability. So problems are symptoms of functional variability. For illustration, let’s consider the classical example of an automotive brake system. The system’s function is to produce a friction torque to stop the wheel. Occasionally, vibration and audible noise appear as brake-system problems. If most product problems really are symptoms of functional variability, then vibration and audible noise are symptoms of friction torque variation. The connection may be hard to see at this point. But Dr. Taguchi carried his thought process much farther. From the fundamental physical principle that no work can be done without energy, he concluded that every engineering system must receive and transform energy in order to accomplish its function. And in such a process of energy transformation, physics also guarantees that there are only two possibilities for the energy handled by the system: it is either utilized for the system’s useful function, or lost and left available in the system’s use environment. And that is where the problems come from, because problems require energy too. From this interesting viewpoint, a problem is nothing more than a harmful function being “performed” by the energy that was not utilized to perform the system’s useful function! Now we can better understand the connection between brake vibration and variation in the friction torque output. Whenever the several energy transformations performed by the brake system do not produce the end result of “stopping energy”, lost energy is available to manifest itself in all sorts of symptoms: vibration, audible noise, wear etc.
A vibrating brake is less “torque-efficient” because energy was “stolen” from torque output and redirected to vibrate the whole vehicle (yes, that requires a lot of energy!). So we get to the general conclusion, valid for every engineering system created by humankind, that: the system’s energy transformation efficiency while performing its useful function is what determines if that system will have more or less problem symptoms to disturb its customers. Now, the degree of efficiency in energy transformation (useful energy over lost energy) is a direct function of the adjustment or specification defined for the control factors during the system’s design stage. Therefore, if we adjust our design parameters in such a way that maximum energy is driven to perform the useful action, we simply leave no energy available to be “used” for problem generation later on at the production or use stages. And that is true even for unknown, unsuspected problems that were never experienced before! That way we can very efficiently get rid of expensive firefighting activities, without even knowing about it. Tell me about prevention! In short, feed your product’s useful function with all the available energy, and let your problems starve to death!
3.2 Let the fire burn and focus on true prevention
Over and over, when asked what percentage of their time they spend fighting problems, engineers reply around 70%, on average. Some say 100%, because they belong to departments specifically created to fight downstream the problems previously designed upstream in the development cycle. Others might even say 120%, counting overtime. That is an enormous waste of resources, energy, and health. Somebody must have the guts to stop this organizational bleeding, and that is top management’s responsibility. Many people confuse defect reduction with quality improvement. Defect or failure reduction is just cost avoidance, not quality improvement. Improvement is what you do after you have gotten rid of nonconformances. The biggest opportunities for cost reduction come from reducing variability well beyond (“six sigma” and up) the tolerance limits. Some might think that this is a “new” Six Sigma concept. But the truth is that Dr. Taguchi has been saying this for decades, by means of the quality loss function concept. We must go beyond “zero defects”. It is not enough just to meet specifications. But in most cases, such a state of high performance simply cannot be achieved through the late, time-pressed, cost-limited and technically constrained quick (and dirty) fixes so typical of firefighting solutions. A high-powered voice must resound from the boardroom and say: “Enough with the production of fires! What is gone is gone; let those fires burn. But from now on, let’s design robust products, period!”
3.3 Give it a try, even if it is not your Corporate drive
I have seen companies lose the excellent improvement opportunities (quality and cost) made possible by RE, just because someone raised theoretical reservations against Taguchi Methods (TM). I know of at least one company whose engineers were enthusiastically applying RE in their plant (in one of the applications they saved 3 million dollars in annual warranty cost), but all of a sudden they were prohibited from continuing because their Corporate Quality Department had started Six Sigma with a consulting company that curses TM. I’ve learned that if those engineers now want to optimize their processes using TM, they must not let Quality know about it! Are we back to the medieval Inquisition? Engineers should be free to choose their best tools. So please challenge anyone who condemns Taguchi Methods from the viewpoint of traditional statistics. Ask that person three basic questions:
- How many experiments have you performed with the specific purpose of maximizing the robustness of a product/process against noise factors, using the Signal-to-Noise ratio as the single response?
- How many times in your experiments have you forced the experimental error by deliberately introducing the strongest noise factors into the experimental observations?
- How many times have you simultaneously studied, in a single experiment, 7 or more variables at three levels each?
Accept no “but ...” as a response for the above questions. Have ears for numbers only. If you do not hear high counts, then you are facing just emotional, philosophical resistance to Taguchi Methods by someone who does not know what RE is all about.
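For readers unfamiliar with the first question, here is a small sketch of the nominal-the-best Signal-to-Noise ratio, 10 * log10(mean^2 / variance), one of the standard Taguchi S/N forms. The torque readings below are hypothetical numbers invented for illustration:

```python
import math

def sn_nominal_the_best(values):
    """Taguchi nominal-the-best Signal-to-Noise ratio, in dB:
    10*log10(mean^2 / variance). Higher means more robust
    (less spread relative to the mean)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 10.0 * math.log10(mean ** 2 / var)

# Hypothetical torque readings for two control-factor combinations,
# each measured under deliberately extreme noise conditions.
run_a = [98.0, 101.0, 100.0, 99.0]   # tight around 100 -> robust
run_b = [90.0, 110.0, 105.0, 95.0]   # same mean, much more spread

print(round(sn_nominal_the_best(run_a), 1))   # 37.7
print(round(sn_nominal_the_best(run_b), 1))   # 20.8
```

Both runs hit the same average torque; the S/N ratio is what separates them, which is exactly why maximizing S/N, rather than just centering the mean, is the point of the first question above.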
3.4 Don’t think your blessing is enough for things to happen
That is not true just for implementing RE, but for any organizational change. When I was a young engineer I used to think that one of the advantages of RE is that you don’t need big cultural changes for it to succeed – you just go ahead and try it. True, in part. The fact is that you must go well beyond the first try for really incorporating RE into the engineer’s daily work. Implementing RE is not different from any other change process.
Some executives I’ve met seem to believe that their initial green light, plus a few occasional “well done” remarks during the presentation of this or that case study, are all that is necessary for RE to happen in their companies. The truth is that RE challenges the roots of traditional engineering culture (yes, this thing is quite real). All of a sudden, there comes an outside consultant who has no experience with our business and dares to tell us that for decades we’ve been measuring the wrong things about our product, who proposes some weird measurements that violate our test procedures, who says that measuring and reducing functional variability is more important than solving problems, that zero defects is not enough, that even a product within the tolerance limits may cause losses, and all sorts of strange thinking. And as if that were not enough, the guy now asks us to build 18 expensive prototypes and wants us to test them for months under two different test conditions. With the exception of a few rare open-minded people, most managers and engineers will resist such innovative ideas, and most of the resistance will be very quiet, behind the scenes. Besides the initial blessing to implement RE, overcoming such cultural resistance requires just three more things from top management: heart, mind and agenda.
3.5 Set up and commit to a comprehensive implementation strategy
Don’t think everything will happen because of that 4-day training course you sent your engineers to. Without an effective implementation strategy, RE will only produce sporadic successes now and then, for as long as the few converted believers keep their intrinsic motivation alive, overcoming barriers on their own merit until their supply of energy or patience runs out. But if you want to see RE flourish and reap rich and plentiful company-wide fruits, an implementation strategy is needed. The author recommends a three-phase approach; each phase is necessary, and skipping any of them will make the implementation process perish:
- Introduction. Here the idea is to demonstrate RE’s power in your own business. Select some hot technical issues as pilot themes for RE application. Make sure to include both chronic problems and new strategic technologies. Train the people involved (managers and engineers) and provide good consulting support throughout the case studies. I emphasize that consulting support is critical in this phase. Many ill-conceived RE implementations die here, because engineers without enough RE experience try to run experiments by themselves. They may easily make a series of wrong decisions concerning the response to be measured, control factors and levels, noise factor strategy, data analysis etc. Then, after the experiment proves unsuccessful, they blame Taguchi’s RE method!
- Dissemination. Here the purpose is to create an “RE movement” in the company. Now that in Phase I you have “validated” RE in your own reality and have some real data to start beating skepticism with, it is time to structure leadership and spread the gospel. Name an RE Sponsor from the top management team and his or her technical right arm, the RE Champion. Define implementation indicators and targets. Plan and allocate resources (training, people, time, equipment and test samples, internal promotion etc.). Define recognition and reward practices for teams and individuals. Plan and conduct systematic reviews by top management. Publicize results. And... well, you’ve done that before for other quality tools.
- Consolidation. Don’t overlook the need to standardize the use of RE in your technology and product development process. Require that new technologies and designs demonstrate their robustness. Demand formal planning of RE activities right from the beginning of the development process. Allocate resources and schedule time slots for the experiments planned. Also, during this phase, form a few internal Certified Taguchi Experts who should work full time with RE.
3.6 Involve your best suppliers in the implementation process
RE cannot be fully implemented without integrating suppliers into it. In many cases, your product optimization depends on the optimization of suppliers’ processes. In other situations, only the supplier has the specific know-how needed for a successful experiment. So make sure that your RE implementation initiative includes your key strategic suppliers.
3.7 Form and allocate full time specialists to lead RE studies
One of the success factors of Six Sigma “programs” is the full-time Black Belts specifically trained for, and allocated full time to, improvement projects. RE implementers should do the same. Although the participation of product/process engineers is key to RE studies, mainly during the initial experiment formulation, let’s not burden them with the other specific technical and organizational activities necessary for conducting the experiments. One full-time, certified RE expert can support up to 100 case studies each year. That’s more than many big companies do.
3.8 Identify and displace the cynics
After all is said and done, there still might be people in the organization who, by some obscure psychological phenomenon, will just not buy RE. Those are the ones who pretend to agree, but in truth silently disagree and will not lift a finger to make it happen. If such individuals are in a position to block the road, perhaps the only solution is to move them aside.
4. If you are an engineer...
4.1 Think more like an engineer and less like a scientist
It is amazing to see how much of an engineer’s time is wasted trying to understand the nature of a product’s behavior. Much testing and many measurements are carried out under different operating conditions, and lots of plots and data are generated. During those tests, great care is taken so that no strange variable (the environment, for example) interferes with the noble quest for truth. Impressive reports are issued, with the product’s behavior fully characterized and shown to meet the requirements. But somehow, later in large-scale manufacturing and use environments, the observed product performance severely departs from that report’s findings, and firefighting starts. Sounds familiar? Why is it so? What is wrong with those testing practices?
The problem is: that’s not the engineer’s job. Just as technology is different from science, the engineer’s and the scientist’s roles are different. The scientist or researcher gets paid to discover or understand things in nature. The engineer gets paid to create or improve things that do not exist in nature. The scientist tries to discover the general laws behind a phenomenon and to build a model that can predict its average behavior. The engineer simultaneously uses several models developed by scientists in order to impose a desired behavior on an engineering system.
In the world of science, the effect of noise factors blinds the scientist’s ability to detect weak phenomenon effects, and therefore the experiment must be protected from such spurious variation. But in the engineering world, the effect of noise factors is precisely what the product must be made robust against, and therefore noise variation must not only be anticipated but also deliberately introduced into the experiment. That helps to realize why randomization is an essential technique in scientific investigation, but not necessary and even counterproductive in optimization for product robustness.
Variability, improvement, unit manufacturing cost and schedule are not the major concerns for scientists that they are for engineers. And so on. But we are not usually told about that in our schools. Instead, we are educated to believe that if we know the model (theory), then we can build the best product. Later on, in industry, we realize that those scientific models are just a starting point; that reality is terribly more complex; that the average is not the only concern, that variation reduction is even more important, and that promoting drastic variability reduction is a very, very hard task. For decades now, in the past century, by the end of the last millennium, Dr. Taguchi developed and made available a complete system (RE) that can greatly improve the efficiency of engineers’ technical activities in the engineering world. But here we are, in 2001, still making efforts so that more people understand RE and use it. Amazing! Tell me about cultural resistance...
4.2 Realize that there is no such thing as a “root cause” for most complex engineering problems
We are taught that to solve a problem we must search for its root cause and kill it. That practice works fine for many problems, indeed. A little thought from an experienced person or team, an Ishikawa diagram built and – aha – there’s the bad guy. But when we face our most complex engineering problems, that same practice fails miserably and becomes the reason why such problems happen again and again, until they are accepted as a fact of life, leaving us with chronic losses to pay forever. I’ll explain. What makes a problem complex? The high (more likely huge) number of variables involved. It is therefore a naive exercise to try to isolate and act on just one of them, even if you change several of them one at a time. The toughest problems we deal with in engineering are not the result of a single cause, but more likely the sum of many different effects, many of them individually weak. In such cases, the solution usually depends on simultaneously experimenting with, identifying and combining those subtle effects in such a way that the problem vanishes. If you don’t have a structured tool to do that (and that’s RE), you’re in for disappointments.
4.3 Don’t think you’re that good
There is something called the “design space”. That’s the universe of all possible combinations of the critical design parameters in your engineering system. Suppose you’re starting a new design, and let’s assume that you have just 10 such critical parameters to cope with (a very optimistic assumption, for even the simplest systems may have many more critical variables than that). Now, let’s consider that you have just three options for specifying the nominal value of each design parameter (again an optimistic assumption, because in real life you’ll likely have more options than that: more than three nominal dimensions for a thickness, more than three temperature settings for an oven, more than three material types, spring forces, capacitance values etc.). With those 10 design parameters (control factors) and just three different conditions (levels) for each, the number of possible combinations is 3^10 = 59,049. That’s your design space size! Hidden among them is the robust combination of the control factors: the one that assures consistent performance of the system’s function, no matter what the noise factors are, leaving no energy for causing problems (your dream!). Now, you’re an extremely knowledgeable and experienced engineer, and besides, you have a previous design specification to take as a starting point. So out of those 59,049 combinations you intuitively screen off, say, 39,049 that technically do not make sense. Excellent job. Now you are left with “just” 20,000 possibilities, all of them reasonable, from which to choose the optimum design! Yes, out of them it is likely that you’ll be able to find at least one combination that will meet the requirements, after the corrections of a few “design-prototype-test” improvement cycles. But we are talking about optimum performance against the noise factors, not about acceptable performance against specification requirements – two quite different things!
Let’s acknowledge that, no matter how technically good you are, the traditional trial-and-error method gives you a chance of just 1 in 20,000 of hitting the optimum (robust) combination of the control factors. Most engineers are not aware of the huge size of the design space. Moreover, they are overconfident about their technical superpowers. And thus they keep using trial-and-error or one-factor-at-a-time strategies in their design activities. So next time, don’t think you’re that good. Be humble and use RE’s techniques to guide you in your expedition through the design space, in search of the robust design condition. I can assure you that it is much easier, faster, safer and more fun.
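The arithmetic above can be checked in a few lines (the 10 factors, 3 levels and the 39,049 screened-off combinations are the assumptions stated in the text):

```python
# Design-space size for k control factors at m levels each is m**k.
factors, levels = 10, 3
design_space = levels ** factors
print(design_space)          # 59049

# Assume an expert screens off 39,049 combinations that make no technical sense.
remaining = design_space - 39049
print(remaining)             # 20000

# Chance of trial-and-error landing on the single robust combination:
print(f"1 in {remaining}")   # 1 in 20000

# For comparison, Taguchi's L18 orthogonal array samples a space of up to
# seven 3-level factors (plus one 2-level factor) with only 18 runs.
```

The exponential growth is the whole point: add just two more 3-level factors and the design space grows ninefold, while an orthogonal-array experiment barely grows at all.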
5. No matter whether you are a manager or an engineer...
5.1 Beware of the link between individual short-term technical decisions and long-term bottom-line results
In this long section let me make a solemn warning against the hidden evil in two engineering practices for problem solving:
- (a) the “educated guess”, or “shotgun approach” (SGA), and
- (b) the “one-factor-at-a-time” (OFAT) or, plainly, “trial-and-error” method.
In (a), the system expert learns about the problem symptoms and, based on his/her theoretical and practical knowledge, readily recommends: “do this” (and there goes the one bullet). In (b), if the expert feels that the situation is more complex, he/she decides to explore a few design options by changing just one factor in each experimental run and picking the winner. For easier reference we’ll call (a) and (b) above the “Traditional Experimental Approach” (TEA). If you were missing an equation in this article, let’s define one now: SGA + OFAT = TEA. Having defined it, let’s make some statements about TEA:
- TEA is deceptive. It is so “natural”, so intuitive, so common-sense that, without even noticing, there you are sipping some of it. But in truth it is a wolf in sheep’s clothing.
- TEA is addictive. Since no one uses TEA unless facing a problem, in the long term TEA very subtly turns its users into very reactive problem fighters. TEA-addicted people do not move their butts unless a fire burns behind them. No defects, no need to improve; it is good enough. In the terminal state, TEA-addicted engineers may be fully convinced that their main role in the organization is to resolve problems. If your gut reaction to this last phrase was “and what else could it be?”, urgently start a TEA detoxication program (I suggest some RE shots to start).
- TEA is very, very ineffective. It is targeted just to meet tolerance requirements, not the ideal nominal value of the characteristic under study. Because of that, it does not lead to the robust condition of the control factors (the one with minimum variability around the target value). Since a non-robust condition is failure-prone, the problem (or some other variant of it) will likely reoccur, giving rise to much firefighting later.
- In the long run, TEA is very, very expensive. Firefighting costs a lot. Since each single TEA-induced decision can cause much firefighting later on and several TEA-decisions are taken daily by TEA-addicted people, we can conclude that TEA burdens the organizations with a huge cost. However, that cost is hidden, because it is very difficult to associate each of those several individual TEA-decisions taken in the past with the many big problems of the present.
- In the long run, TEA is very, very time-consuming. Time is money. The same said above about cost can be said about time. While TEA-induced decisions are quick, the firefighting caused by them takes a lot of precious time.
A big problem, though, is that people are seldom aware of the above facts. Somewhere in the Industrial Age, most managers and engineers lost the ability to think in systems. TEA’s side effects are hard to appreciate, since the individual TEA decisions and the later firefights are so far apart in time that they look like unrelated events. Only people with a systemic viewpoint can recognize the link between them. That’s why we’ll engage in a systems-thinking exercise right now.
Please follow me along as we derive some very bad side effects when managers and engineers use TEA. The following text summarizes much of what’s been said, and Figures 1-A to 1-D (a Current Reality Tree) graphically depict the cause-and-effect connections. In the text, the numbers between parentheses refer to the statement number in Figure 1. We’ll start from the apparently inoffensive root causes for the whole situation, located at the bottom of Figure 1 and, by using “if-then” clauses, we’ll cruise through a number of cause-effect layers, till we get to some very bad undesirable effects:
On one side, if (102) TEA is very intuitive and “natural”, and (103) most managers have built their careers using TEA, and (104) managers and engineers are not aware of the design space’s huge size, and (105) people believe that their current knowledge is enough to find the best solution, then all these facts add up to produce (107) “TEA is the standard, accepted way for design solutions”. On the other side, if (101) “top managers don’t know about the unique aspects of RE” is added to (107) “TEA is the standard, accepted way for design solutions”, then (108) “top managers don’t see the need to fully understand RE’s principle and philosophy”; and if (110) “top managers are too busy with bottom-line results and long-range planning”, and (109) “the link between RE application and bottom-line results is hard to spot, even if now and then a successful RE case study is presented”, then (111) “top management is not committed to the widespread and formal use of RE to prevent downstream problems”, which explains why (112) “people and other resources are not specifically allocated to RE activities” and also why (113) “RE is not required in technology/product planning”. Now we continue the Current Reality Tree with Figure 1-B.
From my experience, an RE experiment requires, on average, 16 hours of team discussion, including experiment formulation and data analysis. To that time one should add other activities such as ordering/preparing test samples, arranging for test and measurement facilities, scheduling meetings, preparing meeting notes, documenting the study, and preparing status reviews and presentations for managers. This adds considerable time on top of those 16 hours of team discussion. Now, the contribution of the Product Engineer (PE) is indispensable only during those 16 hours, in case he/she is the only person on the team with expert knowledge of the technology or product under study. But if (112) “people and other resources are not specifically allocated to RE activities”, then all the burden of those additional tasks goes to the PE. Thus one can see from Figure 1-B that (112) leads, on one side, to (206) “experiment cannot be run without PE’s technical knowledge” and, on the other side, to (207) “RE experiment cannot be run without spending significant PE time”, and (206) added to (207) leads to (211) “RE experiment cannot be run without PE’s active involvement”. Now, getting back to (113) “RE is not required in technology/product planning”: then (208) “RE activities are not formally included in product development schedule/cost, and thus are seen as extra work”, and then (212) “middle management does not see RE experiments as top priority for PEs”. Now tell me what happens if (209) “PE’s workload is high” (much of it due to firefighting), and (210) “under high workload PE must work on top-priority items only”, and (211) “RE experiment cannot be run without PE’s active involvement”, and (212) “middle management does not see RE experiments as top priority for PEs”? Obviously, the result is (213) “many planned RE experiments are postponed by PEs”.
Now to Figure 1-C. People are so used to TEA’s knee-jerk “do this” reaction (the shotgun approach) that they will “naturally” resist RE’s thorough approach: if (107) “TEA is the standard, accepted way for design solutions”, and (301) “RE experiments take longer than a single, short-term TEA decision”, and (302) “RE experiments demand more prototypes and resources than a single, short-term TEA decision”, then (303) “engineers think that the RE method is too costly and time-consuming” (because they don’t see – and it is hard to do so – the connection between each single, quick TEA decision they make every day and the huge amount of time and resources it may cost much later, in apparently unrelated problem events). And because of (303), we then have (305) “proposals for RE studies are rejected at first hand by PEs” (but since it is not politically correct to do that openly, the typical rejection is silent, and comes disguised with all sorts of excuses and very, very slow responses).
Now, if (208) “RE activities are not formally included in product development schedule/cost, and thus are seen as ‘extra work’”, then (304) “management easily accepts that RE experiments are cancelled or postponed”. This, combined on one side with (213) “many RE experiments are postponed by PEs” and on the other side with (305) “proposals for RE studies are rejected at first hand by PEs”, explains why (306) “RE experiments are not conducted or concluded”. And if we have (306), then (307) “product is not evaluated against noise factors and the robust condition is not identified”; and if (308) “the robust condition is the one that maximizes energy efficiency” and (309) “the robust condition is too hard to find by means other than RE”, then (310) “product’s use of available energy is most likely inefficient”.
And we finally get to Figure 1-D. If (310) “product’s use of available energy is most likely inefficient”, then (402) “lost energy is available to trigger harmful effects”; and if (403) “most product problems are caused by lost energy available in the use environment”, then this combination leads both to (405) “known problems are not effectively solved” and (406) “new, unknown problems are not prevented”, which add up to (408) “targets are not met for product quality indicators” and then to some very noxious end effects: (412) “customer satisfaction is reduced”, (413) “warranty cost increases” and (414) “chance of recalls increases”. Now, since we have (405) and (406), and (404) “development/validation is targeted to detect problems”, then (407) “problems ‘appear’ (who knows where from?...) later in the development/validation cycle”, and then another undesirable effect that makes many lives miserable: (411) “much time and resources are spent on firefighting”, which
ends the bad picture with (415) “development/ validation schedule is delayed” and (416) “development/ validation cost increases”.
Moral of the story: behind, far behind, those six very undesirable effects, one can now appreciate at least two hidden, insidious, most important and actionable root causes: (101) “top managers don’t know about the unique aspects of RE” and (107) “TEA is the standard, accepted way for design solutions”. To get an idea of how deeply hidden those root causes are, in the Current Reality Tree one can count eighteen cause-and-effect layers separating them from the final undesirable effects! And those are essentially cultural or “political” issues, not just technical ones. Perhaps that’s why we RE promoters, so used and dedicated to the technical side of RE, presenting case studies and individual benefits of RE applied to this or that product/process, have been doing a terrible job of selling RE to top management. Besides, we have been far too complacent with this mother of inefficiency that is TEA. It is time to show the links from it to those downstream disasters.
Some ideas have been discussed and some practical hints have been given concerning the implementation of Robust Engineering in companies. An implementation strategy consisting of three logical phases was recommended. Besides, we could conclude that the main root causes behind many undesirable effects companies struggle against with respect to development cycle and final product performance are: 1) “Top managers don’t know about the uniqueness of Robust Engineering” and 2) “The traditional engineering approach (“shotgun” plus “one-factor-at-a-time”) is the standard, accepted way for design solutions”. Therefore, if we want to successfully implement Robust Engineering, proper solutions must be developed for eliminating those root causes. Robust Engineering implementation is not robust against them!
The author would like to thank Luiz H. Riedel, Robust Engineering Coordinator at the GM Brasil Proving Ground, for working together on the construction of most of the Current Reality Tree presented in Figure 1. Thanks are also due to my Master, Shin Taguchi, because much of the thinking in this article is not my own, but rather his and Dr. Genichi Taguchi’s.
Kawasaki, M., General Motors do Brasil, personal communication, 1998, Sao Paulo, SP, Brazil.