Continuing the theme of keeping improvement projects on track, CI leaders should be very careful to avoid falling prey to “theory blindness.”
Theory blindness is an expensive pitfall that exacts a huge economic toll on organizations of all types and sizes. In some cases it leads companies to invest in costly solutions that completely miss the real cause. In others, organizations live with costly problems for years because of a shared but erroneous theory about the cause of the problem.
Psychologist Daniel Kahneman (one of the few non-economists to win the Nobel Prize in Economics) describes the phenomenon in his book, Thinking, Fast and Slow.
Drawing on decades of research, he shows that the human brain is wired to apply a number of biases, theory blindness being one of them. Understanding these biases gives us the tools to overcome them.
The most powerful mental bias underlying much of this flawed decision making is what he calls WYSIATI (an acronym for “what you see is all there is”). It occurs because we are inordinately influenced by what we see, and greatly undervalue information we do not have. As a result, paradoxically, the less we know, the more sure we are of our conclusions.
Based on research and many years of experience, we’ve determined the best way to avoid theory blindness is to rigorously adhere to an improvement process, one that includes a comprehensive method for identifying and quantifying root causes and the real waste.
Several previous posts focused on identifying waste or opportunities for improvement. Once this step is completed, and a specific problem is identified as the “best” opportunity, the next step often involves finding the root cause of the problem.
This is a critically important step and, if we’re not careful, we can find ourselves working from the wrong assumptions. In fact, we’ve consistently found that few things are more dangerous than common knowledge – when it is wrong.
Root causes are tricky and elusive things. Brainstorming and the “Five Whys” can be effective tools, but neither approach guarantees the “right” result or conclusion. In fact, when the “wrong” root cause is selected, the most common culprit is an untested conclusion.
The best course of action is to think quite broadly when brainstorming and to consider carefully every possible way that the people, technology, information, materials, environment, or methods might be contributing to the problem.
In addition, when the brainstorming of possibilities is over, we should put on our skeptical hat and test each one – before going to the next “why” to find the root cause. Otherwise, we risk arriving at the wrong conclusion.
Here are five key questions you can use to test whether a possible cause is consistent with the data you already have:
1. Did the proposed cause precede the effect? If not, it is probably not the real cause. If poor call response rate is being blamed on the new answering system, was the call response rate better before the system was installed? If not, the new system cannot be the culprit.

2. Does the data indicate the problem is trending or cyclical? If so, you can probably rule out causes that would produce steady effects. For example, to test the possibility that shipping errors are on the rise due to poor technology, ask whether the technology has changed. If there have been no changes in the technology, any changes in the results must be caused by something else.

3. What other effects would you see if the proposed cause were true? Are you seeing them? If not, look elsewhere for the cause. For example, to test whether ‘poor morale’ is causing a high number of defects, ask where else signs of poor morale would show up. Are you seeing them there?

4. If the proposed cause were not true, could the effect have happened? Could the product weight be dropping if a blockage had not developed in the dispensing line? If the answer is ‘no’, you know you must find the blockage.

5. If the cause had been X, would it always produce this effect? If the answer is ‘yes’, then to test this, you simply need to check whether the supposed cause actually occurred. For example, if my car will not start, a possible cause is that I left the lights on. (I drive one of those old-fashioned cars that require operator involvement to turn off the lights.) If I check and find the lights are in the ‘on’ position, I can confirm my theory. Otherwise, I must keep looking for the cause.
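The five questions above can be treated as an explicit checklist. Here is a minimal sketch in Python (all names here are hypothetical, invented for illustration, not part of any CI toolkit): each field records the answer to one screening question for a candidate cause, and the function returns the checks the candidate fails.

```python
from dataclasses import dataclass

@dataclass
class CauseTest:
    """Answers to the five screening questions for one candidate cause."""
    preceded_effect: bool               # Q1: did the cause appear before the effect?
    pattern_consistent: bool            # Q2: can it explain the observed trend or cycle?
    predicted_side_effects_seen: bool   # Q3: are its other predicted effects visible?
    effect_possible_without_cause: bool # Q4: could the effect occur without this cause?
    cause_confirmed_present: bool       # Q5: was the cause confirmed to have occurred?

def failed_checks(t: CauseTest) -> list[str]:
    """Return descriptions of the screening questions this candidate fails."""
    failures = []
    if not t.preceded_effect:
        failures.append("did not precede the effect")
    if not t.pattern_consistent:
        failures.append("cannot explain the observed trend or cycle")
    if not t.predicted_side_effects_seen:
        failures.append("other predicted effects are not observed")
    if t.effect_possible_without_cause:
        failures.append("effect could occur without this cause, so it is not necessary")
    if not t.cause_confirmed_present:
        failures.append("cause was never confirmed to have occurred")
    return failures

# The car-won't-start example: the lights were found off, so Q5 fails;
# a car can also fail to start for other reasons, so Q4 is flagged too.
lights_left_on = CauseTest(
    preceded_effect=True,
    pattern_consistent=True,
    predicted_side_effects_seen=True,
    effect_possible_without_cause=True,
    cause_confirmed_present=False,
)
print(failed_checks(lights_left_on))
```

An empty list means the candidate survives all five screens; anything else is a reason to keep looking before moving to the next “why”.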