Category Archives: Theory Blindness

WYSIATI?

Continuing the theme of keeping improvement projects on track, continuous improvement (CI) leaders should take care not to fall prey to “theory blindness.”

Theory blindness is an “expensive” pitfall that exacts a heavy economic toll on organizations of all types and sizes. In some cases it leads companies to invest in costly solutions that completely miss the real cause. In others, organizations live with expensive problems for years because of a shared but erroneous theory about the cause of the problem.

Psychologist Daniel Kahneman, one of the few non-economists to be awarded the Nobel Memorial Prize in Economic Sciences, describes the phenomenon in his book, Thinking, Fast and Slow.

Drawing on decades of research, he shows that the human brain is wired to apply a number of biases, theory blindness among them. Understanding these biases gives us the tools to overcome them.

The most powerful mental bias underlying much of this flawed decision making is what he calls WYSIATI, an acronym for “what you see is all there is.” It occurs because we are inordinately influenced by what we see and greatly undervalue information we do not have. As a result, paradoxically, the less we know, the more sure we are of our conclusions.

Based on research and many years of experience, we’ve determined that the best way to avoid theory blindness is to adhere rigorously to an improvement process, one that includes a comprehensive method for identifying and quantifying root causes and the real waste.

What’s your plan for avoiding theory blindness?


Our previous post described the pitfall of “theory blindness” and explained how even well-intentioned people can fall prey to it.

A sure way to avoid this pitfall is to adhere to a defined improvement methodology — one that goes well beyond the common (most often ineffective) two-step approach of:

  1. Someone in a position of authority comes up with an improvement idea
  2. The idea is immediately implemented

Instead, a more rigorous improvement process or plan incorporates a systematic search for new knowledge and understanding, so that the solution we arrive at addresses the root cause of the problem we hope to solve or the process we hope to improve.

Take, for example, the first six steps of the eight-step methodology we apply:

  1. First, we identify and quantify what to work on. After gathering a lot of ideas and opinions about opportunities, we prioritize and then quantify. Quantification helps us in two ways: it helps us set aside pet ideas for improvement (theories) that simply are not supported by the facts, and it helps us proceed with appropriate urgency on the highest-impact opportunities.
  2. Next, we put together a team of people who can study the opportunity for improvement from a variety of perspectives. We include input from both customers and suppliers of the process (internal and, when possible, external), which helps us overcome theory blindness, because people who see the process from different perspectives can help us spot the flaws in our theory.
  3. Third, we gather facts and data about the current situation. This step can be difficult for those who enter the project with a preconceived solution, but when a sufficient number of relevant facts and data are surfaced, they most often serve as effective treatments for theory blindness.
  4. Fourth, we analyze root causes: thinking expansively and systematically about possible causes and then critically examining each possibility.
  5. The fifth step is to implement the improvement, but we’re not finished yet!
  6. Step six is to study the results. Because we started the process with a good baseline measurement, studying the results will either confirm a successful improvement or show that we fell short. We can then complete the final steps and move on to the next project!

The less we know… the more we think we’re right!


Just as continuous improvement teams face hidden perils associated with confirmation bias (see related post), there is another frequently unrecognized pitfall that plagues many a project and merits our constant attention.

“Theory blindness” is a remarkably common condition in which our theory about the way the world works blinds us to the way the world really works.

When afflicted, we readily accept evidence (however meager or flawed) that supports our assumption or theory, and we explain away or simply fail to notice or correctly interpret evidence that contradicts it.

Daniel Kahneman is an Israeli-American psychologist notable for his work on the psychology of judgment and decision-making, as well as behavioral economics, for which he was awarded the 2002 Nobel Memorial Prize in Economic Sciences.

In his book, Thinking, Fast and Slow, he suggests that the human brain is wired to apply a number of biases, theory blindness among them. The impact of theory blindness is that we are inordinately influenced by what we see and greatly undervalue information we do not have. As a result, paradoxically, the less we know, the more sure we are of our conclusions.
It’s just how we are wired.

The Less We Know…

When engaged in improvement projects, it is important to maintain an open mind and a heightened awareness of theory blindness, lest we fall into this trap and “assume” things that just aren’t so.

But beware!

Confidence, it turns out, depends much more on coherence (whether all the information at hand points to the same conclusion) than on completeness. Thus, paradoxically, the less we know, the less likely we are to be right, and the more likely we are to think we are right!