Although it is undeniable that learning from failure is valuable, very few organizations do it well. The reason is not that managers lack commitment to learning from failure in order to improve future performance. In fact, I have found that managers in many different types of organizations put considerable effort into trying to learn from failures. Yet these efforts rarely lead to real change. The problem is that managers think about failure the wrong way.
The majority of executives I’ve spoken with think that failure is bad (naturally!). They also think that learning how to avoid repeating it is relatively simple: have the people involved reflect on what they did wrong, and then encourage them to avoid making similar mistakes in the future. Another option is to assign a team to examine what happened and prepare a report, which is then circulated throughout the organization.
Although it is commonly believed that failure is always bad, that is not the case. Sometimes failure is indeed bad, sometimes it is simply inevitable, and sometimes it is even good. Nor is learning from organizational failures easy: most companies lack the attributes and perspective needed to detect and analyze failures effectively. Organizations therefore need to move beyond obvious or self-serving explanations for failure, which means setting aside old cultural beliefs and preconceived notions of success in favor of the lessons failure has to teach. Leaders can begin by understanding how the blame game gets in the way.
The Blame Game
In many households, organizations, and cultures, admitting failure is seen as taking the blame. This is why so few organizations have shifted to a culture of psychological safety in which the rewards of learning from failure can be fully realized.
Executives from various organizations admit that they don’t know how to respond to failures constructively without creating an anything-goes attitude. If people aren’t blamed for failures, they ask, what will ensure that they do their best work?
The notion that a culture which makes it safe to admit and report failure cannot coexist with high standards for performance rests on a false dichotomy. In reality, such a culture can coexist with high standards, and in some organizational contexts it must.
Deliberate deviance is blameworthy, but inattention might not be. If inattention results from a lack of effort, it may be blameworthy; if it results from fatigue near the end of an overly long shift, the manager who scheduled that shift bears more of the blame. As we move along the spectrum of causes, from deliberate deviance toward thoughtful experimentation, blameworthy acts become harder and harder to find. Indeed, a failure resulting from thoughtful experimentation that generates valuable information may actually be praiseworthy.
Almost all of the executives I’ve spoken to say that the majority of failures in their organizations are treated as blameworthy, even though only a small minority truly are. As a result, many failures go unreported and their lessons are lost.
Not All Failures Are Created Equal
Understanding the different types of failure helps in building an effective strategy for learning from them. Failures fall into three broad categories: preventable, complexity-related, and intelligent.
Preventable failures in predictable operations.
Most failures in this category can rightly be considered “bad.” They usually involve deviations from the closely defined processes of high-volume or routine operations in manufacturing and services. With proper training and support, employees can follow those processes consistently; when they don’t, deviance, inattention, or lack of ability is usually the reason. In such cases the causes can be readily identified and solutions developed. On some production lines, for example, if a problem cannot be fixed in under a minute, workers halt production, even though stopping costs money, until they understand and resolve the issue.
Unavoidable failures in complex systems.
A large share of organizational failures stems from the inherent uncertainty of work. Whether triaging patients in a hospital emergency room, responding to enemy actions on a battlefield, or running a fast-growing start-up, people constantly face unpredictable situations. And in complex operations such as aircraft carriers and nuclear power plants, system failure is an ever-present risk.
Small process failures are an inevitable part of working with complex systems. To avoid consequential failures, it is important to identify and correct small failures rapidly. Most accidents in hospitals are the result of a series of small failures that went unnoticed until they lined up in just the wrong way.
Intelligent failures at the frontier.
Failures at this frontier provide valuable new knowledge that can help an organization leap ahead of the competition and ensure its future growth, which is why the Duke University professor of management Sim Sitkin calls them intelligent failures. By running experiments at a small scale, managers can avoid the unintelligent failure of conducting experiments at a larger scale than necessary.
When IDEO, the product design firm, launched a new innovation-strategy service intended to help clients create new lines of business that would take them in novel strategic directions, it started small: a pilot project with a mattress company, with no public announcement of a new business.
The pilot did not go as hoped, but IDEO learned from the failure and adapted its approach, hiring team members with MBAs and making some of the clients’ managers part of the project teams. Today strategic innovation services account for more than a third of IDEO’s revenues.
Getting an organization to accept such failures, which remain emotionally charged, takes leadership.
Building a Learning Culture
Leaders must establish a culture that counteracts the blame game and makes people feel both comfortable with and responsible for surfacing and learning from failures. A clear understanding of what went wrong (rather than of “who did it”) requires that failures, big and small, be consistently reported and analyzed. Proactively searching for opportunities to experiment is also key.
Detecting Failure
It’s easy to spot big, painful, and expensive failures, but many organizations try to hide any failures that won’t cause immediate or obvious harm. It’s better to surface these failures early, before they grow into disasters.
Consider the management technique Alan Mulally instituted when he became CEO of Ford. He asked his managers to color-code their project status reports: green for good, yellow for caution, red for problems. Mulally believed this would help identify potential problems early. The system was met with some resistance at first but eventually became standard practice.
The story illustrates a pervasive and fundamental problem: although many methods exist for surfacing current and impending failures, they are greatly underused. Total Quality Management and soliciting feedback from customers are well-known ways of revealing issues in routine operations. High-reliability-organization (HRO) practices help prevent catastrophic failures in complex systems such as nuclear power plants through early detection. Electricité de France, which operates 58 nuclear power plants, is exemplary here: it goes beyond regulatory requirements, meticulously tracking each plant for anything even slightly out of the ordinary, investigating immediately, and informing all its other plants of any anomalies.
But such vigilance is not the norm. Many messengers remain reluctant to convey bad news to bosses and colleagues; no one wants to be the skunk at the picnic.
In my own hospital research, I found substantial differences across patient-care units in nurses’ willingness to speak up about errors and other failures. The difference traced to the behavior of midlevel managers: those who responded to failures openly, welcomed questions, and displayed humility and curiosity had staff who were far more willing to report problems. The same pattern appears in a wide range of organizations, not just hospitals.
The solution is to make failure less shameful. Eli Lilly has been doing this since the early 1990s by holding parties to honor intelligent experiments that failed to achieve the desired results. The parties cost little, and redeploying valuable resources, especially scientists, to new projects sooner can save a great deal of money and may lead to new discoveries.
Analyzing Failure
It is essential to go beyond the obvious and superficial reasons for a failure to understand the root causes. This requires the discipline—better yet, the enthusiasm—to use sophisticated analysis to ensure that the right lessons are learned and the right remedies are employed. The job of leaders is to see that their organizations don’t just move on after a failure but stop to dig in and discover the wisdom contained in it.
Why do people so often avoid failure analysis? Because examining our failures is emotionally unpleasant and can chip away at our self-esteem. Analyzing organizational failures also requires inquiry and openness, patience, and a tolerance for causal ambiguity, qualities that few managers are rewarded for. That is why cultures that encourage reflection matter.
Even though we may not mean to, we all tend to gravitate towards evidence that supports our existing beliefs rather than alternative explanations. We also have a tendency to downplay our responsibility and place undue blame on external or situational factors when we fail, only to do the reverse when assessing the failures of others—a psychological trap known as fundamental attribution error.
Research has shown that failure analysis is often limited and ineffective—even in complex organizations like hospitals, where human lives are at stake. Few hospitals systematically analyze medical errors or process flaws in order to capture failure’s lessons. Recent research in North Carolina hospitals, published in November 2010 in the New England Journal of Medicine, found that despite a dozen years of heightened awareness that medical errors result in thousands of deaths each year, hospitals have not become safer.
Some organizations nonetheless show that such learning is possible. One is Intermountain Healthcare, a system of 23 hospitals serving Utah and southeastern Idaho, which analyzes physicians’ deviations from medical protocols for opportunities to improve those protocols. Allowing deviations and sharing the data on whether they actually produce better outcomes encourages physicians to buy into the program.
It can be very difficult to get people to look beyond the first-order reasons for a problem (procedures weren’t followed, for example) to the deeper, underlying causes. One way is to form interdisciplinary teams whose members bring different skills and perspectives. Complex failures are often the result of multiple events that occurred in different departments or disciplines, or at different levels of the organization. Fully understanding what happened, and preventing it from happening again, requires team members who can communicate and work together to analyze the problem thoroughly.