Keith and Sue and I were teaching a group of leaders about safe-to-fail experiments a couple of weeks ago, and we learned a lot about how hard it is to teach (and learn) about them. I’ll muse here a little about why that is and what I think we can experiment with doing about it.
We have found that the pull toward old habits is stronger than we anticipated. We know how hard it is to change habits in general, but we didn’t anticipate how readily the creation of a safe-to-fail experiment would slide back into people’s familiar problem-solving habits. In our work with leaders in many different settings, we’ve begun to notice the difference between the habits people bring to solving a problem and the creation of a suite of safe-to-fail experiments.
One groove in our reasoning is the search for the heart of the matter. That search is incredibly helpful for fairly predictable problems: find the source and solve it from there. But the complex world is different. In a complex situation, there is no point in hunting for the root cause or the single most important leverage point. In the first place, finding such things is impossible: these situations are both interconnected and dynamic, which means you can’t untangle the knot of the many things related to the issue, and even if you could, the situation would have changed by the time you finished your analysis.
Secondly, even if it weren’t impossible, it would be a bad idea anyway: the center of the issue is generally so bound up with other organizational issues and policies that it is nearly impossible to change. The rule with safe-to-fail experiments is to look at the edges of the issue rather than at its heart; complexity theory tells us that heading straight for the heart of a problem is likely to deepen the basin of attraction that keeps the issue stuck. At the edges, things are more vulnerable and more likely to shift.
Thirdly, we have noticed a remarkable pattern when people are searching for the heart of the matter: they pull out their favorite solution and attach it to whatever problem is on the table. In grad school years ago, the brilliant Susan Moore Johnson asked each of us to write down our favorite solution to vague educational ailments. We all wrote down something (generally our dissertation topics) and she asked us to read them aloud. There were a dozen of us in the seminar, and no solution appeared twice. Then she read out a list of education challenges and asked us what we thought the most important leverage point for each one was. We quickly noticed that every challenge, even very diverse ones at very different scales, seemed to each of us to be a good place for our pet solution. I’ve never forgotten that, and now I see it all the time in organizations. When people don’t like the remuneration system, they attach it to all ills. People development problems? The remuneration system doesn’t reward people development. Innovation problems? The remuneration system supports sure bets. Divisions competing rather than cooperating? The remuneration system again. Now replace “remuneration system” with “too much bureaucracy” or “too little accountability” or “bad communication” or whatever your personal objection is.
Safe-to-fail experiments can’t rely on the well-worn grooves in your reasoning; if these complex challenges could be solved using conventional methods, they’d be fixed already. Instead, we have to force ourselves into new ways of thinking about old problems. This is the promise, and the peril, of complexity.