The experimentation experiment
There are many writers on leadership and innovation (including Garvey Berger and Johnston!) who point to the critical importance of experimentation in acting and leading in the face of complexity. The argument we, and others, make is that because in complex situations we cannot predict outcomes and control the effects of our actions (achieving only the good things we want and none of the surprising and perverse results), we have to experiment to lead change well.
Having accepted that this may be the only way to go, we get to the exciting and hard part: how to do it well.
A number of factors make this hard; others make it exciting. Some of the challenging factors involve going against the ways we have been acculturated as leaders: to see more than others, to predict more, and then to control the actions of our group to deliver on the strategy. Most of us work in situations where the accountability mechanisms expect prediction and control – whether we are public servants accounting to our elected representatives or business executives accounting to the board and/or shareholders.
All of this makes experimentation risky. Or at least failed experiments are feared to be risky. How will we explain this to the media or the board or our Minister? And yet these are the really useful ones, because failure teaches us much more than success does. As a consequence, when we ask leaders to vote for the best suggested experiments, they tend to favour those they think will work best rather than those more likely to teach us something new.
Experimentation is also exciting. Innovation can be thrilling at best and obsessively engaging at worst. Learning stretches teams and organisations and can drive highly productive changes. Experimentation as a way of working can also liberate large swathes of the time and energy that goes into teams spinning their wheels in memos and meetings, making cases for why one idea, policy, proposal, or design will be the best one and certainly better than the others. Sod that for a laugh! Let’s try versions of them all and have the argument when we have some data from trials – and then ask more questions, try more things, learn more, and try again. In the end projects need to be set up, business cases developed, and investment decisions made, but so much time goes into mind-numbing speculation from the outset about why one approach should be favoured over the others.
I have come across a couple of ideas in recent months that have made me think more about this. In March, I had the privilege of listening to David Snowden talking through health issues and complexity with a group of senior public health leaders in New Zealand.
Snowden emphasised the need to change the systems and processes of doing work, experimenting, and interacting; organisations and individual behaviours will then change in response.
He also had a number of suggestions (or imperatives!) for dealing with concerns about risks and failure:
1) Apply safe-to-fail experimentation to intractable issues and demonstrate the changes that emerge from doing many experiments.
2) Present multiple parallel experiments as a portfolio. Don’t highlight or fund single experiments. Provide a $10k budget for the whole portfolio and judge and report success across the portfolio.
3) Seek a different process (e.g. a portfolio of competing experiments). “A different process not a different mental state.”
Snowden talked about “dynamic reallocation of budgets”. He said to think of it as a research or investigation method, which also produces interventions. I heard him describing a two-stage process. First, fund experiments that can be tried out in 6 weeks. From this, possibilities become visible; the possibilities can be shaped into projects, and funding can be allocated to those projects. The first stage is a process to make competing hypotheses visible.
In April I read Megan McArdle’s useful The Up Side of Down: Why Failing Well Is the Key to Success (2014). McArdle is strong on the absolute need for experimentation, for the reasons set out above, and she is also strong on the limits of experimentation. She quotes a leading experimenter on new products, Jim Manzi: “Experiments falsify or corroborate. They don’t prove.”
This leads McArdle to conclude: “[E]xperimentation is not enough, by itself. You also need iteration, to find out whether your experimental results are consistent or a fluke and, after you’ve made one small improvement, to find the next. We need a culture of experimentation as much as we need the experiments themselves.
“Experiments give us some of our greatest hope for building better companies, better policies, and better lives. But that does not mean that we can make everything great by doing an experiment and then rolling it out on a grand scale. We need to prepare to spend our lives experimenting: lots of tests, incremental rollouts that can be modified, or scrapped entirely, if it turns out that our experiment does not scale.”
Snowden is suggesting that we cannot get embedded experimentation through culture change alone, but rather through setting up processes and systems that enable it to happen. And, as McArdle suggests, this is an adaptive challenge. It requires changes in behaviours, habits, and beliefs. It needs experimentation (including with the processes and systems of experimentation themselves) across our teams and our organisations.
The picture is of a chocolate sculpture I saw in a Sydney hotel – Let your ideas loose!