
5 May 2022

Rationality and biases in complexity

Written by
Marco Valente

We are surrounded by a narrative about how biased and irrational our minds are, while the notions of complexity and uncertainty have become apparent to everybody we work with, from the boardroom to the front lines. But if we are biased in our decisions and our thinking, especially in the face of the complexity and uncertainty we now see so clearly, what chance do we have of making sound decisions? My stance is not that pessimistic, and this blog will explore six concepts at the intersection of rationality, biases, and complexity.

My hope is that these six ideas can shed light on topics that need further exploration. At Cultivating Leadership, we work closely with leaders and teams who need to make meaning of complex situations in conditions of uncertainty. Does the notion that we are hopelessly biased even make sense in complexity? Let's jump right in.

First idea. Causality in complexity: what does it even mean to be biased?

In a complex world the notion of causality is radically different from that of a simpler world. If you lived in a laboratory, under controlled conditions, exploring events for which the link between cause and effect is known or knowable, you could clearly ascertain causes by observing effects and even predict effects from known causes. But complexity shows us a different world, one which is un-ordered, as Snowden explains cogently with the Cynefin framework. In the words of Karl Popper, it is a world of propensities: patterns that tend to happen given certain constraints and environmental contingencies, but whose occurrence we cannot reliably predict. Against this backdrop, what does it even mean to say that someone is biased in a decision or an assessment about a specific future outcome, when two equally rational agents could be wrong or right about a certain hypothesis due to luck and chance rather than the skill of their judgment? When can we reliably tell that someone is biased (versus rational) if they are making conjectures about an unknown future? Yes, there will be times when people can hold on to a certain "reference class" to gauge how probable an event is. But in some situations we don't even have those lifeboats, and we navigate completely unknown territory. Which brings us naturally to our next point.
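
To make the luck-versus-judgment point concrete, here is a minimal simulation sketch (my own toy example with made-up numbers, not something from the literature above): every forecaster holds the identical, well-calibrated belief, yet judged purely by outcomes their track records diverge widely.

```python
import random

def luck_vs_judgment(p=0.3, n_forecasters=1_000, n_events=20, seed=42):
    """Every forecaster holds the same, perfectly calibrated belief:
    each independent event has probability p. Judged only by how often
    'their' events actually happened, identical judgment still produces
    a wide spread of apparent track records; chance, not skill."""
    random.seed(seed)
    hit_rates = []
    for _ in range(n_forecasters):
        hits = sum(random.random() < p for _ in range(n_events))
        hit_rates.append(hits / n_events)
    return min(hit_rates), max(hit_rates)

# Same belief, very different "records": roughly (0.05, 0.6) or so
print(luck_vs_judgment())
```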

Second idea. The all-seeing-eye. Is there an all-knowing scientist who holds the answers? 

You have heard the bat-and-ball story a dozen times. Kahneman recounts the experiment: a bat and a ball cost $1.10 together, and the bat costs one dollar more than the ball; asked how much the ball costs, participants consistently give a quick but often wrong answer. Given enough time to do the math, people can easily see the mistake in their judgment. Contrast that with the following episode. In late February and early March 2020, some pundits, including researchers on biases such as Cass Sunstein and Gerd Gigerenzer, wrote editorials explaining to the world how irrational it was for people in the US to be fearful of the new coronavirus, given that at the time the number of known cases was abysmally small. These two examples are fundamentally different. In the bat-and-ball case you can imagine an all-knowing experimenter who holds the right answer, whereas in the coronavirus case the columnists made predictions about a future outcome that was unknown to everybody in the US, themselves included, and those predictions turned out to be hopelessly flawed (we would have been better off worrying more, not less, about the coronavirus). It is important to treat these events differently, because they are of a different nature: 1) there are events where an answer is knowable and known (often to the researchers who are assessing how biased the participants of a study are); 2) there are events whose outcome is unknown to all, researchers included, partly due to the conditions of high complexity explained above. When I read books about irrationality and cognitive biases, the authors often conflate the two types of events as if they could be treated the same way. With a few exceptions (Kay and King among them), the rationality debate runs the risk of treating all events as if there were always an "all-seeing-eye" that knew better, and that from its watchtower could judge the irrationality of us biased mortals. The case of rationality researchers who dismissed people as "irrational" because they "panicked" about the coronavirus in February 2020 is a sobering reminder. If there is no such thing as a known-in-advance answer available to some, it is more helpful to see us all as navigating the same causal fog. The next question is: at what cost?
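
For completeness, here is the bat-and-ball arithmetic spelled out as a tiny sketch, using the standard numbers of the puzzle:

```python
# Bat-and-ball puzzle: together they cost $1.10, and the bat costs
# exactly $1.00 more than the ball.
ball = 0.10                           # the quick, intuitive answer
bat = ball + 1.00
print(round(bat + ball, 2))           # 1.2, which contradicts the stated total of 1.10

# Solving the two constraints: ball + (ball + 1.00) = 1.10, hence ball = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(round(ball, 2), round(bat, 2))  # 0.05 1.05
```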

Third idea. Material consequences of being wrong are not the same (Taleb’s Fat Tony problem). 

Imagine we are on a somewhat level playing field, in the sense that nobody has the answer. We are all in the same boat, navigating the same causal fog. Uncertainty comes as a companion to our less-than-perfect knowledge of the world, and risk is closely connected to uncertainty. There are domains in which the uncertainty that comes with our imperfect knowledge of the world is immaterial. I predicted a green light at the next junction, but it's red now. Does it matter? Often it doesn't: I will arrive home sixty seconds later. In other situations, our inaccurate predictions and our biased thinking can cost a lot. Taleb talks about the asymmetries we are exposed to, both positive ones (if we start many companies, do we increase our chances of hitting the jackpot?) and negative ones (if we get rewarded 10K dollars every time we play Russian roulette and survive, is it "rational" to play? And how often?). According to Taleb's pragmatic character, the investor Fat Tony, it does not matter whether we have a perfect picture of the world, because we can't have one anyhow. What matters more is to take the safest route: to expose ourselves to positive asymmetries and steer away from risky ones in the face of tail risks. This is one of the reasons why people in complexity often talk about learning about a complex, unpredictable system by running multiple experiments that are "safe to fail": we can only learn about the system by poking it first, but we need to make sure we have created some boundaries within which it is actually safe to experiment. Taleb and his character were right: everybody is biased, but what ultimately matters even more are the consequences. And it was wise of Robert Louis Stevenson, too, to remind us that "sooner or later, everybody sits down to a banquet of consequences".
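
To make the negative asymmetry concrete, here is a rough sketch with numbers of my own (10K dollars per surviving round, a one-in-six chance of ruin), not a calculation taken from Taleb: the per-round average looks attractive, yet repeated exposure to the tail is what decides the outcome.

```python
import random

def russian_roulette_winnings(rounds, reward=10_000, p_ruin=1/6, seed=0):
    """Naive expected value per round is positive: (5/6) * 10,000 is about 8,333.
    But ruin is absorbing; keep playing and the tail catches up with you."""
    random.seed(seed)
    winnings = 0
    for _ in range(rounds):
        if random.random() < p_ruin:
            return 0, False          # ruin: no recovery, the game is over
        winnings += reward
    return winnings, True

print((5 / 6) ** 20)                 # about 0.026: chance of surviving 20 rounds
print(russian_roulette_winnings(20))
```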

Fourth idea. Rationality? It depends on the level of analysis.  

Something that is deemed irrational at one level of analysis can be rational, and even worth doing, at another level. When do these considerations apply in the biases and rationality debate? Imagine I want to start a new tech business in Malmö, Sweden. Given the complexity and uncertainty we talked about, my chances are not clearly known. However, as Kahneman showed us over the years, we can use reference class forecasting to get a sense of how competitive the environment around me is. Say that from city statistics I learn that 80 out of 100 similar businesses in Malmö fail within the first three years. (I made up the numbers, but entrepreneurs know how difficult it is.) Then why would a reasonable entrepreneur make such an irrational move as starting a new company? For one thing, the positive payoffs could be asymmetrical. But given how small my chances look, a researcher on rationality could easily judge my choice as foolish. Now consider other levels of analysis: is it "rational" for a city to invest in entrepreneurship? At that bigger level, it benefits a city, a region, and an industrial ecosystem to support startups, for instance with business incubators, because many parallel attempts are all trying to succeed. Even if each individual can be seen as irrational for not weighing their chances accurately, the collective has a much higher chance of innovating and bettering society. In conditions that require innovation, lateral thinking, and a lot of diversity of thought, the notion of reducing "noise" and zeroing in on reducing biases for everybody may not be that helpful, and may even be counterproductive. Again, if the problem has such a thing as a right answer, we do not need much divergence of thinking. But if the problem is truly complex and its solution far from obvious, we need to widen the scope of options, and it pays off at times to withhold judgment about seemingly irrational takes on it (as long as we don't risk people's safety). An investor could be irrational, an entrepreneur could be foolish, and a team may be creating a wacky prototype that does not hold promise; but many investors experimenting with multiple portfolios, and many teams innovating in novel product areas, scan the system with a wider array of attempts. Even if some individual attempts can be dismissed as "irrational", at a higher level it makes a lot of sense to let these experiments run (as long as we can learn from them).
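
Here is a rough sketch of the level-of-analysis arithmetic, using the made-up 80% failure rate above and assuming, for simplicity, that the attempts are independent:

```python
def chance_of_at_least_one_success(p_fail=0.8, attempts=1):
    """Any single founder faces long odds, but an ecosystem backing many
    independent attempts is very likely to see at least one succeed."""
    return 1 - p_fail ** attempts

print(round(chance_of_at_least_one_success(attempts=1), 2))    # 0.2: one founder's odds
print(round(chance_of_at_least_one_success(attempts=20), 2))   # 0.99: the ecosystem's odds
```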

Fifth idea. What are the boundary conditions of System 1 and System 2? 

If we had better information or more time to deliberate, our biases should go away. That, at least, is a tenet of the System 1 and System 2 model. Kahneman taught us a great many lessons about our biased minds. For simple problems it may well be that people can easily spot the error in their reasoning and correct their view accordingly. For more complex matters, especially for strongly held opinions in which people invest a lot of their identity, it seems unclear to me that people given a bit more time and analysis will understand how erroneous their beliefs are. If that were the case, it would be hard to explain how people who create elaborate conspiracy theories spend hours connecting dots out of thin air, drawing unicorns out of stars that are not even remotely aligned. There are at least two reasons why the notion of System 2 does not hold up so well in situations of complexity. For one thing, often there is no such thing as a definite, final answer, as complex systems lend themselves to multiple and at times equally coherent interpretations, unless we can subject those interpretations to some sort of severe test. And in complexity, as Max Boisot said and as Snowden and Klein explain clearly, sensemaking is not merely about connecting the dots or figuring out the riddle: there are so many dots that one can conjure almost any idea, no matter how implausible. The second reason is that we may invest a lot of our identity in certain "biases", and we will hold on to those beliefs far more strongly than to a simple math mistake we admit we made. For instance, research on our "tribal" and "political" minds seems to suggest that people with a high level of education, and with a lot of free time to investigate facts accurately, do not necessarily come to less biased conclusions about the world; there is robust evidence that shows just the contrary. We may get trapped by simple stories in complexity through motivated reasoning or through wanting to protect our sense of identity, in spite of all the time and counter-evidence available to us.

Under what conditions can we easily, and without substantial cost, make our biases go away? Jennifer Garvey Berger's work on Mindtraps reminds us that we may be inclined to create simple stories in complexity to protect our sense of self.

Sixth idea. It’s not only what the irrational belief is, but what the irrational belief does. 

The heuristics-and-biases school of thought, brought forward by prominent researchers such as Kahneman, Ariely, and others, seems rooted in a worldview in which epistemic rationality contributes to our wellbeing. This "traditionalist view", as philosopher Lisa Bortolotti calls it, holds that we cannot be happy and well-functioning if we hold on to incorrect beliefs about the world. We said that not all biases are born alike in terms of the material consequences of holding an incorrect or irrational belief. Furthermore, some biases can shape action in ways that can even be beneficial. Take, for instance, optimistic biases about our health, our romantic relationships, and our chances of succeeding at something. There is empirical evidence that such irrational beliefs not only carry some psychological and epistemic benefits, but can also contribute to our motivation and, under certain conditions, fuel a self-fulfilling prophecy. While we can still hold the view that these beliefs are clearly false or inaccurate (and in some conditions we can judge them as such), Bortolotti convincingly argues that there are boundary conditions in which optimistically biased beliefs shape our self-esteem, our agency, and our actions in a way that creates future behavior. So much so that we can even close the gap between our incorrect assessment and that future reality. For instance, a person may be overconfident about their prospects of finding a job for which they are underqualified. Research suggests that in some conditions this overconfidence can shape their motivation to such an extent that their pursuit of the job becomes resistant to setbacks, even to the point that the goal becomes objectively more likely over time. Audaces fortuna iuvat: fortune favors the bold. Even when our initial "audacity" would be deemed objectively irrational by some.
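
As a toy illustration of that self-fulfilling dynamic (my own sketch, not a model from Bortolotti's work): if optimism does nothing except keep a candidate applying after setbacks, persistence alone makes eventual success objectively more likely.

```python
def chance_of_eventual_success(p_per_attempt=0.05, attempts=1):
    """Each application has a small, fixed chance of succeeding. The only
    thing optimism changes in this toy model is how many setbacks the
    candidate tolerates before giving up."""
    return 1 - (1 - p_per_attempt) ** attempts

print(round(chance_of_eventual_success(attempts=3), 2))    # 0.14: gives up quickly
print(round(chance_of_eventual_success(attempts=40), 2))   # 0.87: optimism sustains the search
```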

This blog has explored six simple notions that problematize the ideas of rationality, bias, and irrationality in situations of complexity and uncertainty. I hope there is some added value in these questions, and that they can spark a much-needed conversation.


References.
Kay and King have written a good book, Radical Uncertainty, which is one of the few that makes the distinction between an all-seeing-eye and uncertainty that applies to us all. Snowden's Cynefin framework, Karl Popper's A World of Propensities, and Alicia Juarrero's Dynamics in Action are great for exploring the notions of complexity and causality, and the difference between a predictable and an unpredictable world.

Nassim Taleb's Incerto is a great series of books that explores the material consequences of our uncertainty. His character Fat Tony is blunt but very intelligent and insightful.

Kahneman's Thinking, Fast and Slow is nuanced and considers some of the very critiques I made here; see the notion of System 1 and System 2 therein. Snowden and Thagard speak about the notion of coherence in complexity, in Snowden's blog posts on the Cynefin company website.

On optimism and other irrational beliefs, Lisa Bortolotti holds nuanced and very rich views and does not claim that unwarranted optimism is always a good thing. I recommend her great little book, The Epistemic Innocence of Irrational Beliefs, especially chapter 6: https://global.oup.com/academic/product/the-epistemic-innocence-of-irrational-beliefs-9780198863984?cc=se&lang=en&

A longer form of this post originally appeared on LinkedIn Pulse https://www.linkedin.com/pulse/rationality-biases-complexity-uncertainty-marco-valente/ 
