This is another article from Farnam Street, and I confess that up until a few days ago, I'd never heard of them. It's run by a guy named Shane Parrish, who's based here in Ottawa. There's some really fascinating stuff on there, with decent curation and a lot of links. This article highlights the following:
Not all of our grand schemes turn out like we planned. In fact, sometimes things go horribly awry. In this article, we tackle unintended consequences and how to minimize them in our own decision making.
The Law of Unintended Consequences: Shakespeare, Cobra Breeding, and a Tower in Pisa | Farnam Street
You might think the article is going to be about train-wreck ideas or the butterfly effect causing tsunamis. Not really. In fact, I would say it is more about linear thinking from good intentions to good outcomes without taking side effects into account: some unknown, some unforeseeable, some just missed because people stopped thinking too early. The article has a great quote from a book by William A. Sherden:
Sometimes unintended consequences are catastrophic, sometimes beneficial. Occasionally their impacts are imperceptible, at other times colossal. Large events frequently have a number of unintended consequences, but even small events can trigger them. There are numerous instances of purposeful deeds completely backfiring, causing the exact opposite of what was intended.
The conclusion is simple: systems thinking, or second-order thinking, is needed. The article doesn't pay much attention, though, to the fact that the culprit often lies in defining the system too narrowly, when the small system is really part of a larger system, and it is the larger system that produces the other effects (like the examples of releasing a predator into an area to control one local population, not realizing that the predator will spread into the larger system). What I do like is the idea that sometimes the failure runs the other way: overestimating the size of the system, assuming there are too many variables, and thus not trying at all to figure out the ancillary effects.
Yet, if we know unintended consequences exist (or in hindsight realize we should have seen them coming), the article explains some of the most common reasons we miss them:
Sociologist Robert K. Merton has identified five potential causes of consequences we failed to see:
Our ignorance of the precise manner in which systems work.
Analytical errors or a failure to use Bayesian thinking (not updating our beliefs in light of new information; there's a quick numerical sketch after this list).
Focusing on short-term gain while forgetting long-term consequences.
The requirement for or prohibition of certain actions, despite the potential long-term results.
The creation of self-defeating prophecies (for example, due to worry about inflation, a central bank announces that it will take drastic action, thereby accidentally causing crippling deflation amidst the panic).
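On that Bayesian point, here's a minimal sketch of what "updating our beliefs in light of new information" looks like in practice. The scenario and every number in it are invented purely for illustration, not taken from the article:

```python
# A minimal sketch of Bayesian updating: revise a belief as new evidence arrives.
# The scenario and numbers are made up purely for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start by believing there's a 30% chance a new policy will backfire.
belief = 0.30

# Each observation: (P(seeing this if it backfires), P(seeing this if it doesn't)).
observations = [(0.8, 0.4), (0.7, 0.5), (0.9, 0.2)]

for seen_if_true, seen_if_false in observations:
    belief = bayes_update(belief, seen_if_true, seen_if_false)
    print(f"Updated belief that the policy backfires: {belief:.2f}")
```

The point of the exercise: each new piece of evidence moves the belief, and refusing to run the update (sticking with the original 30%) is exactly the failure Merton is describing.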
However, the article goes even further, adding over-reliance on models and predictions (mistaking the map for the territory), survivorship bias, the compounding effect of consequences, denial, failure to account for base rates, curiosity, and the tendency to want to do something.
Of course, it leads back to the article I shared earlier (Articles I Like: Mental Models – The Best Way to Make Intelligent Decisions (113 Models Explained)) and the use of other mental models to help prevent a failure to consider other effects.
Cool stuff, love the site.