Highlights from "The Great Mental Models"
By Shane Parrish, Rhiannon Beaubien
In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality. We think better. And thinking better is about finding simple processes that help us work through problems from multiple dimensions and perspectives, allowing us to better choose solutions that fit what matters to us. The skill for finding the right solutions for the right problems is one form of wisdom.
Contrary to what we’re led to believe, thinking better isn’t about being a genius. It is about the processes we use to uncover reality and the choices we make once we do.
A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks.
When understanding is separated from reality, we lose our powers. Understanding must constantly be tested against reality and updated accordingly.
The first flaw is perspective. We have a hard time seeing any system that we are in.
The second flaw is ego. First, we’re so afraid of what others will say about us that we fail to put our ideas out there and subject them to criticism. This way we can always be right. Second, if we do put our ideas out there and they are criticized, our ego steps in to protect us. We become invested in defending our ideas instead of upgrading them.
The third flaw is distance. The further we are from the results of our decisions, the easier it is to keep our current views rather than update them.
We also tend to undervalue the elementary ideas and overvalue the complicated ones.
There is an old adage that encapsulates this: “To the man with only a hammer, everything starts looking like a nail.” Not every problem is a nail. The world is full of complications and interconnections that can only be explained through an understanding of multiple models.
It’s not just knowing the mental models that is important. First you must learn them, but then you must use them. Each decision presents an opportunity to comb through your repertoire and try one out, so you can also learn how to use them.
Keep a journal. Write your experiences down. When you identify a model at work in the world, write that down too. Then you can explore the applications you’ve observed and start taking more control of the models you use every day.
There are three key practices needed in order to build and maintain a circle of competence: curiosity and a desire to learn, monitoring, and feedback.
“Learn from the mistakes of others. You can’t live long enough to make them all yourself.”
We don’t keep the right records, because we don’t really want to know what we’re good and bad at. Ego is a powerful enemy when it comes to better understanding reality.
We usually have too many biases to solely rely on our own observations. It takes courage to solicit external feedback, so if defensiveness starts to manifest, focus on the result you hope to achieve.
The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions.
When making uncertain decisions, it’s nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation?
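The update rule behind these two passages is Bayes’ theorem, which can be sketched numerically. The scenario and numbers below (a rare condition and an imperfect test) are illustrative assumptions, not from the book:

```python
def bayes_update(prior, true_positive_rate, false_positive_rate):
    """Posterior probability of a hypothesis after observing positive evidence.

    prior: probability of the hypothesis before the new evidence.
    true_positive_rate: P(evidence | hypothesis true).
    false_positive_rate: P(evidence | hypothesis false).
    """
    # Total probability of seeing the evidence at all.
    evidence = true_positive_rate * prior + false_positive_rate * (1 - prior)
    # Bayes' theorem: P(hypothesis | evidence).
    return true_positive_rate * prior / evidence

# Illustrative numbers: a 1% base rate (the relevant prior), a test that
# detects the condition 90% of the time, with a 5% false-positive rate.
posterior = bayes_update(prior=0.01, true_positive_rate=0.90,
                         false_positive_rate=0.05)
print(round(posterior, 3))  # ~0.154
```

The point the authors make falls out of the arithmetic: without the prior, a positive test looks like 90% certainty; taking the 1% base rate into account, the updated belief is only about 15%.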
The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average. The more extreme events that are possible, the higher the probability that one of them will occur. Crazy things are definitely going to happen, and we have no way of identifying when.
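The claim that “the more extreme events that are possible, the higher the probability that one of them will occur” can be made concrete with a one-line probability calculation. The numbers are illustrative assumptions, not from the book:

```python
def prob_any_extreme(p_each, n_events):
    """Probability that at least one of n independent rare events occurs.

    Complement rule: 1 minus the chance that every event fails to happen.
    """
    return 1 - (1 - p_each) ** n_events

# Each tail event: a 1-in-1000 chance per year. With 100 such independent
# tail risks in play, the odds that at least one hits in a given year:
print(round(prob_any_extreme(0.001, 100), 3))  # ~0.095
```

Any single 1-in-1000 event is safely ignorable, yet with a hundred of them in play there is nearly a 10% chance per year that *some* extreme event occurs, without any way of saying which one.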