
Models are great, life just won’t follow them

Models are great thinking tools. Once we get to know them, they allow us to rapidly identify certain sequences of events and help us make sense of the world.

When we're aware of some particular model and we see events fitting that model, we feel empowered. It makes us think we're starting to understand how the world works.

But there's a caveat here. We're actively looking for instances that fit the models we know, so we're quick to make associations. Yet we usually don't train ourselves to notice when events don't follow any model at all, or any model in particular. This means we can get the wrong impression of how effective a model really is.

This is a distorting bias. We start to believe that the world's behaviour fits into little black boxes with known inputs and outputs, so that we can always predict outcomes from initial conditions.
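To make that counting bias concrete, here's a toy simulation (my own sketch, with made-up numbers, not a real study): a bogus "model" claims that two heads in a row on a fair coin will be followed by a third. The model is pure chance, but an observer who remembers every confirmation and forgets most of the misses ends up convinced it works.

```python
import random

random.seed(1)
flips = [random.choice("HT") for _ in range(100_000)]

hits = misses = 0
for i in range(len(flips) - 2):
    if flips[i] == flips[i + 1] == "H":  # the "model" triggers: two heads in a row
        if flips[i + 2] == "H":          # its prediction: a third head follows
            hits += 1
        else:
            misses += 1

# Objectively, the "model" is pure chance:
print(f"actual accuracy:    {hits / (hits + misses):.2f}")  # ~0.50

# A biased observer recalls every confirmation but forgets,
# say, 80% of the misses (the 80% is an arbitrary assumption):
recalled_misses = misses * 0.2
print(f"perceived accuracy: {hits / (hits + recalled_misses):.2f}")  # ~0.83
```

Nothing about the coin changed; only the bookkeeping did. Counting the hits and dropping the misses is enough to make randomness look like insight.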

The only realistic rule we can establish is this: the only time life fits a model is when it does.

Models are great at identifying patterns in past events. They are not so great at predicting the future, even though they're sometimes used to trick us into thinking they are: say, to talk us into a "sure" investment.

There is no guarantee that the same sequence of events will play out the same way and produce the same outcomes.

And while this seems obvious, we're all prone to fall into this trap at some point, because we're caught off guard, inexperienced, tired, confused, desperate, or even overly enthusiastic about something. That's why gamblers keep gambling, investors keep sinking money into bad investments, and project managers keep letting projects slip indefinitely.

Models are very useful tools, but only when used in a conscious way.

An experienced investor or project manager will base their decisions on models, for sure. But the emphasis here is on "experienced": they will know better than to trust the model blindly.

They will build in some sort of cushion or safety margin, taking into account the very real possibility of deviations. Knowing that the model might fail completely, they may even prepare backup strategies.

They will track, measure, and act along the way, trying to catch errors and make corrections, or even apply other models that better fit the new conditions.

So the one thing to be mindful of with models is that, although we may spot them every time they fit, we mostly don't count all the times they don't. That biases us into treating them as highly probable when they may be no more probable than any other random thing in the world.