How First Principles Thinking Fails

One of the threads I’ve been pulling on in this blog is the question “Where should the line lie between first principles thinking and pattern matching? When should you use one or the other?”

So, some backstory here: this was another one of those 3k–4k word essays that I gave up on and then extracted a part out of. I may turn the rest into next week’s post.

A reader emailed me to say that I should have turned the content of this post into a list. So here it is.

When reasoning from first principles, you may fail if:

  1. You have flawed assumptions.
  2. You make a mistake in one of your inference steps. (An inference step is a step in the chain of reasoning in your argument).
  3. You start from the wrong set of principles/axioms/base facts.
  4. You reason upwards from the correct base, but end up at a ‘useless’ level of abstraction.

The first two are reasonably easy to spot — they’re basically the sort of analytical criticism that most of us are taught in school. It’s the third and fourth that I’m more concerned about, because you can start with a set of completely true axioms, build a completely coherent, tightly argued case, and still get everything wrong anyway.

I realise this is also the source of my ‘test against reality’ principle, and of the general distrust I have of management-consultant-style framework thinking. Whenever someone makes a detailed, coherent argument, I usually say: “makes sense; let’s see” — and this is why.


Are you familiar with David Chapman? This sounds very similar to some of his meta-rationality ideas.

One of his arguments is that rationality and probabilism don’t work under conditions with unknown unknowns — which, in reality, is all the time. As I understand it, the way to operate is to break situations down into smaller situations where rationality is more likely to work, and then, from that experience, piece everything back together.

He uses a framework of perception and rationality to do this. Most people think that you perceive things and those perceptions become inputs into a rational system: perception is the input and rationality is the CPU. He says that perception and rationality are each both inputs and processing units. What you perceive affects your understanding, and what you understand affects how you see things. For example, if you understand how colors are different frequencies of light waves, your answer to “why is the sky blue?” will be different from the answer of someone who doesn’t. Reasonableness is the tool that bridges perception and rationality.

So, to analogize this: perception is pattern matching and rationality is first principles thinking. Both are relevant in every situation, and each influences how you use the other. When you pattern match, you think of first principles that apply to the situation; as you use first principles, you notice other situations that those principles apply to. Pattern matching will also help you identify situations where first principles will fail, and help you understand those situations.


I’ve read some of David Chapman’s writing, but sadly I’m not as familiar with his work as I’d like to be.

I quite like everything you’re quoting here. This ties into everything I’m trying to get at — and also what Boyd was getting at when he wrote about ‘creation and destruction’, and what Kahneman and Klein were getting at when they wrote about expert intuition vs more analytical approaches (that attempt to resist cognitive biases).

As an aside, a reader sent me the following link, in response to the post:

I think it’s great.


Your concept of Levels of Abstraction fits in here also. Tradition is one level and scientific understanding is another. Just following tradition lets you eat the cassava without harm, but it’s a very lengthy and intensive process. Using a scientific level of abstraction, maybe you can shorten that process — but would you have thought the food could be edible at all?

The traditional, trial-and-error approach is better at discovery; the scientific method is better at optimization. The combination is far more powerful than either alone.


Love the follow-up essay. And please think about exploring David Chapman a little more — his ideas fit really well with this series. Here are a couple of passages from a chapter in his book on meta-rationality.

You can “do statistics wrong” at three levels:

  1. Making errors in calculations within a formal system
  2. Misunderstanding what could be concluded within the system if your small-world idealization held
  3. Not realizing you have made a small-world idealization, and taking it as Truth.

There’s no substitute for obstinate curiosity, for actually figuring out what is going on; and no fixed method for that. Science can’t be reduced to any fixed method, nor evaluated by any fixed criterion. It uses methods and criteria; it is not defined or limited by them.


Will do! I’ve added this to my to-read list; can’t wait to get started once I take my two-week break a week from now.