How Experts Sensemake - Commoncog

This is Part 2 of a short series on sensemaking. You may read Part 1 here.


This is a companion discussion topic for the original entry at https://commoncog.com/how-experts-sensemake
3 Likes

Lots to read, re-read, and then mull over further here!

One part that stuck out to me was the section criticising confirmation bias

Unfortunately the recommendation by the confirmation bias folks backfires completely: in just about all natural settings that the authors examined, folks who use the ‘keep an open mind’ or ‘delay forming a view too early’-type strategies underperformed the experts. In fact, Klein et al. lay this out as one of the falsifiable assertions of the Data-Frame theory: if you can find a single expert in a naturalistic setting using the ‘keep an open mind’ approach, then the Data-Frame theory is falsified and needs updating. (And if you think you’ve found someone who does this, you should check: is this really a top percentile performer? Or is there someone — or many someones — who have better performance?)

It reminds me of one time I was working on a project with the consultancy AT Kearney. I was quite confused when the consultant told me they were going to use a hypothesis-led investigation process, which really meant starting with the conclusion and then finding the evidence to support it.

I was quite critical of this, as it seemed to me to be confirmation bias in action. However, the idea of having the partner look at a case and form a good sense of what the main issues are, then sending the engagement team to validate whether that hypothesis is correct, seemed to work well in practice.

3 Likes

Why did the confirmation bias folks get things so wrong? The answer — which is a bit of an open secret — is that the entire field of cognitive biases and heuristics is built around a flawed methodology. The field conducts experiments by administering toy problems to unskilled test subjects (undergrads, mostly) in artificial lab environments. All the cognitive processes demonstrated by the participants in these studies are then labelled as ‘reasoning errors’ or ‘biases’ whenever they result in the wrong answers. And make no mistake: these cognitive processes are real; the vast majority of them are reliably replicated in lab study after lab study, over the course of decades. But if you study practitioners solving real problems in naturalistic environments — problems that they have actual expertise in — you will find that all the same cognitive processes that result in ‘reasoning errors’ on lab tests are suddenly deployed in ways that produce excellent performance. This has been the finding of the Naturalistic Decision Making (NDM) branch of applied psychology — which Klein helped start — in study after study, ‘bias’ after ‘bias’, for the past 30 years.

I can write a longer essay about this ‘open secret’. Perhaps I will. But I want to leave you with the following observation: whenever you see a cognitive bias, you should understand that there are two ways to get better performance. You may do error reduction, or you may fix it by gaining expertise.

This part (all bold emphasis mine) echoes a practical takeaway suggested by @Jared_Peterson in a piece on the related schools of thought (again, bold emphasis mine) —

Conclusion

Should these schools of thought reconcile into one unified discipline? I’m not sure. Each has its strengths, and I worry about what might be lost in the process.

Maybe instead of seeing them as competing theories striving for comprehensiveness, we should think of them as tools, each illuminating different aspects of decision-making. HB and FF uncover heuristics; HB explains when they go wrong, and FF explains when they go right, while NDM shows what happens when heuristics are chained together by experts into larger macrocognitive processes like Recognition-Primed Decision Making (RPD) in the real world. Meanwhile, CDM explains what to do in certain types of well-defined problems.
[…]

—which has been an excellent primer for me in thinking about what I can learn and use from these four schools of thought. One reason is that the chronological accounts in that piece are well organized and can serve as a mnemonically sticky resource for the landing phase when reading into related topics (or, after you have landed as a non-researcher, for strengthening your foothold without expanding much further).

5 Likes

So many thoughts on this post.

First of all, thank you for writing this. Great overview of something that initially seems like common sense, but perhaps can help people make sense of their own sensemaking (recursive!).

Both experts and novices use all four cycles in the Data-Frame model: they construct frames, question frames in response to sufficiently incongruent data, elaborate frames and perform reframing in the exact same ways.

It sure seems like the Data-Frame model is a core part of human cognition!

I mentioned this in another thread, but this seems consistent with the reference frame hypothesis that Jeff Hawkins shares in his book A Thousand Brains. He actually gets into the neuroscience of which cortical cells build a reference frame, and how other cells reference those frames. This machinery probably originally evolved to navigate the physical environment, but the great insight I took from the book was how the same mechanism explains how our brains navigate thoughts, relative to the frames we have.

As Ben Thompson of Stratechery described the theory:

The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.

One notable takeaway is that this constant prediction is how we can see or notice things that aren’t there (this was something Gary Klein mentioned in his book, Sources of Power). If our thinking or analysis were solely a function of the sensory data we receive, then we couldn’t see what’s missing. But we do this all the time - because our brains are constantly making predictions of what we should see next, and thus we notice when those predictions are wrong (again, useful for navigating a physical environment so we notice when we are off track).
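For fun, the predict-compare-update loop described above can be caricatured in a few lines of code. To be clear, this is purely my own toy sketch, not Hawkins’s actual model: `run_predictor`, the smoothing rule, and the tolerance are all invented for illustration. The point is just the shape of the mechanism: attention is triggered only when a prediction fails, and a surprise forces a model update.

```python
def run_predictor(observations, alpha=0.5, tolerance=2.0):
    """Track a running prediction over a stream of numbers; return the
    indices where the input 'surprised' the model (error > tolerance)."""
    prediction = observations[0]
    anomalies = []
    for i, obs in enumerate(observations[1:], start=1):
        if abs(obs - prediction) > tolerance:
            anomalies.append(i)   # attend to the mis-prediction
            prediction = obs      # 'reframe': adopt the surprising data
        else:
            # routine elaboration: nudge the model toward the new data
            prediction += alpha * (obs - prediction)
    return anomalies

# A steady signal that shifts level at index 3: only the shift itself is
# flagged; once the model updates, the new level is no longer surprising.
surprises = run_predictor([10, 10.2, 9.9, 20.0, 20.1, 19.8])
```

Note that after the one flagged surprise the subsequent observations pass silently, which is the whole trick: mis-prediction is rare precisely because the model keeps updating.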

To be clear, I’m not a neuroscientist, and haven’t read any of the papers or research Hawkins references, but it was a compelling model for me as it matched my own experience.

4 Likes

Beyond that, this post mostly confirms my existing frames 🙂.

An unintended consequence of my generalist career trajectory is that I have more frames to bring to sensemaking than most people - I can use a hard-science frame of physics, an engineering frame, a business frame, a sociological frame, a psychological frame, etc. This has given me a superpower of “interdisciplinary translation” - I can explain hard-to-understand things to people in their own language and frame of reference so they can more easily make sense of them.

It also means I can get up to speed in new situations surprisingly quickly, because I can relate it to some other frame to help me focus on the relevant aspects of the situation - to use the language of this post, I quickly find anchor concepts that ground my understanding.

And because I have a lot of frames, I’m not that attached to any individual frame - if new data comes in, I don’t get defensive, I just look for a different frame.

I am also more sensitive to when my frame doesn’t match the data. As a kid growing up in a place where I didn’t feel belonging, not fitting in was a survival issue, so I learned to be hypervigilant when something didn’t match what I expected, and to go build new frames so I could learn to fit in more smoothly. When I see something that doesn’t fit my frames, it bugs me until I figure out how to make sense of it - this is part of what drove my pivot from engineering to business (after watching a company I worked at go bankrupt despite great technology), and later into personal development (after noticing how being blind to the choices I had available to me led me to burnout).

what experts do is to commit early and eagerly to a frame but remain vigilant about possible abnormalities.

This is great advice. Our conscious brains (System 2) can’t process all the incoming data so we have to use a frame to filter and process the deluge of data, but staying alert to inconsistencies allows us to quickly pivot when something doesn’t line up (Bessis’s System 3). Klein had a great example in Sources of Power from a firefighter who had a feeling something wasn’t right, and pulled his team out just before the floor collapsed. When Klein asked him about it later, he said he initially thought the fire was in the back room, but then he realized that the fire wasn’t as hot or loud as he expected, which meant that the main fire was someplace else, and that led him to pull out because he didn’t know where it was (turned out to be underneath him).

A useful way to apply this idea that I often use for myself and coaching clients is to make sense of people doing things I don’t understand. If I assume that they are not incompetent or evil, then they must have a frame within which their actions not only make sense but feel inevitable. I take it upon myself to imagine what assumptions or beliefs they must have, or what data they must be looking at, that would make their actions make sense. Like any skill, this gets easier with practice, so I suggest constantly coming up with frames that explain people’s behaviors.

While I think the bias research is interesting, the idea that we should hew to some logical perfection is unrealistic (and doesn’t conform to how our brains actually work). Biases are mentally efficient shortcuts that mostly work for navigating the world (System 1), so expertise lies in building stronger safeguards for when things are no longer corresponding to reality (Bessis’s System 3) (and, say, coding harnesses around agentic AI).

One last cue. Our bodies often know when something’s off before our conscious brains. Pay attention to your physical cues of tension - that’s often a sign that our frame predictions aren’t lining up with what’s happening, so our bodies are activating into survival mode because unfamiliarity means danger. When we use our conscious brain to override our physical bodies, we might be missing the warning signals of a need to reframe.

4 Likes

So much to say! Forgive me for responding to scattered remarks.

I don’t think the unease is totally unwarranted. You still have to be open to anomalies from the hypothesis, as anomalies are themselves a type of falsification. The problem with this is how in the hell do you figure out what counts as an anomaly? If your existing causal mental model is perfect, then an anomaly is obvious - but the whole point of sensemaking is you don’t yet have a perfect mental model.

I once was in a book club where a couple of people were making fun of an experiment that found support for a theory we were not sympathetic to, on the grounds that the experiment couldn’t in principle falsify the main alternative theory which we preferred. But then I pointed out that the experiment was done before our preferred alternative theory was even conceptualized, and so the researchers weren’t even aware of what kinds of falsifications they should be looking for when designing the experiment.

I think about this all the time because it terrifies me. Data depends on a (flawed) frame, and the (flawed) frame depends on data - it’s circular reasoning, and this circularity is probably why deep cognitive expertise is so hard to develop in the first place, and can’t be learned through linear, step-by-step skill training. And worse, what anomalies you look for and notice often depends on other frames you are considering. So how do you develop a new and novel frame in the first place?

Asking someone to do this always feels unrealistic to me, like asking them to do Revolutionary Science (as opposed to Normal Science). I suspect this circularity is precisely what makes expertise so hard to develop: acquiring it requires Revolutionary-style thinking.

Thanks for sharing my piece and Cedric’s which I hadn’t read before. I definitely feel like I can’t even understand what’s at stake in a field until I get a feel for the opposing personalities and narrative.

Gary Klein and Robert Hoffman call this seeing the invisible, and IIRC make the claim that it is the principal thing which distinguishes experts from novices. It’s been a while since I read the paper, and I’d have to sit and think about whether I agree.

The Gift of Fear is a good book by someone who grew up in an unsafe home, and now coaches people on how to manage dangerous situations. He’s a practitioner rather than a researcher, and so he’s not always as careful with his words as someone like Gary. But I found it pretty consistent with NDM overall. [Content warning: the book does deal with violence.]

Gary talks about this all the time. Important mindset for both interviewing and training.

This is part of my interest in the neuroscientist Lisa Feldman Barrett who describes emotions as “multi-modal summaries” of our current context. Emotions are not irrational convulsions, but instead are themselves frames. Embodied and felt rather than logically understood, but still frames.


Finally, a remark on the piece itself. I can confirm it’s a good overview of the theory, and it goes deeper into the specifics than any other account I’ve read. I’ve been mulling over how to write a short primer on the theory for our company website, and now I’m intimidated! I’ve been one-upped before I’ve even begun writing.

5 Likes

This is a great essay! I was thinking that in investment research, I did the same thing but at the beginning I’d have three parallel frames (for base case, bull case, and bear case), and then gradually during the elaboration phase, I’d mentally put the data into those buckets. Depending on which bucket had the most stuff, I would get a rough idea of how much I liked the company.

Edit: Hmm. Now that I think about it more, one of the big differences in analyst quality was that the good ones were able to do this. The bad ones could only do one frame at a time. The average ones could do bull case + base case, or base case + bear case.
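For what it’s worth, the parallel-frames bucketing described above can be sketched in a few lines. This is my own construction, not the commenter’s actual process: `tally_frames` and the sample evidence items are invented for illustration. The idea is simply that each piece of data gets filed under the frame it supports, and the relative weight of the buckets gives a rough read on the company.

```python
from collections import Counter

def tally_frames(evidence):
    """evidence: list of (item, frame) pairs, with frame one of
    'bull', 'base', or 'bear'. Returns the bucket counts and the
    frame that has accumulated the most support so far."""
    counts = Counter(frame for _, frame in evidence)
    leader, _ = counts.most_common(1)[0]
    return counts, leader

# Hypothetical evidence gathered during the elaboration phase:
evidence = [
    ("revenue growth accelerating", "bull"),
    ("margins flat year over year", "base"),
    ("new competitor entering",     "bear"),
    ("management raising guidance", "bull"),
]
counts, leading_frame = tally_frames(evidence)
```

The point of the sketch is that all three frames stay live simultaneously; no single frame gets to filter the data on its own, which is exactly what the weaker analysts in the comment above couldn’t manage.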

4 Likes

I too love this essay, and it reinforced the frame of how I think about developing expertise.

One of the things that helped me immensely in my career was that I built up expertise on 2 tracks:

  • School of hard knocks by working in technical support: it’s useful to be in a world where you get immediate feedback on whether what you tried worked, and people are disinclined to let you off the hook until they genuinely see the results they need
  • Framework collection: to better understand how the rest of the IT function worked, I started going through the industry frameworks for each area and got certified in many of them. While those frameworks all have their problems, it at least gave me the language to talk with practitioners in each space and enough of a working frame to sensemake in those spaces. This included frameworks for how IT as a whole worked and how it fit in with the broader enterprise. Over time, I also supplemented this with getting some kind of working orientation to how every other business function worked, especially back office functions like finance and HR that virtually every organization has.

These provided an excellent point/counterpoint as I have progressed throughout my career, and the fractal nature of my knowledge also makes it easier to have an alternate frame to fall back to when anomalies start showing up. I can either go up a level to look at broader connections, or dig down a level to better understand the specifics of a situation.

My suspicion is that this is a decent template for expertise development, but I’ll leave that to those better positioned to judge that.

1 Like

It’s actually really interesting to think about where these ‘frame and confirm’ tendencies come from. I know this might be a ‘just-so’ story (at least I haven’t done the work to verify how plausible this is!) but I’m willing to bet there’s an interesting evolutionary story behind why our cognition developed this way.

Yes, I’ve noticed this! And with @Roger too! At the risk of sounding like that kind of person, I think this is a decent argument for the benefits of a broad, liberal arts-style education. Or at least a decent argument for following your curiosity into other fields.

So that’s why you’re interested in LFB, @Jared_Peterson ! I’ve often wondered … (even after reading your Substack essay about her). OK this makes more sense now.

I am constantly reminded that @Jared_Peterson is kinder than I am. I’ve found very little in the heuristics and cognitive biases (HCB) tradition that has been useful (but maybe I’m in the wrong domains? Both Jared and I agree that HCB are more useful in, say, trading, where there’s often a mathematically optimal move. So I should perhaps temper my tone about HCB …)

But on the other hand, I’ve found the NDM work to be chock full of ideas that — if and when I try them — always seem to lead to some great set of outcomes. So I’ll admit to being biased here.

Ok this explains a lot! I don’t know if I’ve said this to you before, but I have a) noticed that you have a facility with frames that I do not — I tend to be quite the curmudgeon and tend to be suspicious of ideas that I do not think might be / cannot see are useful. You, on the other hand, have had decades of experience just picking up frames. You once told me that you seek out writing at the edge of what is politically acceptable, just to collect ideas. And b) I have long admired you for it, because I don’t think I would be able to do that. And I have noticed that you are wise in a way that I am not.

This theory, and your explanation of your history, explains where this wisdom comes from.

3 Likes

I once did research in physics labs doing cutting-edge science, and it’s striking how clean people think science looks from the outside, and how messy and inelegant it looks from the inside. Yes, in theory, we use the scientific method, and construct hypotheses and try to falsify them, but in reality, there’s a lot of confirmation bias in play.

Example: I spent a year as a researcher at CERN working in the lab of a Nobel Prize laureate, and at the lab meeting, one grad student was presenting his data. The professor glanced at it, and was immediately suspicious - it looked too good. He said “Show me the unfiltered data”, and it turned out the student had constructed the data by applying filters until it looked right - a process that was well accepted but had been applied without the necessary judgment. But a less experienced professor might have accepted the data and ended up publishing it, even though it was constructed.

Yeah, there’s a reason that scientific revolutions (per Kuhn) often take a generation to occur, as the people with power arose as part of the previous paradigm, and they basically have to die or retire for the new paradigm to become entrenched as the new normal. When your expertise and credibility depend on a frame, it’s really hard to let go of that frame.

But developing a new and novel frame doesn’t necessarily mean you have to create it yourself from scratch. A lot of innovation comes from bringing frames from one domain into another domain (another point in favor of liberal-arts interdisciplinary thinking). I mean, what @cedric is doing here is taking decades old research in expertise development, but bringing it to the business domain. And it’s a super helpful frame!

This is reminiscent of Munger’s emphasis on collecting mental models (frames) so you can sort through them quickly and decide which to apply to your current situation. You don’t have to be a super innovative thinker, but you do have to be curious and willing to change your frames and perspectives quickly when new data arises.

4 Likes

She’s a constructivist fighting back against the computational view of the brain, and provides an account of emotion that looks to me like it’s just a neuroscientific and embodied account of RPD. Plus we share a very similar philosophy of science.

Well I was trained in it, and understand the inside view a little more. I used to do marketing for pharma and biotech, two industries where there are a lot of misunderstandings, ignorance, and lack of expertise. We often found biases to be useful frames for coordinating messaging to address certain issues we saw: the Base Rate Fallacy, the Naturalistic Fallacy, the Affect Heuristic, the Typical Mind Fallacy, etc.

Plus, a large part of my audience is trained in that tradition and I don’t want to scare them off before I can convince them.

3 Likes