I am a fan of the tension described here, but I’ve grappled with it using slightly different language. It seems to me that the question here is, “what is the appropriate and useful level of abstraction?” (And “useful” means having some predictive property that I can leverage). I think schemas and case study models (as well as the “accurate vs useful” distinctions) coexist in that sense.
When you start with an extremely high level concept, there is a lot of variability. Taking a case study approach helps tease out those differences between things. Eventually, enough cases end up coalescing back into their own useful patterns and schemas. You can then dissect those new sub-schemas with their own case studies. It’s a cycle of schemas/cases/schemas/cases as you work through making sense of a topic.
The key is retaining enough cognitive flexibility that schemas don’t become “magnets”/“knowledge shields” that strip away differences, while also not getting so hung up on identifying differences between cases that you become blind to where similarities form useful groupings.
So another way to frame the question for business purposes is, “what is useful (a predictive pattern that you can exploit) versus what is interesting (an idiosyncrasy that you can’t exploit)?”
I keep coming back to Ill-Structured Domains Aren't Necessarily Wicked - Commoncog and the idea that in an “ill-structured domain,” “concept instantiation is highly variable for cases of the same nominal type”… which to me suggests that what is happening is that the abstraction/schema is mismatched with the instantiation/cases. If the cases are really that different and context-specific, is the concept useful? I’d argue it isn’t. The concept is descriptive more than predictive, and descriptive concepts aren’t useful except in retrospect.
The other thing CFT addresses is “how do experts deal with novelty in their domains — that is, things that they’ve never seen before?”… and I think that what we call ill-structured business concepts are not an example of novelty; they’re an example of descriptive concepts and overfitted cases.
CFT feels like the approach for when you’re looking at a bunch of companies and not understanding why some seem to be succeeding while others aren’t (or trying to understand why a predictive concept has failed to be predictive/useful). Then you take the case study approach to tease out similarities and differences between them until you find the useful/exploitable patterns, and the idiosyncrasies that make them different. Popularly, I think we see this play out in epidemiology with emergent outbreaks, where you need to start with individual cases to discern the underlying patterns, while still recognizing the key differences between cases.
So the problem with “accuracy” is that its logical end point is when the ‘map becomes as large as the territory’. Usefulness doesn’t require accuracy, in the sense that everything short of 1:1 reality is an abstraction. The question is what level of abstraction serves your purposes (and different people have different purposes/problems/opportunities, and thus operate with different abstractions).