First comment after lurking for a long time, as “Love this blogpost” doesn’t feel like it contributes anything.
I think what you’re highlighting @rishabhsriv is that being able to ask good/helpful questions is a tacit skill in itself (1).
What I’ve found is that you have to calibrate the sense of scale in your questions (too big and it’s overwhelming, unanswerable, or frustrating; too small and you get fragments of knowledge). You also need to calibrate the specificity of your questions (if they are very specific, you will find answers that don’t scale well; if they are too generalized, you might get good heuristics but struggle on the specific implementations). So there’s a 2x2 grid or an X/Y graph of scale and specificity for whatever you’re trying to follow up on. No one quadrant is good or bad; they just come with different goals and different challenges. I think @cedric’s jump from “how does one design good architecture” to “how do experts learn what they learn” is one of those jumps (from specific/large to generic/large), but it could have been directed at solving a problem and jumped to a specific/small question (“how would you design /this/ architecture”) or to a generic/small question (“how do you design architectures for resilience over changing use cases in the future?”) (2)
The best way that I found to describe this is when I was leading a product team and trying to give them guidance about “what is the right problem to take on”. The version of “follow your nose” that we worked out was that at the start of any PRD we would go through a multi-step exercise of “zooming in and zooming out”.
For example: once we had identified a problem/opportunity space, any initial anchor of an idea would be good enough to start zooming in and out from. We worked in enterprise healthcare software, and one of the challenges was task or service automation. Someone would propose to build a bot to solve for “scheduling”. The zoom out would be: are there other scheduling-like problems (collect data and process it through an API) in the space? Are the questions the same each time, or do they change? Do you need to be able to edit these questions as an admin, or write them once and forget? Said differently: are there other problems that share similar parameters, or in the language of this blog: is this a leaf, a branch, or a trunk that we’re talking about?
So we zoomed out and reframed the problem space more generally (from (a) a specific scheduling bot to (b1) bots that interact with APIs and (b2) bot-maker tooling… again, working through the variations of scale and specificity). Eventually we zoomed out to the level of “all input/output data collection” which didn’t feel helpful at all, then zoomed back in to a specific problem space. Zooming in and out often had a deliberate overshooting built in to force you to explore things that don’t work (reminds me of Rory Sutherland’s “test counterintuitive things” mandate (3)).
Zooming in and out along the scale/specificity parameters solved our tacit ‘follow the nose/ask good questions’ problem and set the stage for part 2 by mapping out the entirety of the possible problem space and showing us what options we had. So then Part 2 was the PRD itself, which leads to what Cedric explored in more detail in Time Allocation as Capital Allocation. Which is to say: once you know what the options are and what table you’re at, you can start evaluating all of those possibilities to figure out the possible impact of solving each problem and the probability of solving it. (I won’t go deep into the product-management opportunity-evaluation space because I could write a few thousand words on that alone.)
Over time and deliberate practice — we made it a mandatory part of the PRD process — our version of “following the nose” became a tacit skill where it looked like people could zero in on the right problem space almost from the start, but to outsiders or new team members, it was a constant calibration conversation. (4)
As I wrote this, I also realized I informally used another process for personal learning, a variation of the “Five Whys” process used for root cause analysis. When I wanted to learn expert judgement, I would often compare my novice system against an expert’s system and then identify the key points of divergence. Then I would ask a series of cascading “why” questions about those divergences, while also exploring counterfactuals and alternatives. For example: if a systems architect chose to use a certain database and a certain schema, what were they anticipating? If I chose a different database or schema, why did they not even consider that option? What are the consequences of going down that road? This made the tacit knowledge explicit and overlaid expert decision making — often in the form of narrative and informal case studies — on top of deliberate process in the form of me-doing-the-thing. The prerequisite was that I had to go through a cycle of “doing the thing” first, because otherwise I wouldn’t know where to start asking questions (generic/large questions are the worst places to /start/ with because they hide so much complexity).
Anyway. That’s how I go through the “follow your nose” problem!
—
Footnotes:
1 - My knowledge of “asking questions” comes originally out of the integrated service design / design research curriculum. Jan Chipchase’s “The Field Study Handbook” is a fantastic primer on this. It’s not about question-asking per se (it’s more about sense-making), but his work has been my personal equivalent of a Lia DiBello type. I also picked up “The Book of Beautiful Questions” by Warren Berger, but it’s too early to tell if it is useful here.
2 - “Building Evolutionary Architectures” by Ford, Parsons, Kua (O’Reilly)
3 - Rory Sutherland, “Alchemy”. I have a relevant excerpt here: No One Ever Tests Counterintuitive Things
4 - There’s something to be said here about product management being one of the few departments that needs to make this skill explicit, as their actual contribution to the “product-design-engineering” triangle is precisely that: an ability to find the right problems to work on, and define the correct parameters around them. Not to solution, though most product managers end up “solutioning” more than “problem finding”. But that’s getting off topic.