Hi @petro — this is an excellent question, and I say this partially because I had the exact same reaction when I started working on the WBR project.
I think the exact question I asked was a more typical marketing attribution question, something like: “say one input metric is the number of newsletter subscribers to Amazon deal emails, and another is ad spend; how do you attribute a lift in sales in some e-commerce category to ad spend vs deals?” The answer I got from Colin was something along the lines of: you want a rough sense of causation, but you don’t need precision (which in this case is impossible anyway).
I’ll give a concise answer to your question first, before following it up with a few points.
The short answer is: yeah, for certain types of questions, they do it ‘by feel’, as you put it.
Except that they would object strongly to that characterisation! What they would say is that the causal model in their heads is the result of a deep qualitative understanding of the customer. On top of that (and this is my interpretation), you can verify causality by taking action to generate information: in many of these cases, you could just say “let’s drive it, or let it go, for a few weeks and see if the output metric improves or declines”.
But, yes:
This philosophy and the need to practice it (a relentless focus on free cash flow) successfully drove the creation of other capabilities, such as Amazon’s robust, extremely accurate unit economic model. This tool allows folks like the merchants, finance analysts, and optimization modelers (known at Amazon as quant-heads) to understand how different buying decisions, process flows, fulfillment paths, and demand scenarios would affect a product’s contribution profit. This, in turn, gives Amazon the ability to understand how changes in these variables would impact FCF. Very few retailers have this in-depth financial view of their products; thus, they have a difficult job making decisions and building processes that optimize the economics. Amazon uses this knowledge to do things like determine the number of warehouses they need and where they should be placed, quickly assess and respond to vendor offers, accurately measure inventory margin health, calculate to the penny the cost of holding a unit of inventory over a specified period of time, and much more.
The long and short of it, I think, is that for certain business questions you really can’t do it any other way, and attempting to get a precise understanding of correlation vs causation is counterproductive. But for other things, where precision is possible and useful, they do go after it.
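To make the “precision where precision is possible” point concrete, here is a toy sketch of the kind of per-unit contribution calculation the quoted passage describes. Every number, field name, and cost category below is invented for illustration; Amazon’s actual unit economic model is obviously far richer, covering many more fulfillment paths and demand scenarios:

```python
# Toy per-unit contribution-profit model, in the spirit of the unit
# economics described above. All figures and cost categories here are
# hypothetical placeholders, not Amazon's actual model.

def contribution_profit(price, cogs, fulfillment_cost,
                        holding_cost_per_day, days_in_inventory):
    """Per-unit contribution profit for one fulfillment/demand scenario."""
    holding_cost = holding_cost_per_day * days_in_inventory
    return price - cogs - fulfillment_cost - holding_cost

# Compare two hypothetical fulfillment paths for the same product.
ship_from_near_warehouse = contribution_profit(
    price=25.00, cogs=14.00, fulfillment_cost=3.50,
    holding_cost_per_day=0.02, days_in_inventory=20)   # ≈ 7.10
ship_from_far_warehouse = contribution_profit(
    price=25.00, cogs=14.00, fulfillment_cost=5.25,
    holding_cost_per_day=0.02, days_in_inventory=45)   # ≈ 4.85
```

The point isn’t the arithmetic, which is trivial; it’s that once every variable in the formula is measured accurately, questions like “which warehouse should serve this order?” become precise optimization problems rather than judgment calls.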
There are two other pieces that I think might be useful to think about:
First, a huge chunk of Amazon’s approach is to use proper process control tools. One big one is that metric owners must have the ability to recognise routine vs exceptional variation. I was working on a piece that explains how SPC does this, but couldn’t get something I was pleased with before the Chinese New Year holidays hit. Expect that members-only essay in two weeks’ time.
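For a flavour of how “routine vs exceptional variation” gets operationalised, here is a minimal sketch of the standard XmR (individuals) process behaviour chart calculation from SPC. The 2.66 scaling constant is the standard one from Shewhart-style charts; the function name and the sample data are my own:

```python
# Minimal XmR chart sketch: compute natural process limits for a metric
# series, then flag points outside them as exceptional variation.
# The weekly_signups numbers are made up for illustration.

def xmr_limits(values):
    """Centre line and natural process limits for an XmR chart."""
    mean = sum(values) / len(values)
    # Average moving range: mean absolute difference between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 converts the average moving range into three-sigma-style limits.
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

weekly_signups = [102, 98, 105, 99, 101, 97, 104, 100, 103, 96]
lower, centre, upper = xmr_limits(weekly_signups)

# Points outside [lower, upper] are exceptional and worth investigating;
# everything inside is routine noise you shouldn't react to week by week.
exceptional = [x for x in weekly_signups if x < lower or x > upper]
```

On this (deliberately stable) sample, `exceptional` comes back empty: every weekly wiggle sits inside the limits, which is exactly the discipline the WBR asks of metric owners.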
Second, Colin told me he believed very strongly in ‘understanding your customer first, qualitatively, before forming a hypothesis and verifying with data’. He did not like the approach of developing an understanding of the customer from data or surveys. My read is that this is an adaptation of the scientific method: you don’t want to come up with a conclusion from some pattern in pre-existing data; you want to come up with a hypothesis and then test it, or at least test it on some other data set.
I apologise for being a bit all over the place in my response to this. As is the case with many things in business, Amazon’s approach to data is a combination of judgment, a deep understanding of the customer, and numerical rigour, and that mix was what I wanted to learn from Colin in the months that I worked for him; I’m not entirely sure I understand it fully, or that I’m able to articulate it, quite yet.