Why Interviews Matter (a Scientific Proof)
If you've been involved in the startup world, or in innovation in general, you probably already know the golden advice: "talk to your customers". Especially after the golden age of Design Thinking and Lean Startup we've been going through for about 30 years, you'd think people would have gotten the memo.
Unfortunately, I still run into loads of companies and people that keep neglecting this simple rule, all "designing for me", working inside-out and thinking: "we know what's best for our customers". I've literally run into Innovation and R&D managers at huge hardware tech companies, people who have held that position for decades, who pull out the "Ford Quote". Even my former boss at one of Europe's leading design & engineering firms — who I respect deeply and have learned a great deal from — has pulled this one on me (if you're reading this Wouter, no hard feelings).
"What's the Ford Quote?" you might ask. Oh, blissful summer child. It's only something I hate with the passion of a thousand burning suns, and — as far as I'm concerned — one of the main reasons so many startups fail. It goes like this: "If I had asked people what they wanted, they would have said faster horses".
I hate you
Now, I have some beef with this, but maybe not for the reason you think. First: Ford never actually said that. There is zero evidence supporting the claim, so crediting his success to something he never even said is absolute insanity. Second, and maybe most important, people just get it wrong. The quote in and of itself isn't wrong; it's just interpreted incorrectly.
Here's another one that holds a close second place on my shitlist and is a bit more recent. Steve Jobs famously said: "people don't know what they want until you show it to them". Basically the same thing, worded a bit differently. Again, not wrong, but again, interpreted wrongly. Both of these quotes are almost exclusively used by execs who want to absolve themselves of the responsibility of talking to their customers; they assume "it doesn't work", that "the company knows best" and that "the users won't give you the answer".
That is the absolute worst thing you could take away from them. It would be like watching Saving Private Ryan and walking out of the cinema thinking "wars look pretty awesome, we should do more of them".
Functional Fixedness
I'm sorry, but this really is a pet peeve. On to what I feel is the actual point of these quotes. They're referring to a phenomenon known as "functional fixedness": when consumers are asked to make product recommendations, they fixate on the way products or services exist and are normally used today, leaving them unable to imagine alternative functions or novel solutions.
Hence the "faster horse": people would've been unable to think of a car, only of horses (especially if you're asking how they might improve their horse experience). Same with Jobs: if you only know classic cell phones, not many people will be able to dream up an iPhone.
Side note: as you all know, Ford didn't invent the car; the first car predates the Model T by 22 years, making the "quote" even dumber.
I have a wonderful anecdote of my own that proves this point: a story my grandmother once told me about her mother-in-law (my great-grandmother). My grandmother and grandfather got one of the first ever commercial dishwashers in our country installed in their house.
Disclaimer: not my actual grandma
It's important to know that conventional laundry washing machines had been around for quite some time by then and were pretty well-known. When my grandmother told her mother-in-law about the dishwasher, she couldn't believe it. "Agnes!" she exclaimed, "How can that be? How would it even work? All your dishes will end up shattered from all that spinning around!"
So what happened here is that she was unable to think beyond her existing frame of reference of a washing machine with its big spinning drum, especially in terms of how it functioned. She simply couldn't imagine it working any other way!
To bring it back to our CPS framework, I'd like to give some credit back to the people waving these quotes around: your users won't, in fact, give you the answer to the solution you're developing; they're unable to. Furthermore, you're the expert in developing the solution: you know all about the latest technology and its applications. But as we've discussed in Always De-Risk Your Market First, you need to look at the problem first, and if there's one thing users are experts in, it's their own problems.
So I would like to propose we start a new tradition of attributing fake quotes to the wrong people. It would've been mighty helpful if Jobs had said: "Customers are not the experts in the solution, but they are in their problems".
Ah yes. Sweet, wrongful attribution.
And to bring it home with Ford, a fun fact: Ford didn't compete on speed. People didn't care about faster horses, but about more reliable and affordable ones. A horse cost around 2,000 USD a year, got tired and regularly got sick; in other words, it was unreliable and expensive. Cars, meanwhile, were still too expensive for most folks at the time. As an answer to that, Ford innovated the assembly line to mass-produce affordable yet reliable vehicles. The Model T came in at about 850 USD at launch (and at only 265 USD a few years later when production ramped up), way cheaper than a horse and always ready to go. And how, pray tell, did he figure that out? Right, by talking. To. His friggin'. Customers!
Enter Interviews
Interviews often get a bad rap, mostly driven by all of the nonsense above. I've regularly been faced with doubts and criticism, more often than not along the lines of: "what can five people possibly tell you that is even remotely statistically relevant?"
Turns out, quite a lot! I really love the visual in Jake Knapp's book "Sprint" that explains this. It's based on the work of Jakob Nielsen, a website usability researcher who, back in the 90s, wanted to figure out how many user interviews were enough to learn about the interaction issues people were having with websites. His research uncovered that — like clockwork — 85% of all user problems would surface after just 5 interviews. So he didn't bother doing more than five: he fixed those problems, went back to testing and repeated the exercise. It's a classic example of diminishing returns, where after the 4th or 5th person you will rarely get much new information.
Jakob Nielsen’s graph as visualized in “Sprint” by Jake Knapp
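If you want to play with that curve yourself, Nielsen's model for the share of problems found after n test users is 1 - (1 - L)^n, where L is the average chance that a single user surfaces any given problem (about 0.31 in his studies). A quick sketch; the function name is mine, and 0.31 is Nielsen's published average, not a universal constant:

```python
# Nielsen's diminishing-returns model: the fraction of usability problems
# uncovered after n test users is 1 - (1 - L)^n, where L is the average
# chance that a single user surfaces any given problem (~0.31 in his data).

def problems_found(n: int, L: float = 0.31) -> float:
    """Fraction of all usability problems uncovered after n interviews."""
    return 1 - (1 - L) ** n

for n in range(1, 11):
    print(f"{n:2d} interviews -> {problems_found(n):5.1%} of problems found")
```

Run it and you'll see the curve pass roughly 84% at five interviews and flatten out quickly after that: exactly the diminishing returns the graph shows.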
Now, one thing that kept bothering me, and that naysayers keep bringing up, is the statistics. Until recently I didn't really have an answer for this, until I stumbled on the work of Douglas W. Hubbard on applied information economics in his book "How To Measure Anything". In it he describes "The Rule of Five: The Power of Small Sample Sizes" with a really simple thought experiment: imagine surveying five random people in a 10,000-person company, asking them about their typical commute time.
Next, he asks you to take the highest and lowest numbers you've encountered: what do you think the probability is that the median of the entire population falls between those two values? Loads of folks guess a 30% chance, or even 50% if they're feeling adventurous. The truth is that, if you run the numbers, there is a whopping 93.75% certainty that the median falls within that range! And it's actually quite simple to explain.
The chance that a randomly picked person falls on either side of the median is 50-50; that's the literal definition of a median. So each sample is a coin flip. The only way the median can end up outside your range is if all five samples land on the same side of it, which is like flipping five tails in a row or five heads in a row. If you've paid even a little attention during your statistics classes, you'll know each of those is a 1/32 chance (1/2 to the power of 5), or 3.125%; together they add up to 6.25%. As a result, chances are 100% - 6.25% = 93.75% that the median ends up within your range, which therefore encompasses a large chunk of the population.
What are the odds? Well, 3.125% it turns out…
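If the coin-flip argument feels too slick, you can brute-force it. The sketch below simulates the commute survey against a made-up population; the lognormal distribution is an arbitrary choice of mine, and the rule holds regardless of the distribution:

```python
import random

# Monte Carlo check of Hubbard's Rule of Five: draw 5 commute times from a
# fictional 10,000-person company and count how often the population median
# lands between the smallest and largest of the five.
random.seed(42)
population = [random.lognormvariate(3.4, 0.5) for _ in range(10_000)]
median = sorted(population)[len(population) // 2]

trials = 100_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) < median < max(sample):
        hits += 1

print(f"Median inside the sample's range: {hits / trials:.1%}")
```

The printed percentage should land very close to the theoretical 93.75%, no matter which distribution you plug in.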
Now, fair to say, this works rather nicely for numerical questions, and not everything you'll be asking will be of that nature. You might also be asking about a certain behavior or experience, or asking fairly binary yes/no questions. But with a binary question it works even better: if you get the same answer 5 times in a row, there's a very good chance it's true for the majority of the population (a 93.75% chance even!).
You can take this even further. Hubbard describes another thought experiment, the "Urn of Mystery", which demonstrates "The Single Sample Majority Rule". In essence, he shows that when faced with a binary population whose proportions are unknown (they could be split 0-100, 100-0 or anything in between), taking a single sample gives you a 75% chance that the sample comes from the majority of the entire population! So if you had four urns filled with black and red balls in unknown proportions, pulling a single ball would show you the majority color in three out of four urns, on average!
Three out of four times, the single sample represents the majority
In terms of interviews, this means that even a single person's answer to a binary question already has a large chance of reflecting the majority's point of view.
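The Urn of Mystery is just as easy to check by simulation. A minimal sketch, assuming (as Hubbard does) that each urn's hidden proportion is uniformly distributed between 0% and 100%:

```python
import random

# Simulate Hubbard's "Urn of Mystery": each urn holds red and green marbles
# in an unknown proportion (uniform between 0% and 100% red). We draw ONE
# marble and bet that its color is the urn's majority color.
random.seed(1)
trials = 100_000
correct = 0
for _ in range(trials):
    p_red = random.random()               # this urn's hidden red proportion
    drew_red = random.random() < p_red    # the single sample
    majority_is_red = p_red > 0.5
    if drew_red == majority_is_red:
        correct += 1

print(f"Single draw matched the majority: {correct / trials:.1%}")
```

The math behind the 75%: for an urn with red proportion p, your bet wins with probability max(p, 1 - p), and averaging that over all p from 0 to 1 gives exactly 3/4.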
Employing the Rule of Five in Searching for Your SVA
I know I won't shut up about it, but here the importance of finding a good Smallest Viable Audience (SVA) pops up again. Moreover, when you're still searching for one during Customer Discovery (as explained in How to Develop Consumer Hardware Part 1), the Rule of Five is a great tool for figuring out whether the SVA you've targeted is in fact a good one.
A telling characteristic of a good SVA is that it's a very homogeneous group, especially in terms of needs, values and willingness-to-pay. What I've seen time and time again is that when your group is too heterogeneous, your interviews yield wildly varying answers that are scattered and lie far apart, making it difficult to draw conclusions. That is of course very frustrating, but it's telling you something equally important: "you're not there yet".
Keeping the Rule of Five in mind and always using a sample size of about 5-6 people for your first interviews with potential SVAs will quickly signal whether your positioning is tight enough yet. In fact, it's a great alternative to quantitative tactics such as MaxDiff and latent class clustering (which are, in essence, everything I've talked about here on steroids and in a brute-force approach, with a cost to match).
So next time someone pulls the Ford quote on you, remember this article and give them hell. For me.