In my previous post I described how to apply the Double Diamond - a tool of design thinking (https://www.designcouncil.org.uk/news-opinion/what-framework-innovation-design-councils-evolved-double-diamond) - to product discovery. In this post I will be diving deeper into diamond two (D2) where we address four types of risk as defined by SVPG:
- Is it valuable to the customer?
- Is it usable by the customer?
- Is it feasible for us to build it?
- Is it viable for our business?
D2 in a nutshell: conjure as many solutions as possible (divergent thinking) and then validate each idea to eliminate all but the optimal solution (convergent thinking).
NB: The final convergent step should not be confused with the “Learn” step which is intended to reflect a “ship to learn” mindset (https://www.intercom.com/blog/intercom-product-principles/).
D2 is an iterative process and starts with some hypotheses, which can be validated at varying levels of fidelity. A good hypothesis takes a format something like this: “We believe suggesting improvements to authors at the point of submission will increase the quality of submitted manuscripts, as measured by a decrease in the desk reject rate from 50% to 45%”. See a great post on hypotheses here <https://www.producttalk.org/2014/11/the-5-components-of-a-good-hypothesis/>. Surveys, interviews, wireframes, live data prototypes and A/B testing are typical methods to validate a hypothesis. On occasion, for a big bet, an idea may pass through multiple levels of validation, say starting with a survey, then a wireframe and then an A/B test (assuming your hypothesis is not disproven at each step). Common sense is needed to determine how much time and effort to spend on validating an idea (i.e. cost vs benefit). As Ray Dalio puts it in principle 5.6 (https://www.principles.com): “Think of every decision as a bet with a probability and a reward for being right and a probability and a penalty for being wrong. A winning decision is one where the reward times its probability of occurring is greater than the penalty times its probability of occurring”. So let’s dive into D2.
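Dalio’s betting framing translates directly into a quick back-of-the-envelope check. Here is a minimal sketch; the probabilities and payoffs below are purely hypothetical numbers for illustration, not figures from any real decision.

```python
def expected_value(p_right: float, reward: float, penalty: float) -> float:
    """Expected value of a bet: reward weighted by the probability of
    being right, minus the penalty weighted by the probability of
    being wrong (per Dalio's principle 5.6)."""
    return reward * p_right - penalty * (1.0 - p_right)

# Hypothetical example: a feature bet we think has a 60% chance of
# paying off 100k, and a 40% chance of costing us 40k in wasted effort.
ev = expected_value(p_right=0.6, reward=100_000, penalty=40_000)
print(f"Expected value: {ev:+.0f}")  # positive means a winning bet
```

The point is not precision - you rarely know these numbers - but that even rough estimates force you to weigh the cost of validating an idea against the cost of being wrong.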
The discovery workshop
First, make sure you have the right people in the room - a physical room where possible. This should include product, engineering and design and, depending on your situation, a data scientist. In my current role we always include a subject matter expert from the area of the business where the problem is causing the most pain.
It is important to find a real subject matter expert rather than someone with a lot of opinions.
I start this kind of workshop with a slightly modified version of the Lean Canvas (http://leancanvas.com). I prepare a first draft of the canvas prior to the meeting and then, in the meeting, walk through it box by box, updating as we go. It’s important for everyone to contribute and feel part of this process - I find this helps get everybody into a ‘product thinking’ (https://www.mindtheproduct.com/why-product-thinking-is-the-future-for-product-management/) mindset. This should take about 30 minutes to an hour at most.
At this point everybody should be focused on the problem.
It’s important to note that the outcomes of these workshops are shared with other key stakeholders to mitigate viability risk early on. For example, it may not be necessary to include a participant from your ethics department, but sharing your ideas early with ethics can save you a lot of time and wasted effort. Unfortunately I speak of this example from personal experience :/
There are many methods to conjure up ideas to solve the problem at hand (selected in diamond one). Ideas can come from anywhere, including outside the room, but typically they come from:
- Random gut feel
- Iterating existing features
- Customer feedback
- Tech (e.g. scale)
- Quality (e.g. bugs/performance)
Which method is best for your team is an experiment in itself. Using Jobs to be Done (https://hbr.org/2016/09/know-your-customers-jobs-to-be-done) is one of my favorites. A method I haven’t yet experimented with but am super keen to try is the opportunity solution tree (https://www.producttalk.org/2016/08/opportunity-solution-tree/).
Easier said than done
One of the things I struggle with is speed - validating ideas quickly to reduce the cost of failure.
Iterating at speed (without forgoing too much quality) is difficult. This is what I like to refer to as Discovery Flow: in my imagination, this is when dual track development (https://www.jpattonassociates.com/dual-track-development/) is working in perfect harmony (NB: I have never achieved perfect harmony). Setting up a Customer Advisory Board (CAB) (https://www.productplan.com/glossary/customer-advisory-board/) and having recurring scheduled calls with users on an ongoing basis is key to good Discovery Flow. My awesome UX lead and I are recruiting our CAB at the moment - so any academic researchers out there wanting to help, please reach out [end plug].
The other thing I find difficult is designing experiments to validate ideas that are both meaningful and as small as possible. One of my heroes, Don Norman (https://www.nngroup.com/books/design-everyday-things-revised/), has a great example of this (if you haven’t read The Design of Everyday Things, it really is one of the classics - I think it, along with Lean Analytics and Inspired, is one of the three essential product management books). Norman and his team wanted to understand how pedestrians would respond to driverless cars, so they went to an auto store, bought some car seat covers and made suits out of them. They then drove around in these suits so it appeared there was no driver. Genius!