We are all familiar with the term MVP. Much digital ink has been spilled explaining this concept. A Minimum Viable Product (or Minimum Lovable Product, as Henrik suggests) is the smallest product that brings value to our users, lets us validate our initial assumptions and adapt for the future. An MVP should be small and delivered relatively quickly, so that we won't lose too much time and money if we fail to validate our assumptions.
It is all very nice in theory; in reality, however, building an MVP still requires time, effort and usually R&D resources. As small as it may be, we don't want to waste our effort on a false assumption. Yes, we'll learn from it, but in this post I want to describe another way to answer the most important question before we start an MVP: "Do people really want our product or feature?". The concept is called a "Fake Door": the MVP before the MVP.
For the sake of this post, let’s say that we have created a wonderful application and our users love it. Everything is great, but the app is absolutely free, and we are getting hungry. It is time to make our users pay for some of the new advanced features.
This should do it for the beginning. These are the minimal requirements that we'll evolve over time. This is our MVP. After we build and deliver it, we'll measure our users' actions, learn and adapt. So after approximately a week or two we'll be live and running. Lean Startup by the book!
Sounds good, right? All the items above are essential for this MVP and for us to start learning. Right? Wrong!
Let's say we build the MVP. Then we wait. We wait for a day, two, three, a week, and guess what? Not a single user decided to pay.
This entire MVP relies on a single assumption that we'll be able to validate only after the entire MVP is ready: are our users willing to pay for our product?
Why do we need to integrate PayPal if the users are not interested in our premium features? Should we build the permission system before we validate this assumption? Probably not. Let's look at another way of building our product. This time, let's examine a real-life example: how Buffer approached the same scenario. I encourage you to read their post, since it perfectly describes my point.
Buffer started with a free version of their product and after a while decided to introduce premium paid plans. However, instead of developing all the "minimal requirements" from the start, they decided to validate their assumptions one by one, without actually delivering value to users. They moved forward using the smallest possible test at each step.
This is what they’ve launched with:
As you can see, all they did was create a landing page with a "Plans and Prices" button. That button didn't lead to any plans or prices. It led to a page explaining that the actual plans were not ready yet and that the user could leave her email for updates. It doesn't really matter whether the user leaves her email or not. What matters is the fact that someone clicked the button and showed intent to learn more about the plans.
Only after a sufficient number of users had shown intent did they move on to the next test.
Let's stop for a second. How much time would it take to develop two landing pages with a button? Probably an hour. It is so simple that anyone in the company can do it. This MVP delivers no value to the users, but it gives the company enormous knowledge, validating our assumption in a super-fast and cheap way.
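At its core, the fake-door test boils down to counting two events: page views and clicks on the dead-end button, then comparing their ratio against a threshold you trust. Here is a minimal sketch in Python; the class name, thresholds and decision rule are all illustrative assumptions, not Buffer's actual setup:

```python
class FakeDoorTest:
    """Counts page views and intent clicks for a fake-door experiment.

    min_views and min_intent_rate are hypothetical thresholds: we only
    trust the click-through rate once enough traffic has seen the page.
    """

    def __init__(self, min_views: int = 500, min_intent_rate: float = 0.05):
        self.views = 0
        self.clicks = 0
        self.min_views = min_views
        self.min_intent_rate = min_intent_rate

    def record_view(self) -> None:
        # Someone landed on the page with the fake "Plans and Prices" button.
        self.views += 1

    def record_click(self) -> None:
        # Someone clicked the button: that click is the signal of intent,
        # regardless of whether they leave an email afterwards.
        self.clicks += 1

    @property
    def intent_rate(self) -> float:
        return self.clicks / self.views if self.views else 0.0

    def should_build(self) -> bool:
        # Build the real feature only with enough traffic AND enough intent.
        return self.views >= self.min_views and self.intent_rate >= self.min_intent_rate


# Toy usage: 100 visitors, 8 of them clicked the fake door.
test = FakeDoorTest(min_views=100, min_intent_rate=0.05)
for _ in range(100):
    test.record_view()
for _ in range(8):
    test.record_click()

print(test.intent_rate)     # 0.08
print(test.should_build())  # True
```

The point of the sketch is how little machinery is involved: two counters and a threshold are enough to answer "do people want this?" before any payment integration exists.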
This concept is called a "Fake Door": there is a button, but it leads nowhere. By the way, Google asks something similar in its product manager interviews ("How do you figure out if people would buy a McSpaghetti?").
The most important role of an MVP is to let us, as a company, learn what our users want. MVPs should provide value to our users, but that is a secondary priority. They should provide value only after we make sure users are actually interested in that value.
Here you can find a great post by Elad, my product manager, who explains in detail how we applied this principle in our own app.
We cannot always trick the users. Eventually we do need to implement the features we promise. The huge difference is that we'll do it only after we are sure this is indeed what our users want. We'll reduce the waste of building an MVP for users who don't need it by first building an even smaller MVP, for ourselves, to test the users' intent and validate the need.
So before you start developing any feature ask yourself:
Is this really the smallest test that can validate our assumption? Hint: if it brings real value to users, it is already too big.