
True MVPs vs “what everyone calls an MVP”

Nacho Bassino (@nachobassino)

Speaker, teacher, and coach, helping product teams improve their practice to achieve greater impact.

With two real-life examples

We keep using the MVP term for the wrong thing.

There have been plenty of debates, articles, and opinions about what we should call an MVP (I even discussed introducing “MVE” for experiments). But I want to focus on a particular misuse I see in most companies, especially when the term comes from a non-product person.

What most companies do is:

  • Define a potential project/feature
  • Start talking about it, defining it further and getting a more detailed idea of what is expected
  • By the time they want to start building, everyone has fallen in love with it and it’s a gargantuan project
  • So some features are cut and “an MVP” is released

That is certainly not an MVP.

When we run into this sort of definition process, we are actually working toward a “Release 1”, or sometimes a Walking Skeleton.

To be running an MVP, we need a hypothesis that we want to validate or invalidate, and our focus should be learning and iteration over growing and adding features.

Let me provide two examples that clarify the mindset you need in each case.

Catalog Manager

In an e-commerce company, we were connected to many providers in order to display to the customer the products available for purchase. Without getting too technical, we received products from each provider with their price, availability, descriptions, pictures, and so on.

We needed a product that would allow a commercial team to manipulate the product information (everything except price and availability) in order to make descriptions, pictures, specs, etc. more accurate and attractive for our customers.

So the usual process kicked in: we had the need, we started discussing what we should do, and many great ideas came up. We discussed adding videos, merging multiple descriptions from providers, and a ton of cool things that would add value to the solution.

Of course, the time came to actually build it, and rather than go through a six-month construction process, we decided to take only the few most important features, estimated a 1.5-month construction effort, and called it the MVP.

That was it. There was no hypothesis and no way to measure the success of the project. There was no learning process in place. We built it and moved on to the next thing.

And the question I heard two months after the project was launched wasn’t “how much better are we selling now that we have it?” but rather “when are we going to be able to add videos?”.

NOTE: just to show how outrageous this is, the tool wasn’t even being used when the videos question came in. Most of the effort was wasted.

Automatic Pricing Policy

Another team I worked with was selling products that were essentially commodities, so price parity was crucial. A commercial team would manually check prices against other players every day and update price policies based on the results. This, of course, covered only a selected range of products; it was impossible to adjust all of them manually.

They wanted to automate that process and see whether having more accurate parity over a larger set of products would increase sales.

This was a huge project, with many integrations and many different kinds of policies to apply.

The team was working in two-week iterations, so they framed a goal: what is the smallest part of this project we can implement in two weeks to see whether the hypothesis is true?

The result was amazing: they narrowed the project to a particular set of products and rules, and incorporated a small manual task (uploading a file exported from another system rather than connecting to it automatically). That let them deliver a fast MVP.
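To make the MVP scoping concrete, here is a minimal sketch of what such a cut-down repricing step could look like. All names, the CSV format, and the parity rule are hypothetical illustrations, not the team’s actual system: competitor prices arrive as a manually uploaded CSV, and only a pilot subset of products is repriced.

```python
import csv
import io

# Hypothetical parity rule: if a competitor undercuts us by more than
# the threshold, match their price, but never drop below our floor price.
def apply_parity_rule(our_price, competitor_price, floor_price, threshold=0.02):
    if competitor_price < our_price * (1 - threshold):
        return max(competitor_price, floor_price)
    return our_price

# The "small manual task": competitor prices come from a CSV export
# uploaded by a person, instead of a live system integration.
def reprice_from_csv(csv_text, catalog, pilot_skus):
    updated = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        sku = row["sku"]
        if sku not in pilot_skus:   # MVP scope: only the pilot subset
            continue
        item = catalog[sku]
        updated[sku] = apply_parity_rule(
            item["price"], float(row["competitor_price"]), item["floor"]
        )
    return updated
```

The point of the sketch is the scoping, not the rule itself: one rule, one file format, one small set of SKUs is enough to test the hypothesis before building integrations.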

They had a very well-defined metric to test whether the result was a success (basically an A/B test on conversion rate).
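A conversion-rate A/B test like this can be evaluated with a standard two-proportion z-test. Here is a minimal stdlib-Python sketch with made-up numbers (the article does not report the team’s actual figures):

```python
from math import sqrt, erf

# Two-proportion z-test: did the variant's conversion rate differ
# from the control's by more than chance would explain?
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: control (manual pricing) vs. variant (automatic)
z, p = two_proportion_z(200, 10000, 230, 10000)
# If p >= 0.05, we cannot claim the new pricing moved conversion.
```

Defining the metric and the decision rule up front, before the build, is what separates this from the “Release 1” story above.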

And the MVP was “a failure”: it did not increase sales. But! They discovered constraints on how much a price can be lowered, how frequently the process should run, and even what factors other than price policy might affect parity. They learned. So they updated their backlog, and 15 days later they released another iteration with a much better chance of success thanks to that new knowledge.

Wrapping up

I started this article trying to get you to avoid using the term MVP if you are working the way most companies do, on a “Release 1”.

But this truly is a call for using more MVPs 🙂

I created a quick guide that I hope will help you define better MVPs; I encourage you to read it and use it as a quick reminder if you find it useful.

But again, this is about having a Lean Experimentation mindset. If you are thinking about a new product or feature, think about what benefit or problem you are trying to address and your hypothesis about why this may achieve that goal.

Think about the truly smallest thing you can build to test that hypothesis. It helps to set a timebox, like an iteration, in which you force yourself to test something with real users.

I hope you can see how beneficial this is, and I also hope you help me spread the word within your organization so that people more accurately distinguish an MVP from a first release.

Originally published at leanexperimentation.com. If you enjoyed it and want to receive more tools & tips on Product Discovery, you can subscribe here.

