How to Test Your Startup Idea

by Aman Jain, September 11th, 2020

Too Long; Didn't Read

Most new products are unable to get any traction in the market, even if the execution is good. This often comes as a big surprise to the founders when they’ve “vetted” their idea before building. To validate your idea, you need real market feedback. There are techniques you can use to generate ideas but, for the purposes of this post, I’m assuming you already have an idea. The rule of thumb is that the idea needs the potential to reach at least $100 million in annual recurring revenue.

In this post, I’d like to share my learnings on how to validate product ideas quickly. This is an extremely useful skill to have when building 0 to 1 products: most product ideas will fail in the market, and it’s better not to spend 3-6 months building something that nobody wants. Note that idea validation is different from problem validation, which requires following a different playbook.

Most new products are unable to get any traction in the market, even if the execution is good. This often comes as a big surprise to the founders, because they’ve “vetted” their idea before building, typically by asking their friends and family, many of whom have enthusiastically said they’d use or pay for such a product if it existed.

What gives? The reality is that there’s a large difference between what people say they’ll do and what they actually do. So, simply posing a hypothetical question (would you use X if it existed?) is not good enough. User surveys like these might tell you that a problem exists, but they won’t validate that you have the right solution. To validate your idea, you need real market feedback. And ideally, you need a quick way to get this feedback without spending a ton of time and energy, because why spend months building something the market may not even want?

Luckily, there are techniques you can use to get real market feedback in days instead of months! I’d like to describe an idea validation playbook I learnt from reading the book The Right It by Alberto Savoia and from working on brand new apps myself. Once you’ve followed this playbook, you’ll know whether your idea is worth building or whether you’re better off moving on to your next idea.

Step 1. Write down the idea clearly

The first step is to come up with the idea and write it down clearly. There are techniques you can use to generate ideas but, for the purposes of this post, I’m assuming you already have an idea.

Let’s work through an example that recently popped into my head while being quarantined at home during COVID-19. I’m used to working out at the gym 3x a week. Now, with no access to a gym, I started scavenging through YouTube workout videos. I found that most of them were cardio-focused, whereas I prefer more resistance-based training. I also tried different fitness-based mobile apps, but none of them were exactly customized to the equipment I had at home. What an opportunity, I thought. I wrote down my idea clearly.

iFitness idea: iFitness will offer personalized workout plans, tailored to each user’s needs and goals. The plans will be customized based on a variety of parameters including the equipment the user already has and how much time they are willing to devote. A smart recommendation engine will be built to enable this product.

Step 2. Identify your Market Engagement Hypothesis

The next step is to convert your idea into a market engagement hypothesis. This is your hypothesis of how the market will engage with your product if you built and launched it. It forces you to think about who the “market” is, i.e., your customer.

iFitness Market Engagement Hypothesis: A lot of working professionals who are quarantined at home will pay for iFitness subscriptions to get personalized workout plans.

This is a small step forward. We’ve identified roughly who we think the customer is — working professionals quarantined at home. But, how will we objectively measure if the idea is worth building or not? We need to get a lot more specific. Time to do some back-of-the-envelope math and make some assumptions.

Let’s assume you want this to be a big venture-backed business. The rule of thumb is that the idea needs the potential to hit at least $100 million in annual recurring revenue (ARR) over a 7-10 year horizon to get venture funding. Alternatively, if you didn’t care about raising venture money but wanted a business you’d be comfortable leaving your job for, you’d probably shoot for at least $1 million ARR potential.

How many customers do we need to earn this $100 million from? We’ll have to make another educated guess: we expect iFitness to be a subscription-based service where users pay $20 per month.

MAU = $100 million / ($20 per month x 12 months)
MAU ≈ 417,000, which we’ll round to 400,000

This means we need about 400,000 MAU worldwide. Now, it’d be useful to represent this MAU number as a percentage of the total market so that we can validate this hypothesis more easily. To achieve that, we need to use a few more rules of thumb.

About 40% of revenue on the App Store + Google Play comes from the US.

According to the last census (2010), there are about 80 million people in the US in the 25-44 age group.

It’s reasonable to assume iFitness will also earn around 40% of its revenue from the US. Also, 25-44 years is a reasonable target demographic for iFitness. Using these numbers, we can figure out what percent of the target demographic we would need to capture for iFitness to have $100 million ARR potential.

X / 100 = (40% x 400,000 MAU worldwide) / 80 million people
Percent of the target market that needs to become monthly active users (MAU) = 0.2%
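If you want to sanity-check that back-of-the-envelope math, here it is as a few lines of Python. Every number in it (the $100 million target, the $20 price, the 40% US share, the 80 million people) is one of the assumptions above, not measured data, and the variable names are just for illustration.

```python
# Back-of-the-envelope check: what share of the US 25-44 demographic must iFitness win?
target_arr = 100_000_000      # assumed venture-scale revenue target, in dollars per year
price_per_month = 20          # assumed subscription price, in dollars
us_revenue_share = 0.40       # rule of thumb: ~40% of app-store revenue comes from the US
us_demo_size = 80_000_000     # ~80 million people aged 25-44 in the US (2010 census)

worldwide_mau = target_arr / (price_per_month * 12)   # ~417,000; call it 400,000
us_mau = worldwide_mau * us_revenue_share             # ~167,000
penetration = us_mau / us_demo_size                   # ~0.2%

print(f"Worldwide MAU needed: {worldwide_mau:,.0f}")
print(f"Share of US 25-44s that must be monthly active: {penetration:.1%}")
```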

Great! We’ve learnt that our idea could be big enough if 0.2% of our target market becomes a monthly active customer. Usually some percent of people will try the product once, and then some of them will become monthly active. For a paying product, assuming that 10% of the people who try iFitness become recurring customers is fairly generous. So, in our case we’d hope that 2% of people try the app and 0.2% (10% of 2%) become recurring customers. Let’s summarize our Market Engagement Hypothesis so far.

2% of people in the US aged between 25-44 years who come across the iFitness app will try it, and 10% of those people will become monthly active customers.

This is progress, but we need to zoom in even more. What’s a niche audience we can test this hypothesis against? I work at Facebook and we have an internal social group for all employees in the Bay Area. Most Facebook employees are between 25 and 44 years old, so it serves as a pretty decent representative group. I can use this as my initial testing ground.

An example of a hypothesis would be:

2% of people in the FB Social Group who see my post about the $20-per-month iFitness subscription will visit our website and submit their email address to be invited to the beta.

The submission of email addresses in the above hypothesis represents “skin in the game” for the user. The greater the skin in the game, the more confident you can be in your result. For example, sharing the link more broadly or entering a credit card number would count as even greater skin in the game. On the other hand, a simple website visit doesn’t count as a useful signal.

To summarize this step, we went from a vague idea to a very objective and testable hypothesis.

Step 3. Run your experiment and analyze results

Time to test. How do we test this without actually building the product? The market engagement hypothesis we came up with above is fairly simple to test.

We’ll buy a suitable domain and use one of the many no-code website development tools (like Squarespace) to build a website that explains the iFitness concept and collects email addresses. We’ll then make a post in the FB Social Group linking to this website and measure the number of email addresses collected versus the number of post views. This technique is also known as the Painted Door Test.
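If you’d rather hand-roll the landing page than use a no-code builder, a painted-door page is only a few dozen lines. Below is a rough sketch in Python using Flask; the copy, route names, and log file are placeholders I’ve made up for illustration, not a prescribed setup.

```python
# A minimal, hand-rolled painted-door landing page (illustrative sketch, not the real iFitness).
from flask import Flask, request
import csv, datetime

app = Flask(__name__)

PAGE = """
<h1>iFitness: workout plans built around the equipment you already have</h1>
<p>$20/month. Enter your email to get invited to the beta.</p>
<form method="post" action="/signup">
  <input type="email" name="email" required>
  <button type="submit">Request an invite</button>
</form>
"""

@app.route("/")
def landing():
    # Every page view counts toward the denominator of the conversion rate.
    log_event("view")
    return PAGE

@app.route("/signup", methods=["POST"])
def signup():
    # A submitted email address is the "skin in the game" we measure.
    log_event("signup", request.form["email"])
    return "<p>Thanks! We'll email you when the beta opens.</p>"

def log_event(kind, detail=""):
    # Append a timestamped row; tally views vs. signups later to get the conversion rate.
    with open("painted_door_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([datetime.datetime.utcnow().isoformat(), kind, detail])

if __name__ == "__main__":
    app.run(port=8000)
```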

Running this experiment successfully requires some thoughtfulness, though. We might get muddled or incorrect signal for a variety of reasons. If we do not display the fact that this service costs $20 per month, we might get too many sign-ups. Or if our marketing copy isn’t crisp enough, we might not get enough sign-ups, even though the idea is sound. It’s best to run different versions of the experiment to build more confidence in your result. You might also want to see how different cohorts react to your idea. For example, you might want to test whether women react to this idea differently than men; to do that, you could run an ad targeting women only.
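When the results come in, it helps to compare each variant against the 2% bar with some allowance for noise, since small samples make conversion rates jumpy. Here’s one way you might do that in Python; all of the variant names and view/sign-up counts below are invented purely for illustration.

```python
# Rough analysis of painted-door results: conversion rate per variant with an
# approximate 95% confidence interval (normal approximation for a proportion).
import math

THRESHOLD = 0.02  # the 2% sign-up rate our hypothesis requires

# Hypothetical results from three versions of the experiment (counts are made up).
variants = {
    "fb_social_group_post": {"views": 1200, "signups": 31},
    "ad_targeting_women":   {"views": 900,  "signups": 25},
    "ad_targeting_men":     {"views": 950,  "signups": 14},
}

for name, counts in variants.items():
    rate = counts["signups"] / counts["views"]
    margin = 1.96 * math.sqrt(rate * (1 - rate) / counts["views"])
    verdict = "clears" if rate - margin > THRESHOLD else "does not clearly clear"
    print(f"{name}: {rate:.1%} +/- {margin:.1%} -> {verdict} the 2% bar")
```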

Okay great, we have some data! We ran 3 such experiments, and the numbers look encouraging. What’s next? Our experiments were successful in proving that there is good initial interest in iFitness. But who’s to say that people will become repeat customers? Remember, our numbers assume that 10% of the users who try the app will become monthly active.

We can use another technique called the Wizard of Oz Technique. This is a popular experimentation technique where the customers interacting with the product believe it to be autonomous, but the product is actually being operated behind the scenes by a human being.

How might we use this for testing iFitness retention? I have a good friend who’s also a fitness trainer. I discuss the idea with him and he’s willing to help. I ask him to be the wizard behind the scenes so that I can punt on actually building the recommendation engine. I then make a lightweight mobile web app that accepts user preferences and payments. Behind the scenes, I work with my fitness trainer friend to create the workout plans and send them back to the user in a nice, polished email. We do this for a little more than a month. At the end of the month, we ask the customers to pay for the next month via email.
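To make this concrete, here’s a rough sketch of what that lightweight web app could look like, again in Python with Flask. The form fields, file names, and copy are all made up for illustration, and payment collection is left out; the point is simply that the “recommendation engine” is a queue your trainer friend works through by hand.

```python
# Wizard of Oz sketch: the "recommendation engine" is really a queue a human works through.
from flask import Flask, request
import json, datetime

app = Flask(__name__)

FORM = """
<h1>iFitness: tell us about your setup</h1>
<form method="post" action="/plan">
  Email: <input type="email" name="email" required><br>
  Equipment at home: <input name="equipment"><br>
  Minutes per session: <input type="number" name="minutes"><br>
  Goal: <input name="goal"><br>
  <button type="submit">Get my personalized plan</button>
</form>
"""

@app.route("/")
def preferences_form():
    return FORM

@app.route("/plan", methods=["POST"])
def plan():
    # The user believes an engine is crunching their preferences; in reality the request
    # lands in a queue that the trainer (the wizard) reads and answers by email.
    entry = {"at": datetime.datetime.utcnow().isoformat(), **request.form.to_dict()}
    with open("wizard_queue.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return "<p>Thanks! Your personalized plan is being generated and will arrive by email within 24 hours.</p>"

if __name__ == "__main__":
    app.run(port=8001)
```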

It’s important to make the experience “feel” real though. If the user sees the figurative “man behind the curtain”, then the experiment is ruined. Therefore, even though it’s an experiment, the UI/UX needs to be polished and slick.

Another important consideration is the ethics of running this experiment. Some people might feel it is disingenuous not to disclose the fact that there’s a human doing the actual work. I think this really depends on the context. If there’s something more sensitive involved, like medical data, I would be upfront with the user.

To summarize step 3, you took a very testable hypothesis and figured out a quick way to validate it using creative techniques like the Painted Door Test and the Wizard of Oz Experiment. Now, based on concrete data, you may either drop your idea, tweak the idea and experiment again, or decide to move forward with building.

Hopefully, this playbook helps you validate your ideas faster. When building 0 to 1 products, maximizing your shots on goal is what matters most. This idea validation technique will allow you to do exactly that.