
Why We Run 200+ Product Experiments in a Year and Why You Should Too

by GR8 Tech, April 9th, 2021

Over more than eight years in the product sphere, I've worked across marketing, analytics, and product management. For the last year, I've led the Growth team at Parimatch Tech. Together with my team, we have run over 200 experiments, testing new features and improving product and business results. But why is this so vital? Before launching any product change, it's essential to understand whether it will work at all, so you don't waste time and money.

Owners, CEOs, and employees of product companies who don't want to spend months of development and hundreds of thousands of dollars testing their hypotheses will definitely find this article useful. With experimentation, the same validation can be done quickly and cheaply. This approach has become our routine, so let's see how it works.

Experiments and ways of launching them

Experiments are a cheap, fast, and easy way to test hypotheses. You can't make progress on belief in an idea alone: before investing in and scaling a solution, you should first find out whether it works. Yet product companies regularly convince themselves of a new feature's effectiveness and jump straight to implementation without validating either the problem or the solution.

For example, someone in management sees that a competitor or another product has a particular feature and wants the same for their own business, with no understanding of what value, if any, it will bring to users. In that case there is no validation at all, only a desire to do something. Alternatively, you may already understand the problem or the demand but be unable to judge the effectiveness of the proposed solution until the feature is fully built. Of course, such situations happen. Nevertheless, you can almost always find a way to test the solution before rolling it into production, without blowing the budget or breaking the deadline.

There is nothing wrong with being wrong. If you don't test your hypotheses and generate new solutions, you can't move forward quickly and effectively. A culture of experimentation pushes us to uncover many different product problems and a considerable number of solutions. If something didn't work during an experiment, it's worth remembering that "failure" and drawing conclusions for the future without wasting further resources. So what prevents companies from running experiments? First, a lack of time for the process. Second, the team's daily routine simply doesn't include this kind of activity.

Many people imagine that running experiments in a company works like this: "Hey, I have a cool idea. Just check if it works." Reality is nothing like that. Behind the experiments is a straightforward, streamlined process. The core principle is to set the right goals, create hypotheses, test them, and focus on the growth point.

Each experiment has four main components: a hypothesis, an action, data, and conclusions. In other words, we form a theory, test it, analyze the resulting data, and draw conclusions.
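
As a rough illustration, those four components map naturally onto a simple record. The sketch below is a hypothetical structure in Python, not our actual tooling; all field names are made up:

```python
from dataclasses import dataclass
from typing import Optional

# A minimal sketch of tracking the four components of an experiment:
# hypothesis, action, data (metric + readings), and conclusions.
@dataclass
class Experiment:
    hypothesis: str                  # what we believe and why
    action: str                      # the change we ship to test it
    metric: str                      # the data we collect to judge it
    baseline: Optional[float] = None
    result: Optional[float] = None
    conclusion: str = ""             # what we decided afterwards

exp = Experiment(
    hypothesis="Explicit password rules will lift registration conversion",
    action="Show required character classes inline on the signup form",
    metric="registration_conversion_rate",
)
```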

Our teamwork is structured so that we generate ideas together. First, we define the product's growth area: we hold a group session, try to put ourselves in the user's shoes, and examine how the functionality that could affect the metric we want to improve actually works. Then we meet with other teams, show the problems we found and our assumptions for solving them, brainstorm, and ask them to fill in anything we missed. Finally, we prioritize all hypotheses using the ICE criteria: impact, confidence, and ease of implementation. The last criterion matters most to us; some growth teams won't even start an experiment if it would take more than five days to build. A minimal scoring sketch follows below.
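
ICE scoring is usually done by rating each factor on a 1-10 scale and combining them. A minimal sketch, assuming the common multiplicative convention; the hypotheses and scores here are made up:

```python
# ICE: impact * confidence * ease, each rated 1-10; higher is better.
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """'Ease' rewards hypotheses that are cheap and fast to test."""
    return impact * confidence * ease

backlog = [
    ("Inline password rules on signup", 7, 6, 9),
    ("Redesign the whole deposit flow", 9, 4, 2),
    ("Reorder sports categories on home", 4, 5, 8),
]

# Sort the backlog so cheap, high-impact tests come first.
for name, i, c, e in sorted(backlog, key=lambda h: -ice_score(h[1], h[2], h[3])):
    print(f"{ice_score(i, c, e):4d}  {name}")
```

Note how the big redesign sinks to the bottom despite its high impact score: with an ease of 2, it is exactly the kind of experiment that never makes it into a five-day window.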

Developers are a significant part of the experimentation team: they understand how everything works technically, and they are involved in generating ideas as well as in prioritizing and launching them. Previously, our team had only front-end developers, since we ran all experiments on the front end and handed validated solutions to the core teams for implementation. Recently we added a back-end specialist, so now we can properly build proven solutions into the product ourselves.

Practical part — how everything works

Employees often don't believe experiments are effective: they insist everything works fine and are then surprised by "unexpected" results. Here's a real-life example. Our registration flow had unclear password validation: there were no explicit instructions about which characters the password had to contain. When we raised this with the developers, they said: "What's so complicated? There's a question mark in the corner of the page; click it and read everything if something isn't obvious."

However, we decided to run an experiment with a more structured password validation that explained exactly which characters were required to complete registration (at least one lowercase letter, one uppercase letter, and one digit). The result was fantastic: within two hours, the registration conversion rate was up 64%.
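
As a rough illustration of what that structured validation might look like, here is a hypothetical per-rule check in Python. The rule set mirrors the one above; everything else (names, messages) is illustrative:

```python
import re

# Each rule gets its own message, so the form can show exactly what is
# missing instead of a bare "invalid password" error.
RULES = [
    (re.compile(r"[a-z]"), "at least one lowercase letter"),
    (re.compile(r"[A-Z]"), "at least one uppercase letter"),
    (re.compile(r"\d"),    "at least one digit"),
]

def password_problems(password: str) -> list[str]:
    """Return the human-readable rules the password still violates."""
    return [msg for pattern, msg in RULES if not pattern.search(password)]

print(password_problems("parimatch"))   # ['at least one uppercase letter', 'at least one digit']
print(password_problems("Parimatch1"))  # []
```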

Another story concerns an experiment to raise the minimum bet for a user. The company wasn't excited about it: everyone thought such a move would provoke negativity. But we took a chance and ran it on a small number of users.

The result was incredible: almost all users were delighted. It was a telling lesson in how wrong our assumptions about user reactions can be, and that kind of assumption-driven thinking slows down any progress. So it's essential to be brave enough to try every experiment, even the risky ones; the safest way is to check how it plays out on a small number of users first.
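
Rolling a risky change out to a small, stable slice of users is commonly done with deterministic hash-based bucketing, so each user stays in the same group across sessions. A minimal sketch; the experiment name and the 5% rollout are assumptions, not our actual setup:

```python
import hashlib

def in_experiment(user_id: str, experiment: str, rollout_pct: float) -> bool:
    """Hash (experiment, user) to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < rollout_pct

# The same user always lands in the same bucket, so their experience
# stays consistent for the whole run of the experiment.
print(in_experiment("user-42", "min-bet-increase", 5.0))
```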

By following this simple rule, we stop being the people who just sit in the office convinced they know everything in the world. We honestly admit that we don't know how users will behave, and every time we check, we draw new conclusions and improve the product based on practical experience.

Why failed experiments are crucial

Some companies live by an ideology best described as "if you're wrong, you're incompetent." Our company has different principles. We accept failure easily, especially during experiments. After a few mistakes, we know for sure whether a hypothesis works, and we get the chance to deliver an even more remarkable result. This builds a culture of actual results instead of plausible assumptions.

Failures often occur when you start working with functionality you've never faced before. For example, our latest experiment involved activating an inactive audience. Our team had never done this, and no one had comparable experience. We naively thought we could re-engage our passive audience straight through the app. We exported all the "lifeless" users, divided them into cohorts, and ran a test, but making effective contact turned out to be mission impossible.

We then narrowed the focus to users who had visited the site less than twice a week over the previous month. They were exported from the database, and half of them took part in the experiment. Over its timeline, very few of them returned to the site, so we concluded that the chance of winning back inactive users with a promotion is very low. We presented the results in the company's internal channel, and the takeaway was that communication channels are a better way to bring such audiences back to the product.

Fortunately, we have never had a failed experiment after which performance indicators went downhill. We count as failures those experiments we believed in but which changed nothing for the better.

We don't lose much money on such failures; you could say the cost of a mistake is covered by the cost of our team's day-to-day work. Since we keep experiments small and short-term, launching the functionality never costs a fortune, and large-scale experiments don't make it into the pipeline at all. Until we are sure a hypothesis works, we won't sink months of development and an endless budget into a feature's full launch. In this sense, fuck-ups in experiments not only save the company money but make more of it.

It's essential not only to learn from failures but also to share your mistakes with other teams. Why? People often don't think about why things work the way they do and never question them. When we share the experience of failed experiments across the company, everyone starts to wonder whether there is a problem in their area of responsibility too, and colleagues begin to question their everyday routines and test new hypotheses of their own.

Lifehacks for experiments

There are several rules for turning experiments into common practice, with pleasing results at every level of the company:

Set clear goals. This is the most crucial component. The goal should cover not only the growth of metrics but also the number of experiments run.

Keep focus. It's important to dedicate a specific period to a specific metric: for example, we work on the registration page for three months, then on something else for the next three. Many employees want to test their own hypotheses as soon as possible and ship the functionality they came up with themselves. So you need to hold the team's attention and say: "Yes, it's an amazing idea, but we can't take it on right now. We are focused on another thing, which is our priority."

Simplify everything. "MVP squared" is our team's slogan. We usually hear the term MVP in the context of new product launches, but it applies to experimentation too: the faster you get user feedback, the sooner you earn or save money.

If we have a giant hypothesis that would take a long time to test, we simplify it as much as possible and run the experiment in several stages. For example, to check whether users are interested in a new feature, we can imitate it: for a small percentage of users we launch an interface change and watch for improvement. They can interact with it up to a certain point, after which they see the message: "Sorry, development is in progress." This lets us measure and quantify how people interact with the new feature. If they are interested and active, we continue the work and connect the back end; if not, we don't waste the resources at all. A minimal sketch of this staged setup follows below.
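
This is essentially the "fake door" pattern: the entry point is real, the feature behind it isn't. Here is a minimal sketch; the feature name, event fields, and logging call are all hypothetical stand-ins:

```python
import json
import time

def log_event(event: dict) -> None:
    # Stand-in for an analytics pipeline; here we just print the event.
    print(json.dumps(event))

def on_new_feature_click(user_id: str) -> str:
    """Record the user's intent, then show the placeholder instead of the feature."""
    log_event({
        "event": "fake_door_click",
        "feature": "cash-out-v2",   # hypothetical feature name
        "user_id": user_id,
        "ts": time.time(),
    })
    return "Sorry, development is in progress."

# Interest = clicks / users exposed. If it clears a threshold agreed on in
# advance, the back end gets built; otherwise the door quietly disappears.
```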

Set a goal to improve your metrics. When people are responsible for launching innovations, they tend to get their part done and move on without paying proper attention to results and specific metrics. For example, they commit to shipping features A, B, and C in six months; once the first one is ready on their side, they may not even notice that it isn't working. When new features are the goal, all the focus goes there too. But if the goal is to improve metrics and specific business indicators, results are always the priority. Product development is a complex, time-consuming process that demands constant re-evaluation of what to focus on right now, and a metrics goal helps keep that focus in the right place.

Where to look for hypotheses and why it's crucial to make finding them a routine

Picking hypotheses on the basis of "I saw a cool thing over here, so I guess it may work for us too" won't bring results. It's vital to understand where the product's growth area lies and what exactly you want to improve. Beyond product and competitor analytics, there are other sources of new hypotheses. Here's what works best for us:

User research should become a routine. For instance, our team has made it a rule to spend two hours every Friday meeting users and collecting feedback. Such processes are hard to make routine, but it's possible, and it really works. There is no greater treasure trove of insights than your users: all of our most successful experiments were shaped by their feedback.

Every idea matters. Employees are the main drivers of experiments and the main source of new ideas. Ask everyone to propose ideas for improving the product, but remember that you set the rules: announce which product area you are working on at the moment and its deadline, and take only those ideas into work.

Conclusion

Experiments let a company test its product-improvement ideas simply, quickly, and cheaply. Before launching a significant new feature, it's best to make sure anyone needs it at all. This process leaves room for mistakes, since each failure doesn't cost the company money; it saves money. And that is what helps the team see the real results of their work.

Anastasia Burdyko,

Growth Team Lead at Parimatch Tech

Also published on: https://medium.com/@parimatch.tech/how-to-run-200-product-experiments-in-a-year-2a9d88a48b08