
4 Ways You Are Probably Doing Product Analytics Wrong


Pallavi Modi (@Uggernaut)

Building Product @FlixBus Twitter: @Pi_Modi

An article that I read recently stated that 41% of businesses struggle to turn data into decisions. This got me thinking about my own experience with product analytics.

Over the last two decades, the Web has transitioned from a space for hobbyists to one where billion-dollar businesses are made.

This shift made way for another: what used to be Web Analytics evolved into what we now know as Product Analytics.

In their glossary, Optimizely defines Web Analytics as:

“Web analytics is the measurement and analysis of data to inform an understanding of user behavior across web pages.”

Product Analytics, on the other hand, is a combination of quantitative and qualitative insights that inform product decisions.
For these purposes, you may find yourself looking at different kinds of data, e.g.:

  • user interactions with the website and apps
  • behaviour patterns such as retention and churn
  • purchases and cancellations
  • user feedback and Net Promoter Score (NPS)
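As a toy illustration of the second kind of data, here is how weekly retention could be computed from a raw activity log. The user IDs and dates below are made up for the sketch; real event data would come from your analytics tool's export.

```python
from datetime import date

# Hypothetical activity log: user_id -> set of dates the user was active
activity = {
    "u1": {date(2023, 1, 2), date(2023, 1, 9)},
    "u2": {date(2023, 1, 3)},
    "u3": {date(2023, 1, 5), date(2023, 1, 12)},
}

def week_of(d):
    # ISO (year, week) bucket for a date
    return d.isocalendar()[:2]

def retention(activity, week_a, week_b):
    """Share of users active in week_a who were also active in week_b."""
    cohort = {u for u, days in activity.items()
              if any(week_of(d) == week_a for d in days)}
    retained = {u for u in cohort
                if any(week_of(d) == week_b for d in activity[u])}
    return len(retained) / len(cohort) if cohort else 0.0

week1 = week_of(date(2023, 1, 2))
week2 = week_of(date(2023, 1, 9))
print(retention(activity, week1, week2))  # u1 and u3 return, u2 churns -> 2/3
```

Churn for the same cohort is simply one minus this number.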

In my product journey as a PM for Vistara (Tata–Singapore Airlines JV) and FlixBus (one of Europe’s largest online travel portals), these are the 4 common mistakes (and solutions) that I have observed when it comes to Product Analytics —

1. Blind trust in the tool is dangerous

First things first — having the best tools at your disposal means nothing if:

  • You have no idea why, where and how the data is being collected;
  • You don’t use the tool to actually deep dive into the data to gather insights;
  • You never question the correctness of the data you see;
  • You are sure of the correctness of the data but not what it says about your product.

In the end, any analytics tool is only as good as the people using it.

To move in the direction of product analytics, once you have set up the
tool(s), ensure you have done data sanity checks, and actually start
using the tool, but with a curious and critical mindset.
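As an example of what such a sanity check might look like, here is a minimal sketch that scans a batch of tracked events for missing user IDs, duplicates, and implausible timestamps. The event schema (`user_id`, `event`, `ts`) is hypothetical, not tied to any particular tool.

```python
from datetime import datetime, timedelta

def sanity_check(events, now=None):
    """Return a list of human-readable warnings; an empty list means the batch looks sane."""
    now = now or datetime.utcnow()
    warnings = []
    seen = set()
    for e in events:
        if not e.get("user_id"):
            warnings.append(f"missing user_id: {e}")
        key = (e.get("user_id"), e.get("event"), e.get("ts"))
        if key in seen:
            warnings.append(f"duplicate event: {e}")
        seen.add(key)
        ts = e.get("ts")
        if ts and ts > now + timedelta(minutes=5):
            warnings.append(f"timestamp in the future: {e}")
    return warnings

events = [
    {"user_id": "u1", "event": "purchase", "ts": datetime(2023, 1, 1, 12, 0)},
    {"user_id": "u1", "event": "purchase", "ts": datetime(2023, 1, 1, 12, 0)},  # duplicate
    {"user_id": None, "event": "click", "ts": datetime(2023, 1, 1, 12, 1)},     # no user
]
print(sanity_check(events))  # flags the duplicate and the missing user_id
```

Even a check this crude catches the most common tracking bugs before they poison your dashboards.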

2. Not defining success early on

One of the first lessons I learnt is to clearly define what success looks like for your product.

It is possible that your ideal success metric is not the same as the
organization’s success metric. As long as your success metric feeds into
the overall organization’s success metric, you are doing fine.

Say you are a product manager responsible for the Search Listing page at an eCommerce organization in the fashion domain. True success metrics for
your organization could be conversion rate and average order value.

However, your success metrics could be Progression rate to the next step and basket size.

While selecting a success metric, I follow these 3 steps -

  1. Identify an actionable core metric for the product that feeds into the organization’s goals.
  2. Choose a supplementary metric, e.g. average basket size along with progression rate
  3. Apply the “So What Test”
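To make the Search Listing example concrete, here is a toy computation of the two metrics, progression rate to the next step and average basket size. The session records and field names are made up for the sketch.

```python
# Hypothetical session records for the Search Listing page
sessions = [
    {"viewed_listing": True, "reached_product_page": True,  "basket_items": 3},
    {"viewed_listing": True, "reached_product_page": False, "basket_items": 0},
    {"viewed_listing": True, "reached_product_page": True,  "basket_items": 1},
    {"viewed_listing": True, "reached_product_page": True,  "basket_items": 2},
]

listing_sessions = [s for s in sessions if s["viewed_listing"]]
progressed = [s for s in listing_sessions if s["reached_product_page"]]

# Core metric: share of listing sessions that progress to a product page
progression_rate = len(progressed) / len(listing_sessions)
# Supplementary metric: average basket size among those who progressed
avg_basket_size = sum(s["basket_items"] for s in progressed) / len(progressed)

print(progression_rate, avg_basket_size)  # 0.75 2.0
```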

3. Missing issue trees / hypothesis-driven analytics

We all know that gathering insights about product usage is important for
product development. Then why do most of us give up on digging them out?

After speaking to many of my peers, I have narrowed it down to one thing: data can be overwhelming.

However, the problem often is not the amount of data or the shortage of time at hand. It usually is the lack of a hypothesis-driven approach, or a
preference for what I call the diving-in-head-first approach.

Let’s say your daily email dashboard shows a 5% dip in day-over-day conversion rate (CVR). What do you do?

If your first instinct is to open your favourite analytics tool — Stop! Take a breath. Grab a pen and paper and try to answer —

What is the impact on Y if X changes?

Y being CVR in this case, and X could be any number of factors that you think could impact CVR.

Once the list of factors is in place, start plotting an issue tree.

This might seem like a lot of work the first time, but trust me, you already know the factors by heart.

By following this approach, you will be able to narrow down the exact
problem that could be causing the dip in CVR and you won’t have to dig
into 500 different reports to find clues to this not so big mystery.
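The pen-and-paper decomposition can also be sketched in code: break CVR into funnel steps and compare the step-wise rates day over day to see where the dip originates. All numbers and step names below are invented for the illustration.

```python
# Two days of a hypothetical funnel: counts of sessions reaching each step
yesterday = {"visit": 10000, "search": 6000, "product": 3000, "checkout": 900, "purchase": 450}
today     = {"visit": 10200, "search": 6100, "product": 3050, "checkout": 610, "purchase": 305}

steps = ["visit", "search", "product", "checkout", "purchase"]

def step_rates(funnel):
    # conversion rate of each step relative to the previous one
    return {f"{a}->{b}": funnel[b] / funnel[a] for a, b in zip(steps, steps[1:])}

before, after = step_rates(yesterday), step_rates(today)
# relative change of each step rate; the most negative one is the prime suspect
changes = {step: after[step] / before[step] - 1 for step in before}
suspect = min(changes, key=changes.get)
print(suspect, f"{changes[suspect]:+.0%}")  # product->checkout -33%
```

One branch of the issue tree (the checkout step, in this made-up data) explains the whole dip, and you never opened those 500 reports.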

4. Not combining qualitative with quantitative

My biggest learning in the realm of product analytics came from House of Cards. In season 1, episode 12, Raymond Tusk says -

“Decisions based on emotions aren’t decisions at all. They’re instincts, … which can be of value. The rational and the irrational complement each other.
Individually they’re far less powerful.”

By bringing quantitative and qualitative together, you can triangulate the solution and have greater confidence and richer insights for product development.

One way to combine the two approaches is to use qualitative methods like
user research to formulate hypotheses that you can then validate using
quantitative approaches like A/B testing.
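As a sketch of the validation half, here is a standard two-proportion z-test one might run on the conversion counts of an A/B test, using only the standard library. The counts below are invented.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: both variants convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 6.5% vs. 5.0% for A, 4000 sessions each (invented numbers)
z, p = two_proportion_z(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(f"z={z:.2f}, p={p:.4f}")  # significant at the usual 5% level if p < 0.05
```

The research tells you *which* change is worth testing; the test tells you whether the observed lift is more than noise.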

By combining the forces of quantitative and qualitative methodologies, you can answer the WHY with an evidence-based decision-making approach, rather than only answering the WHAT by sticking with data-based decision making.


We all appreciate good product instincts in our fellow product managers,
but I can’t agree more with Alistair Croll when he says in his book Lean Analytics,

“Instincts are experiments. Data is proof.”

And in the end, as my dear friend once put it, "if you’ve never enjoyed a
detective story, if you’ve never spent hours in front of a data anomaly
and felt a rush of “guilty excitement” trying to solve a problem,
consider doing something else, something that gives you pleasure".
