Is There a Golden Standard of Stickiness?

by David Smooke, October 20th, 2017

Product Benchmarks Report Interview

Disclosure: Mixpanel, the data analytics company, has previously sponsored Hacker Noon.

I had the opportunity to interview the bright minds behind this report, Senior Analyst Phil Perry and Senior Product Manager Hubert Lin, about what motivated it.

This interview has been condensed and lightly edited for a better reading experience.

David Smooke: When you do a report like this, you go in with your hypothesis and then you come out the other side with the actual analysis and data. Were you surprised by any of the specific numbers?

Phil Perry: Everybody has their own preconceived notions of how an industry’s going to behave. I think I was a little bit surprised that media & entertainment products weren’t as sticky as I expected them to be. We could’ve reported on just average daily active users or monthly active users. But in terms of really helping people build better apps, we felt that a better initial metric to report on is the ratio of the two, i.e., stickiness: daily active users over monthly active users.

I think about my own media consumption habits, i.e., I go to Hulu every night. I expected [media stickiness] to be a lot higher than it turned out to be. But part of that is because not every app is Hulu. And I think back to classes I’ve taken in my studies where you hear that X amount of businesses fail, and that X is actually fairly large. It becomes clear that success is really something to be celebrated. A lot of apps ultimately just don’t have the stickiness you’d like. If you are above the median number, you’re actually doing fairly well and probably doing something right.
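[Editor’s note: to make the ratio concrete, here is a minimal Python sketch of the stickiness calculation Phil describes. The event log and helper functions are hypothetical illustrations, not Mixpanel’s API or the report’s data.]

```python
from datetime import date

# Hypothetical event log: one (user_id, day) pair per active day.
events = [
    ("u1", date(2017, 10, 1)), ("u1", date(2017, 10, 2)),
    ("u2", date(2017, 10, 1)), ("u3", date(2017, 10, 15)),
]

def dau(events, day):
    # Daily active users: distinct users seen on that day.
    return len({user for user, d in events if d == day})

def mau(events, year, month):
    # Monthly active users: distinct users seen at any point in the month.
    return len({user for user, d in events if (d.year, d.month) == (year, month)})

# Stickiness is the ratio of the two: here, 2 of 3 October users active on Oct 1.
print(f"{dau(events, date(2017, 10, 1)) / mau(events, 2017, 10):.0%}")  # 67%
```

[Averaged over the days of the month, a 25% ratio means the typical monthly user shows up on roughly a quarter of days, about 7 or 8 out of 30, which is the reading Phil gives below.]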

David Smooke: I found the golden standard of stickiness to be quite a term and also an interesting number. A product’s stickiness is the degree to which the existing use of a product or service encourages its continued use. Could you speak a bit to the 25%, and is there really such a thing as a golden standard of stickiness?

Phil Perry: When you say golden standard you’re putting something on a pedestal, right? And that 25% is the 90th percentile. So ultimately it means that this is better than 90% of apps. It’s probably good enough to get some sort of honors in college.

So, yeah, I think that 25% is something to strive toward. And essentially if people are logging into your product for pretty much a quarter of the month, that’s fairly successful.
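[Editor’s note: the percentile framing can be sketched directly. The stickiness values below are made up, since the report’s per-app data isn’t public; only the percentile mechanics are the point.]

```python
import numpy as np

# Made-up per-app stickiness values for one industry (not the report's data).
stickiness = np.array([0.05, 0.08, 0.10, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.30])

# The median marks "doing fairly well"; the 90th percentile is the
# neighborhood Phil calls the golden standard.
print(f"median: {np.percentile(stickiness, 50):.0%}")
print(f"90th percentile: {np.percentile(stickiness, 90):.0%}")
```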

Hubert Lin: Some of these 90th percentile metrics we found were fairly consistent across all the industries. Which is actually surprising. I thought we’d see more variation there at the higher level. But the 25% stickiness was actually quite similar across the industries that we looked at. And I do believe that number is something that folks can aspire to. But I would always caveat the use of a golden standard number, because the reality is you have to consider what makes sense for your business, your products, and what you actually want your customers to do.

Let’s say you have a product that only needs users to come back once a week or once a month; all of a sudden that 25% number could be an unrealistic standard that doesn’t actually make sense for your business. Or maybe you’re the other way around, and the only way you succeed is if someone uses you every single day. These are the kinds of things that you have to consider when you look at these numbers: am I a business that fits into the framework of this benchmarks report, or do I have some additional characteristics I need to take into account when I assess myself against this number?

David Smooke: What would you say to the next product manager as they’re looking to define what a monthly active user or daily active user is? And should it be measured by a specific action versus just logging in?

Hubert Lin: This might be personal preference, but I’ve always leaned towards taking action as the rubric for whether you’re a daily active user. There are lots of ways to get “usage” wrong, whether that’s tracking accidental clicks or other things that don’t indicate an actual interest in using the product. I don’t think the bar’s too high to have DAU geared towards key actions, especially since with Mixpanel you can track pretty much anything. But I would use action-oriented measures to indicate the active user.
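[Editor’s note: a short sketch of the action-based definition Hubert recommends. The schema and action names are invented; Mixpanel can track arbitrary events, but this is not its API.]

```python
# Count a user as active only on days they performed a key action,
# not merely logged in. The action names are invented examples.
KEY_ACTIONS = {"play_video", "send_message", "complete_purchase"}

def action_based_dau(events, day):
    """events: iterable of (user_id, day, action) tuples."""
    return len({u for u, d, a in events if d == day and a in KEY_ACTIONS})
```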

David Smooke: The jump in mobile engagement above the average on Sunday nights for financial services apps was a cool detail of the report. Could you speak a little bit to how you found insights like that?

Phil Perry: The exciting thing about any dataset is that analyzing it is a very exploratory process. So often you start with a simple hypothesis. For engagement, our initial hypothesis was along the lines of, well, if I’m trying to build an app and trying to optimize for certain outcomes, when are people doing those things?

From there, it became more a question of how else we could parse the data: by industry, and by web versus mobile usage.

I think the thing that I found really interesting with that same data was, in media, the mobile usage on the weekends being higher than on weekdays; versus on the web, where people tended to use media apps above average during the week. I think it’s again the case where very often you’re at work and maybe you’re playing a song as you work, or maybe somebody sent you a funny video and you click through, so you look at it on your laptop.
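[Editor’s note: a minimal pandas sketch of the slicing Phil describes, run on an invented events table.]

```python
import pandas as pd

# Invented events table: one row per session.
df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2017-10-01 20:15", "2017-10-02 09:30",
        "2017-10-07 21:00", "2017-10-03 14:45",
    ]),
    "industry": ["media", "media", "finance", "media"],
    "platform": ["mobile", "web", "mobile", "web"],
})

# Parse engagement by industry, by platform, and by day of week.
df["day_of_week"] = df["timestamp"].dt.day_name()
print(df.groupby(["industry", "platform", "day_of_week"]).size())
```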

Hubert Lin: For me, the thing that sends me in circles is how much of this is determined by the way end users just naturally use digital products versus the way that PMs and marketers promote usage and engagement to their end users. For example, do PMs have a standard time and date when they release new features or new versions of apps? Do marketers always send promotions at a certain time? These factors could predetermine what the pattern will be, versus just how people would want to engage with apps on their own. I think this is tough to disentangle. And so that part for me is always the confusing part: taking a look at the information and trying to figure out what causes what, if anything!

Phil Perry: The classic chicken and the egg problem.

David Smooke: How do you look at this report as helping the next product maker? If you know your product is at a low percentile, maybe you should send more push notifications or emails to bring people back to the eCommerce site. How else will this report help the reader make a better product?

Hubert Lin: For me, if I looked at these usage patterns, I would hope that the product managers and other folks who read this report are aware that there are these kinds of determining factors out there, like when marketers send promotions, among other things. But I would take the numbers at face value and then use them to shape some other strategies.

There are more takeaways here than just “run promotions to get conversion”. If I’m a product manager and I want to run A/B tests, it’s really important to me to get results as soon as possible with as much engagement as possible. And clearly from the data, starting an A/B test on a Saturday afternoon is a pretty bad idea across industries. That’s basically the way I would approach this: look at the engagement levels, think about what you’re trying to do, and make sure that your strategies fit your business model and needs. So there might be cases where apps absolutely want to try to re-engage folks on the weekend, ahead of low-engagement moments. Then great: you know when those are for your industry and your type of app, and that will help you out. Or if my app needs high engagement for a promotion, an A/B test, whatever it is, I’m going to focus my attention on the days where I see those usage spikes.

If my product doesn’t actually follow the patterns here, if I have a product that actually does see additional usage on the weekends, it would just give me another data point to dig into. Why is it that my product is different here? Why is it different than the rest of the industry? Is that good or bad for me? And this might lead you towards additional analysis where you can understand more about why your customers might be different or why your product might drive different behavior.

Phil Perry: I think that’s a really important point. When you’re looking at any sort of benchmark, just because you might deviate from it, that doesn’t mean it’s bad. Your business is purely your own, your app is purely your own. Think of a paycheck app. I only get paid so often, so I’m not going to log into that app every day. So if I log into it two days out of 30 days in the month, then that’s actually pretty good.

So ultimately, deviation from the benchmark is not necessarily a bad thing. It’s purely a case of understanding the why behind it. And that’s ultimately a question that you can probably spend a very, very long time on.

David Smooke: Lastly, could each of you just speak a little bit on your general philosophy of business data, and how businesses should use data?

Phil Perry: When I think about data and I think about business, ultimately data aims to reflect what is happening in the business and inform decision making. It’s very easy to track things because they make you feel good. It’s also very easy to track things because those things are the easy things to track, like a page view.

But that doesn’t necessarily mean that this is the thing that you need to track in order to understand if your business is succeeding. And so ultimately, put a good deal of thought into what you want to understand and why you believe that is the case. Getting that right will result in having good dashboards that let you know if your business is working or not.

If you sort of cavalierly track everything, that lack of focus can result in erroneous assumptions and being overwhelmed by your data. Ultimately, it’s difficult to say if an initiative was successful without some metrics paired with it, both leading and lagging. Let’s say you want to increase your average deal size. Well, you’ll probably do that by sacrificing a lot of small deals, and so you probably want to pair another metric with “increase deal size”, like number of deals sold.

So ultimately, when you’re thinking about metrics and about the overarching business, you have certain things you’re driving towards. For each initiative, some sort of metric should be tied to it to say: did we succeed or not? And if we didn’t, that’s not a bad thing; let’s learn from those mistakes.

Hubert Lin: Yeah, on my side of things, I use data in a couple of different ways. One is the way that Phil described, which is that data can provide an answer. Was this hypothesis I had true? If not, now I know, and if it was, then great. That is one specific way to do it, which is obviously useful for your business. Beyond that, I try to make sure that everything I’m monitoring and tracking has a purpose and is related to the actual success of the business. So that’s a very important part as well.

And then the last thing for me is when data brings up areas of investigation. If something doesn’t meet expectations, or if I notice something that seems out of place, the data can lead me to investigate further. Then we need to have conversations with customers to find out context.

Phil Perry: I also want to emphasize the value of generating additional insight and data through experimentation: setting up business experiments such that you can see statistically significant results. Ultimately, experiments are the best way for you to validate any of your hypotheses. You can certainly look through a bunch of data and generate all kinds of correlations, and that’s great, because they can at least inform some of the directions you should go in. But once you’ve gotten there, there’s only so much more iteration you can do on that correlation before you just need to try it out to see if it works. Collect the data correctly, and then ultimately choose to go forward, or modify things.
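[Editor’s note: a sketch of the kind of significance check Phil alludes to, using a standard two-proportion z-test. The conversion counts are invented.]

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented A/B result: conversions and exposures for control vs. variant.
conversions = [120, 145]
exposures = [2400, 2400]

z, p = proportions_ztest(conversions, exposures)
print(f"z = {z:.2f}, p = {p:.3f}")
# Ship the variant only if p clears a pre-chosen threshold (commonly 0.05);
# with these numbers it does not, which echoes Hubert's point below.
```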

Hubert Lin: I will say from experience that it’s extremely difficult to come up with features that produce statistically significant results in A/B tests. It’s a really tough thing to do, and potentially disheartening if you think you have a genius idea and then the test turns up nothing significant. But that is kind of the truth of A/B tests, which is why you need to keep going after it: there are big wins out there, but they happen fairly rarely.

Phil Perry: People are hard to move.

David Smooke: Yes they are. They just want to keep doing what they’re already doing.

Hubert Lin: Exactly.

Phil Perry: I’d be happy to have stayed in bed this morning but I got up.

David Smooke: Congratulations on fighting the inertia.

Phil Perry: Right. I count it a strong win.

Get your full free copy of the Mixpanel Product Benchmarks Report.