Are Your Features Actually Improving Your Product?

by David Cook, October 12th, 2016

*This is an essay I wrote while consulting for [Sysdig](https://sysdig.com/?utm_source=medium&utm_medium=link&utm_campaign=cooks-blog-post), an awesome container-native monitoring platform.*

Theoretically, new features should help users derive more value from a product. There are countless ways they can do that. A feature could help power users take advantage of more advanced functionality, or a feature could help new users understand the basics of the product. Despite those differences, the approach to measuring how they impact product usage is similar.

I like to break down my analysis into 3 different categories: exposure, usage, and retention.

  1. Exposure: quantifies the number (or percentage) of relevant users who actually saw the feature.
  2. Usage: determines how those people interacted with the feature.
  3. Retention: looks at the impact the feature might have on long-term usage of the product.

Exposure and Usage analysis can usually occur within a week of launch, but Retention will take much longer (often months) to really understand.

I’ll explain how to approach each category below in strokes broad enough to apply to any kind of feature.

Exposure

Someone can’t use the feature you just poured days, if not weeks or months, of effort into if they don’t know about it. And no, publishing a blog post about the feature is not enough. Most of your users don’t read your blog. Unfortunately, feature discovery is often ignored even though it’s quite easy to do. But that’s a topic for another post, so I won’t go into how to do that here.

Exposure is basically a ratio of the number of people who saw the feature in your product over the number of people who would be interested in the feature (relevant users). The kind of feature you’re trying to measure dictates how you measure exposure. In general, features are targeted at new users, existing users, or potential users. The denominator is the only thing that changes for each type of feature.

In order to count in the numerator of this calculation, a user must open your app and see the screen on which they would use the feature. Viewing an email or blog post is not enough since people can open that content without actually reading it, but they’re less likely to do that in your app. If the feature is a small part of an existing, cluttered screen, I would draw their attention to it using a tooltip or something similar and count interactions with that tooltip as views of the feature. Of course, anyone who interacts with the feature counts as a user who saw the feature.
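To make this concrete, here’s a minimal sketch of the exposure calculation in Python, assuming you can export a flat event log. The file name, event names, and the “relevant users” rule are all placeholders for whatever your own tracking plan defines.

```python
import pandas as pd

# Hypothetical event log export: one row per tracked event.
events = pd.read_csv("events.csv")  # columns: user_id, event, timestamp

# Denominator: relevant users (here, everyone active; narrow as needed).
relevant = set(events["user_id"])

# Numerator: anyone who saw the feature's screen, clicked its tooltip,
# or used the feature outright.
saw = set(events.loc[events["event"].isin(
    ["feature_screen_viewed", "feature_tooltip_clicked", "feature_used"]
), "user_id"])

print(f"Exposure: {len(saw & relevant) / len(relevant):.1%}")
```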

Features for New Users

A new onboarding flow wouldn’t be used by your existing users, and you probably don’t want to tell new users about it either. Yet exposure is still relevant here. Its main purpose is sanity-checking your expectations. Regardless of the size of the feature (whole new onboarding flow vs. changing one aspect of it), you still want to make sure your expectations line up with reality in terms of the number of people who see the feature. If you just shipped a new onboarding flow, you might expect 100% of users to use it; however, that might not be the case if users can skip or exit the flow. In other words, exposure for features targeted at new users will identify gaps in your product’s path to an a-ha moment.
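A quick sanity check along those lines, again with made-up event names: compare how many new users entered the flow with how many finished it.

```python
import pandas as pd

events = pd.read_csv("events.csv")  # columns: user_id, event, timestamp

signed_up = set(events.loc[events["event"] == "signed_up", "user_id"])
started = set(events.loc[events["event"] == "onboarding_started", "user_id"])
finished = set(events.loc[events["event"] == "onboarding_completed", "user_id"])

print(f"Started onboarding:  {len(started & signed_up) / len(signed_up):.1%}")
print(f"Finished onboarding: {len(finished & signed_up) / len(signed_up):.1%}")
# A gap between expectation (100%?) and reality points to skipped or
# abandoned steps in the flow.
```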

Features for Existing Users

These features should help your existing users derive more value from your product, but users can’t unlock that value without learning about and using the feature. You can think of this process as a funnel. You start with a set of users that will benefit from this feature. Hopefully, that includes all users, but it may not. Then you tell them about the sweet new feature you built and hope they use it.

Obviously, some users may not see your communications and still find the feature, so the funnel’s steps may not descend in perfect order.

The overall exposure measure for a feature can be simplified by just looking at the first and last steps of the funnel: the share of users who could benefit from the feature who actually end up using it.
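Sketched out with hypothetical step events, the full funnel and its simplified first-to-last summary might look like this:

```python
import pandas as pd

events = pd.read_csv("events.csv")  # columns: user_id, event, timestamp

# Hypothetical funnel steps, from "would benefit" to "used it".
funnel = ["eligible_for_feature", "announcement_seen",
          "feature_screen_viewed", "feature_used"]

counts = [events.loc[events["event"] == step, "user_id"].nunique()
          for step in funnel]
for step, n in zip(funnel, counts):
    print(f"{step:>22}: {n}")

# Simplified exposure: last step over first step.
print(f"Overall: {counts[-1] / counts[0]:.1%}")
```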

Features for Potential Users

The final category of feature is for users who looked at your product but claimed they weren’t interested because it was missing x feature (you’re keeping track of this, right?). For these features, you want to make sure you reach out to everyone who requested them and then track how many actually come back to the product and try the feature. Again, a funnel works well here: requested the feature → was notified of its launch → returned to the product → used the feature.

Usage

You may think usage analysis is where we find out how useful/awesome the feature is, but it’s hard to quantify value soon after a feature has launched (What is value? More time in the product? Less, but more efficient, time in the product?). Therefore, we wait until the retention category of analysis to assess the value of a feature. Usage is an intermediary step to calculating value where the goal is learning how people interact with a feature. This should give some indication of the usefulness of a feature.

What we want to learn here is whether people engaged with the feature in the way it was intended or not. In other words, how did the design and implementation perform? You can start by looking generally, but you’ll probably want to trace individual user actions to discern what path certain users took. Exploratory analysis is a big part of this.

This analysis is pretty feature-dependent, so you’ll have to make a subjective call on how to look at the data. Here are some of the methods you might use along with what they are good at:

  • Funnels: charting user progress from viewing a feature to successfully completing usage of that feature (success can be events like creating an alert/dashboard or receiving a notification in Slack)
  • Trees: tracing user movement through your app (each node represents the number of people who completed each step; a rough sketch follows this list)
  • Bar Charts: comparing the popularity of different options
  • Line Charts: displaying change in behavior over time (are more people using the feature over time?)
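As one illustration of the tree idea, here’s a sketch of a tree’s first level: for each user, count which event immediately follows their view of the feature. The event names are invented.

```python
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])
events = events.sort_values(["user_id", "timestamp"])

# The event each user performs immediately after any given event.
events["next_event"] = events.groupby("user_id")["event"].shift(-1)

# First level of the tree: where do users go right after seeing the feature?
branches = events.loc[events["event"] == "feature_screen_viewed", "next_event"]
print(branches.value_counts())
```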

Once you’ve made some charts and identified some areas of interest, dig into those. Are users running into any errors? Are they abandoning the product or flow at a particular stage? It may be helpful to reach out to specific users to get more color on why they did what they did.

It’s probably worth immediately fixing any obvious problems here before moving on to retention.

Retention

This is the most challenging, but often the most enlightening, part of analyzing feature usage. Since a new feature is unlikely to cause people to cancel or upgrade immediately after using it, you have to let a lot of time pass before you can start trying to tease out a feature’s relationship with retention/revenue. However, this additional time complicates things: it gives users more opportunities to fall in love with the product, or lose interest in it, for other reasons, which obfuscates the particular feature’s contribution to the outcome.

To try to account for this, it’s important to pay close attention to the number of people who use a feature. The more people who use a feature, the more likely a trend will emerge amongst them. This is why measuring and maximizing exposure is crucial. For benchmarks, I would use 3 months and 60 users as the minimum bars before attempting to calculate retention.

Once enough time has passed and enough users have interacted with a feature, you can use a couple of different methods to tease out the correlation between usage and retention: cohort reports and regressions.

Cohort Reports


**Time Cohorts**

You can create a time cohort report using SQL, but Mixpanel makes creating some cohort reports extremely easy if you don’t know SQL. It lets you create 3 different kinds of cohort reports: recurring, first time, and addiction. Recurring is useful for tracking use of the same feature over time. First time is great for drawing connections between two actions. And addiction shows you whether people are using your product more or less over time. Each of these cohort reports groups people into cohorts based on time.
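If you’d rather build a recurring-style report yourself, here’s a minimal pandas sketch (SQL works just as well): group users by the week they first appeared, then track feature use in the weeks after. The event name is a placeholder.

```python
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])
# columns: user_id, event, timestamp

# Cohort each user by the week they first showed up.
first = events.groupby("user_id")["timestamp"].transform("min")
events["cohort"] = first.dt.to_period("W").astype(str)
events["weeks_out"] = (events["timestamp"] - first).dt.days // 7

# Distinct feature users per cohort, per week since joining.
used = events[events["event"] == "feature_used"]
table = used.pivot_table(index="cohort", columns="weeks_out",
                         values="user_id", aggfunc="nunique")

# Normalize by cohort size to get retention percentages.
sizes = events.groupby("cohort")["user_id"].nunique()
print(table.div(sizes, axis=0).round(2))
```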


**Usage Cohorts**

You can also group people into cohorts based on usage, though you’ll have to do this outside of Mixpanel. For example, you might want to compare people who didn’t create an alert during their trial, people who created one, and people who created more than one. For each group, you could calculate the percentage of users who converted at the end of the trial. You may learn something like 70% of users who create an alert during their trial become paying customers, compared to 40% of users who don’t create an alert. This suggests alert creation contributes to a successful trial. Careful: this is correlation, not causation, but it’s still useful information to know. You should design and run an experiment to determine whether the relationship is causal.

Usage cohorts are the easiest way to uncover relationships with small cohort sizes. And a good way to represent them visually is with a bar chart.
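A minimal sketch of that comparison, assuming a table with one row per trial user (the file and column names are invented):

```python
import pandas as pd

trials = pd.read_csv("trials.csv")  # columns: user_id, alerts_created, converted (0/1)

# Bucket trial users by alert-creation behavior during the trial.
cohort = pd.cut(trials["alerts_created"], bins=[-1, 0, 1, float("inf")],
                labels=["none", "one", "more than one"])

# Conversion rate per usage cohort -- a bar chart of the "mean"
# column is an easy way to present this.
print(trials.groupby(cohort)["converted"].agg(["size", "mean"]))
```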

Regressions

Regressions are a powerful way to calculate how a number of different variables relate to an outcome (in this case: retention). Check out my answer on Quora to learn how to build and interpret a logistic regression. A linear regression is similar but is used for non-binary outcomes like months-as-a-customer or revenue-per-customer. Like cohort reports, regressions only give you correlation.
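As a sketch, here’s what a logistic regression could look like with statsmodels; the usage columns are hypothetical, and the fitted coefficients only describe association with retention, not causation.

```python
import pandas as pd
import statsmodels.api as sm

users = pd.read_csv("users.csv")
# Hypothetical columns: retained (0/1) plus per-user usage counts.

X = sm.add_constant(users[["alerts_created", "dashboards_created",
                           "teammates_invited"]])
model = sm.Logit(users["retained"], X).fit()

# Positive coefficients: higher usage of that feature is associated
# with higher odds of retention. Still correlation, not causation.
print(model.summary())
```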

Tying It All Together

After completing all of these analyses you should have a good idea of how many people are using your key features, how they’re using those features, and how usage translates into revenue. The findings could suggest a number of areas for improvement. Here are some examples:

  • If a feature is highly correlated with retention but has low exposure, then you should find a way to introduce it to more people in a contextually relevant way
  • If a feature has high exposure but few people successfully use it, then you might want to look into where people are getting stuck and possibly redesign it
  • If a feature has high exposure and usage but little correlation with retention, then it’s probably a table-stakes feature that is still important but shouldn’t consume a lot of your dev/support resources

If you discover data that seems hard to explain, try pairing it with qualitative data. Ask users who seem representative of a larger group how they feel about a particular feature. Data can help illuminate what people are doing, but conversations reveal why they’re doing it. Both are critical when telling a story and making decisions.