Setting KPIs for Platform Products

by Willem de Kleijne, November 30th, 2020

Here's everything I wish I knew five years ago when a startup entrusted all of their internal tooling to me (a startup co-founder who had only built consumer products). To make this article actionable, you can start measuring the success of platform products using the checklist at the end of the post.

This post covers four sections: why product management for internal tools is different, understanding user goals through problem statements, how to set leading Key Performance Indicators (KPIs) for them, and how to deal with the people trying to achieve those indicators.

Why platform products are the Special Olympics of product management

Most books on product management and measurement focus on consumer products, which involve a very different customer relationship than building a machine learning platform for the data scientists sitting one desk away.

We'll use the term 'platform' interchangeably with 'internal tools' for the sake of this post, meaning services that are inward facing (your customers usually consist of other engineering teams).

These are technologies that enable products to share data with each other, like a content management system, data pipelines, or offering Kubernetes as a service.

This makes the following tools from customer-facing product management literature less useful:

- Your users often can't choose alternative products to do their daily work, so business performance KPIs around revenue, retention or customer acquisition costs make zero sense.
- The user base is smaller than those of consumer products like e-commerce websites or apps, making the N for experiments low. Customer satisfaction (Net Promoter Score) is cool for consumer apps, but when your user base consists of seven colleagues you're not going to capture enough data points.
- A/B/n-testing goes out the window when your colleagues spend hours a day following the same flow in the tool you provide. They probably know the tool better than you do.
- Some internal products (in data processing or devops for example) don't have a user interface, making product usage KPIs more difficult to adapt to your context.

If we can't use those classic indicators that the rest of the product people in your organisation are using, what do we put on our slide deck for the next quarterly planning session? Instead of adapting existing business or user experience KPIs, let's build our own indicators from the ground up by starting with the user problem we're trying to solve. But how does one recognize a genuine user need?

Rant: real user needs versus top-down company needs

Too many internal tools are conceptualized top-down to fulfill a perceived company need. This tendency stems partially from a 'build it and they will come' mentality and a desire to copy Big Tech initiatives because a manager read about them in a Gartner report.

Drop a line in the comments if you're working on a tool "because LinkedIn does it this way too". I've witnessed teams replicate Netflix's Kafka cluster setup even though they only had 0.0001% of Netflix's throughput. Another team sank two years into constructing a data catalog from scratch without knowing who they were building it for.

Before measuring product success, we have to make sure that a user exists for the product. This sounds simple enough until you realize how many products are C-level pipe dreams and pet projects. Very few colleagues wake up in the morning with the desire to migrate to a new AWS account or to get used to a different data warehouse to access the same old dataset; it's the company that benefits from increased security, reduced costs and compliance. That doesn't make them bad project goals, but they aren't useful for measuring user-centric product development.

One hallmark of products without users is output-focused metrics, like the number of data pipelines built or the number of services migrated to Kubernetes. It's logical: our managers want roadmaps with solutions on them, so we as product people tend to be passionate about solutions.

Measuring output through lagging indicators is easier than focusing on leading indicators like user input (e.g. decreasing the amount of time a task takes). With all of the classic success metrics (listed in the first section) being output-oriented, it's understandable that product people measure output as well. However, no amount of solution output is useful if you don't first understand your stakeholder's problem.

What's your problem?

So how do we distinguish user problems from company goals? Talk to your stakeholders and fill out the following problem statement with them. Don't know who to talk to? Then it's likely that your product doesn't answer a real customer need.

Problem Statement

[STAKEHOLDER] (describe the person using empathetic language)
NEEDS A WAY TO [NEED] (needs are verbs)
BECAUSE [INSIGHT] (describe what you've learned about the stakeholder's needs in terms of business impact and urgency)
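
To make the template concrete, here's a minimal sketch of what a filled-in statement could look like, written as a small Python structure so it can live next to your product documentation. The stakeholder, need and business impact below are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProblemStatement:
    stakeholder: str      # describe the person using empathetic language
    needs_a_way_to: str   # needs are verbs
    because: str          # business impact and urgency

# Hypothetical example for a Content Management System platform team
cms_statement = ProblemStatement(
    stakeholder="Maria, a content editor juggling three campaign launches a week",
    needs_a_way_to="publish a reviewed article without waiting on an engineer",
    because="every hour of delay pushes back the campaign and burns ad budget",
)
print(cms_statement)
```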

You're not done yet. User problems change over time, and to ensure a good product-market fit you're going to have to revisit this with them. When is your next sprint review? Use this existing ritual to refine the problem statement with your customer.

Problems turned into product KPIs

Now that we understand what our user needs we can measure how far along we are in solving their problem. Remember: measure user input (difficult but worthwhile to capture in leading indicators) over solution output (usually reflected in lagging indicators).

What kind of leading indicators can we come up with? Here are some examples:

- Mean Time to Decision (for Business Intelligence products)
- Mean Time to Publish (for a Content Management System)
- Mean Time Between Failures / Mean Time to Fix (for devops platforms)

See a pattern here? They are measuring the task you're automating for your stakeholder. Try it out yourself: how can you measure the task your stakeholder is trying to complete?
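
For instance, a leading indicator like Mean Time to Publish can be computed straight from your CMS event log. Here's a minimal sketch, assuming a hypothetical schema in which every record carries a draft-creation and a publication timestamp.

```python
from datetime import datetime
from statistics import mean

# Hypothetical CMS events: when a draft was created and when it went live.
publish_events = [
    {"draft_created": datetime(2020, 11, 2, 9, 15), "published": datetime(2020, 11, 2, 16, 40)},
    {"draft_created": datetime(2020, 11, 3, 10, 0), "published": datetime(2020, 11, 5, 11, 30)},
    {"draft_created": datetime(2020, 11, 4, 14, 20), "published": datetime(2020, 11, 4, 15, 5)},
]

def mean_time_to_publish_hours(events):
    """Average hours between draft creation and publication."""
    durations = [
        (e["published"] - e["draft_created"]).total_seconds() / 3600
        for e in events
    ]
    return mean(durations)

print(f"Mean Time to Publish: {mean_time_to_publish_hours(publish_events):.1f} hours")
```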

What if your stakeholder says that their problem is bad data quality or too many bugs? Then you're most likely not getting past their surface-level frustrations with the existing product. Use the Five Whys technique to get to the bottom of the task they are trying to complete. You can recognize surface-level frustrations because they tend to focus on quality aspects of your solution (see architectural fitness functions), not on activities stakeholders need to complete.

How to deal with smart people: forget about targets and set balancing KPIs

Okay, so now that we've got some KPIs defined you're going to have to deal with the people trying to achieve them.

Keep the number of product KPIs low (one to three per product); this prevents cherry-picking. Stick with them for a while so you can see developments over time.

If you're leading product teams you've probably noticed that you're dealing with smart people who will go to great lengths to achieve targets, especially when their bonuses depend on them. Targets are not KPIs. When you give people targets they will focus on achieving their small part at the cost of larger company goals. This is called local optimization.

The same predicament happens when teams cling to an indicator set long ago without frequently getting feedback from users: they will optimize for that KPI instead of zooming out to see if they are solving actual problems. One way to mitigate this behaviour is by balancing one KPI with another: your front-end dev can't improve the (otherwise sensible) mean time to publish metric by stripping away the safeguards in your Content Management System if there is a qualitative, activity-focused KPI that balances it (like the % of time spent on fixing errors per content creator).
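
To make that concrete, here's a minimal sketch of tracking the primary KPI (mean time to publish) next to its balancing KPI (% of time content creators spend fixing errors). The per-sprint numbers and the 10% threshold are invented for illustration.

```python
# Hypothetical per-sprint numbers for a CMS team: the primary KPI improves,
# but the balancing KPI reveals the cost of cutting corners.
sprints = [
    {"sprint": 42, "mean_time_to_publish_h": 7.5, "pct_time_fixing_errors": 4.0},
    {"sprint": 43, "mean_time_to_publish_h": 5.0, "pct_time_fixing_errors": 5.5},
    {"sprint": 44, "mean_time_to_publish_h": 2.5, "pct_time_fixing_errors": 18.0},
]

ERROR_TIME_BUDGET_PCT = 10.0  # assumed threshold agreed with content creators

for s in sprints:
    flag = " <- balancing KPI breached" if s["pct_time_fixing_errors"] > ERROR_TIME_BUDGET_PCT else ""
    print(
        f"Sprint {s['sprint']}: "
        f"MTTP {s['mean_time_to_publish_h']:.1f}h, "
        f"time fixing errors {s['pct_time_fixing_errors']:.1f}%{flag}"
    )
```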

Checklist to start measuring internal products

1. Understand the user need by filling out the problem statement template.
2. Come up with indicators that measure how the problem is being solved. Are you measuring outcome instead of output? Remember to focus on activities (leading indicators) instead of solutions (lagging indicators).
3. Reflect on the people who will work with those KPIs: what metric can balance out someone cutting corners to achieve your primary KPI? Don't incentivize cheating with personal targets.

Willem de Kleijne is a Group Product Manager at SumUp (we make payments simple), hiring product people and engineers with an affinity for data products. Apply!