In my last article, I attempted to describe ways to use a standard behavioral analytics stack, like Mixpanel, to measure NLP bots. It should be noted that Mixpanel is not the be-all and end-all of analytics across the entire bot stack, but it can provide a quick way to understand and visualize key bot behaviors by tapping directly into the NLP process (i.e. Intent identification).
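As a quick, hedged recap of that idea, here is a minimal sketch of intent tracking, assuming a server-side bot and the official Python mixpanel library; the event and property names are illustrative rather than a prescribed schema.

```python
# Minimal sketch (assumption: server-side bot, official `mixpanel` Python
# package). Each time the NLP layer identifies an intent, report it to
# Mixpanel as an event. Event and property names are illustrative only.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # project token from Mixpanel settings

def on_intent_identified(user_id, intent, confidence):
    """Called by the bot's NLP layer whenever an intent is identified."""
    mp.track(user_id, "Intent Identified", {
        "intent": intent,          # e.g. "book_meeting"
        "confidence": confidence,  # the NLP engine's confidence score
    })
```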
I did not touch upon more advanced topics, such as using the Mixpanel Notifications feature (via webhooks) to send bot messages to selected groups of users based on their past interactions with bot(s). For example, on a chat platform it could be used to promote bots using some kind of bot-recommender (a simple collaborative filter will suffice).
If you want to go down this route, then pay particular attention to populating the people profile, as this is what Mixpanel uses to identify the recipients of campaigns.
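As a rough sketch of what that could look like with the Python library (the property names are my own assumptions, not a required schema):

```python
# Sketch: keep the people profile up to date so campaigns can target users
# by their past bot interactions. Property names are assumptions, not a
# required schema.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

def record_bot_interaction(user_id, bot_name):
    # Attributes that campaigns can filter recipients on...
    mp.people_set(user_id, {"last_bot_used": bot_name})
    # ...plus a running usage count per bot.
    mp.people_increment(user_id, {"uses_" + bot_name: 1})
```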
I also didn’t mention the key question of how to interpret bot metrics. On a collaboration product like Cisco Spark, there are many different types of bots with potentially different usage behaviors. For example, some bots might be used infrequently, yet still be highly useful. Standard “usage profiles” might be misleading, especially when viewed as part of a wider collaboration context where users are engaged in complex multi-task activities.
Mining these kinds of usage profiles is where detailed analytics becomes useful (way beyond what I described in part 1) and relates to the subject of this article: developer analytics.
Here I want to consider the role of analytics not in the usage of bots, but in the making of bots, i.e. the "behaviors" of developers. I will briefly explain why developer analytics is important and then offer a few notes on how to think about developer funnels, and even back-to-back funnel analysis. Note that the remainder of the article applies to any developer funnel, not just bots. (Also, this is mostly about developer analytics in general, but with a Mixpanel flavor when looking at certain approaches to specific challenges.)
The strategic imperative of opening a digital service or product to developers via APIs is now well established. Some call it the “API Economy.”
Economy is a useful way of thinking about developer services. Indeed, the expert analyst Andreas Constantinou refers to the entire developer ecosystem by the phrase Developer Economics. If you think that term is hyperbole, just take a glance at this presentation for a glimpse of how real (and sophisticated) the developer economy really is.
From an economics point of view, products that lie at the intersection of two or more complementary groups of actors are often referred to as platforms. Economic complement is exactly the right term because it reveals how positive supply and demand characteristics on one side of the platform can lead to virtuous (positive) supply and demand on the other side. (This is in opposition, say, to the cannibalization effect of a non-complementary economic relationship.)
Put simply, the more apps that an app store has, the more it ought to increase demand for the core product — e.g. an Android device. (As it happens, Android is a 5-sided platform: users, advertisers, OEMs, carriers and, of course, developers!)
Of course, looking at the number of apps (or integrations) is simplistic, because quantity can quickly become a vanity metric. However, by the logic of the long tail, and simply as a statistical view of the product space, the more that developers interact with the platform to add "features", the more likely it is that benefits will ensue.
Platform economics in terms of innovation dynamics is a well-known trope amongst innovators, many of whom refer to Open Innovation by Henry Chesbrough.
This innovation aspect is the key to the API boom as many players, from small to big, have realized that it’s often better to let others innovate rather than attempt it alone.
The developer economy per se is nothing new. Microsoft have been courting developers for decades. And it doesn't have to be a self-serve platform, as the big gaming platforms (e.g. PlayStation) demonstrate.
But self-serve APIs are still a rapidly growing part of modern business and, just like any other business process, ought to be subject to the same rigor of analysis as, say, a sales pipeline. However, as the new breed of developer marketing agencies (e.g. Catchy) might tell you, the reality is that many developer platforms follow the "if you build it, they will come" approach, often with a heavy focus on initial acquisition (e.g. via developer events) while neglecting systematic analysis and marketing based on standard funnel metrics.
If you've had anything to do with running a start-up or managing an online product, then you can't have missed Dave McClure's AARRR metrics pitch.
I will stick with his template and show how it might be applied to the “Southbound” developer side of the platform — i.e. the “developer portal”.
Let me differentiate the two primary uses of analytics in this process: counting things and understanding behaviors.
The first part, counting, is where business leaders tend to go first, with questions like "How many devs are on my platform?"
Unfortunately, it’s easy to get caught up in what I call “vanity metrics” where the only focus is on numbers and not the story they’re telling.
This second part is more revealing. Again, as my friends at various developer marketing agencies tell me, developer platform managers often don't really know who is doing what on their platforms: who came and why, how long they stayed, what made them leave (or stay), what would have made them stay, how effective the things they built are, who uses them, and so on?
All of these questions — and there are lots more like them — have answers somewhere in the analytics (and/or via plain old surveys). Only a systematic and rigorous instrumentation and analytics approach will reveal any answers:
And there's an even deeper form of question: how do we know what developers might be doing if we gave them X, where X is unknown? Figuring out that there might be an X is one step, whilst figuring out what it might be is yet another. Together, they constitute an insight, and insights are often only revealed by a relentless analytics focus. This is more the domain of the "unknown unknowns" (yes, that really is a thing).
Returning to AARRR, let’s remind ourselves of the acronym’s meaning:
A — Acquisition
A — Activation
R — Retention
R — Referral
R — Revenue (hmmm, we'll get to this one.)
Of course, the process of mapping the statistical events at each stage of the AARRR process (which isn’t necessarily linear) is where we use funnel analysis and where Mixpanel comes in.
We start by drilling down into what the AARRR stages mean, one step at a time:
The next step, per McClure's advice, is to build a comprehensive table of as many actions (or events) as possible that characterize and quantify each step in the AARRR funnel. An example is below with generic events, but you should create one with events specific to your developer portal, onboarding process and acquisition activities. Assignment of which activities belong in which stage of the funnel can sometimes be fuzzy, but the point is to start with something, start measuring, and try to drive growth and the desired behaviors.
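Separately from that table, here is a hedged sketch of what the instrumentation side might look like, again using the Python mixpanel library; the event names and stage assignments are placeholders you would replace with your own portal's actions.

```python
# Sketch: track generic developer-portal events, each tagged with the AARRR
# stage it belongs to, so funnels and reports can group by stage. Event
# names and stage assignments are placeholders for your own table.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

AARRR_STAGE = {
    "Visited Docs":       "Acquisition",
    "Signed Up":          "Acquisition",
    "Created API Key":    "Activation",
    "First API Call":     "Activation",
    "API Call (Week 2+)": "Retention",
    "Invited Teammate":   "Referral",
}

def track_dev_event(developer_id, event, extra=None):
    props = {"aarrr_stage": AARRR_STAGE.get(event, "Unclassified")}
    props.update(extra or {})
    mp.track(developer_id, event, props)

# Example: a developer makes their first API call.
track_dev_event("dev-123", "First API Call", {"api": "bots"})
```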
One novel use of Mixpanel that I attempted for acquisition analysis was to join more than one website into a single Mixpanel project in order to track user behaviors from a partner’s hackathon site. This gave us raw numbers in one place for subsequent analysis.
Of course, in this case we were unable to control the DistinctID setting for the partner events, but this could be solved programmatically in some cases. Moreover, it is my view that Mixpanel should consider how to solve this problem more generally (e.g. via some kind of cross-site tracker) so that more of their customers can create multi-site projects in order to have a more holistic view of the entire funnel.
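For illustration, here is one hedged way the programmatic fix could look: a small relay under your control that re-tracks partner-site events into the shared project using a distinct_id scheme you choose. The endpoint shape, hashing scheme and property names are all assumptions.

```python
# Sketch: a relay we control that re-tracks partner-site events into the
# shared Mixpanel project under a distinct_id scheme of our choosing (here,
# a hash of the developer's email). The partner is assumed to POST events
# to this relay; names and shapes are hypothetical.
import hashlib
from mixpanel import Mixpanel

mp = Mixpanel("SHARED_PROJECT_TOKEN")

def stable_distinct_id(email):
    # Same email on either site -> same distinct_id in the shared project.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def relay_partner_event(email, event, properties):
    props = dict(properties)
    props["source_site"] = "partner_hackathon"  # keep provenance for filtering
    mp.track(stable_distinct_id(email), event, props)
```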
As it happens, the developer funnel that I am tracking in one Mixpanel project is related to the data in another: we track the developer portal and the bot marketplace in separate Mixpanel projects.
Even better, it would be ideal to perform joined-up funnel analysis on both sides of the platform, like so:
However, this kind of joined-up analysis in one place is not easy. The challenge with user-behavior platforms like Mixpanel is that any joined-up-ness is attempted via the continuity of the DistinctID, which is a user attribute. But in a back-to-back scenario like the one above, the continuity is more likely an app id, or similar.
In other words, funnels are not joined by user, but in this case by app. It’s a kind of “outer join” via an app id to view the two funnels back to back, which in Mixpanel land would look something like this example:
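Outside the Mixpanel UI itself, one rough way to prototype such an app-level join is on exported event data. The sketch below assumes events from both projects can be exported (e.g. via Mixpanel's raw event export) with an app_id property attached; the event and column names are assumptions.

```python
# Sketch: join developer-side and usage-side events by app id, assuming both
# projects' events have been exported to CSV and carry an `app_id` property.
# Event and column names are assumptions.
import pandas as pd

dev_events = pd.read_csv("developer_portal_events.csv")  # e.g. "Published Bot"
usage_events = pd.read_csv("marketplace_events.csv")     # e.g. "Bot Message Sent"

# Collapse each side to one row per app.
dev_side = (dev_events[dev_events["event"] == "Published Bot"]
            .groupby("app_id")["time"].min()
            .rename("published_at"))
usage_side = (usage_events[usage_events["event"] == "Bot Message Sent"]
              .groupby("app_id").size()
              .rename("messages"))

# Outer join on app_id: apps that were published but never used (and vice
# versa) stay visible in the back-to-back view.
back_to_back = pd.concat([dev_side, usage_side], axis=1)
print(back_to_back.sort_values("messages", ascending=False).head(10))
```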
I said I’d mention the R-Revenue problem, so here goes. Many (most?) developer platforms and API programs are not aimed at creating marketplaces for developers to make money. The revenue issue here is revenue from the core product. I sincerely doubt that most dev platforms have any idea, or model, for translating developer actions into end-user $ as the relationship is potentially complex.
However, as I come from a financial modeling background, I think there are potential approaches that might be at least worth pursuing.
The standard approach, which forms a kind of revenue proxy, is to look for correlation between usage of developer apps on the platform and DAU- or MAU-style KPIs. However, this approach often rolls the entire engagement up into a single event (i.e. "a user engaged with an app"), which cannot be usefully dissected into value estimates on an app-by-app basis.
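For completeness, that proxy amounts to little more than a correlation on a daily roll-up, which is exactly why it resists per-app dissection. A minimal sketch, assuming a daily export with (hypothetical) app_engagements and dau columns:

```python
# Sketch: the usual revenue proxy, a correlation between daily app-engagement
# volume and DAU. Assumes a daily export with (hypothetical) columns `date`,
# `app_engagements` and `dau`; note nothing here is attributable per app.
import pandas as pd

daily = pd.read_csv("daily_platform_metrics.csv", parse_dates=["date"])
print(daily["app_engagements"].corr(daily["dau"]))  # Pearson correlation
```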
One approach is to ask the sales guys to produce a kind of tiered financial model related to the lifetime value of certain types of customer and the cost of churn etc. These models could be used to tag certain customers and customer activities as having relatively higher (or lower) value than others.
It would then be useful to tag the corresponding events with "value attributes" which, although crude approximations and proxies for R-Revenue, would at least make the R-Revenue strand of the AARRR framework visible to all users of the analytics: product managers, stakeholders, whomever.
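A hedged sketch of what such tagging could look like: a tier-to-value mapping supplied by the sales model, attached as properties on the relevant engagement events. The tiers, dollar figures and event names here are invented purely for illustration.

```python
# Sketch: tag engagement events with crude "value attributes" derived from a
# sales-supplied tier model. Tiers, dollar figures and event names are
# invented purely for illustration.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

# Hypothetical lifetime-value proxies per customer tier, from the sales model.
TIER_VALUE = {"enterprise": 50000, "team": 5000, "free": 0}

def track_app_engagement(user_id, app_id, customer_tier):
    mp.track(user_id, "Engaged With App", {
        "app_id": app_id,
        "customer_tier": customer_tier,
        "value_proxy": TIER_VALUE.get(customer_tier, 0),  # crude R-Revenue proxy
    })
```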
Otherwise, it is potentially too easy to fall into the trap of self-referencing and vanity metrics that make people feel good but don’t necessarily mean much to the business. This, so I am told, is a common trap across many developer platforms.
I've tried to give a very brief overview of the basics of funnel analysis of the developer journey via the standard AARRR metrics framework. Just considering the developer journey as a formal, measurable entity can yield fruit, assuming that a developer community is being constructed to add value to the core product (as opposed to merely ticking a box).
Remembering that the journey is related to the core product usage is vital so that attempts can be made to create joined-up funnel analysis. However, this is not easy. I have made a few recommendations for how this might be done by joining different user flows (sites) into a single Mixpanel project, although the tool is not entirely designed for that kind of approach. Some experiments in this regard have proven interesting, but there is still a long way to go in creating joined-up analytics for multi-sided platforms.