Fun is the holy grail in games. Delivering a fun experience is a basic ingredient for making successful games. To succeed at this, we need to understand and deconstruct fun.
Before co-founding Product ML, I was a gaming product manager, and I like to think about these things from first principles.
Product managers make changes and ship features in their products to achieve specific goals. These goals are informed by strategic priorities, tactical data analysis and qualitative user feedback. A goal is represented by a metric or a KPI.
Say you’re a PM for a flight search site. The user funnel runs from landing on the site, through searching and comparing flights, to purchasing a ticket.
Each step of the funnel has a dropoff rate. Reducing the dropoff at each step means more users end up purchasing a flight ticket from your website and, ultimately, higher revenue.
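To make the funnel arithmetic concrete, here is a toy example with invented numbers; the steps and counts are illustrative, not real data:

```python
# Illustrative funnel with made-up numbers: per-step dropoffs compound,
# so small improvements at any step multiply into overall conversion.
funnel = [
    ("Visit site", 100_000),
    ("Search for a flight", 60_000),
    ("View results", 45_000),
    ("Select a flight", 9_000),
    ("Purchase a ticket", 1_800),
]

for (step, users), (_, next_users) in zip(funnel, funnel[1:]):
    dropoff = 1 - next_users / users
    print(f"{step} -> next step: {dropoff:.0%} dropoff")

overall_conversion = funnel[-1][1] / funnel[0][1]
print(f"Overall conversion: {overall_conversion:.1%}")  # 1.8%
```

Shaving even a few points off the worst dropoff step moves that final 1.8% noticeably, which is why PMs obsess over funnels.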
PMs come in more flavours than ice cream, so it is futile to define the typical PM process but I’m going to give it a shot.
When using ML to affect core product experience, these product management principles remain the same. Product teams affect the product indirectly through their ML models. Does this change anything?
Let’s look at a hypothetical ML-driven version of a match-3 game like Candy Crush.
This is a 9x9 board with 81 positions for candies to occupy. Each position can be filled by one of 6 possible candies (green, yellow, orange, blue, red, purple) — yum! When a level begins, the starting board can be arranged in 6⁸¹ ways, which is approximately 10⁶³ aka a bajillion ways. Each arrangement leads to a different difficulty experience. In practice, many other factors affect difficulty — the arrangement of new candies that drop onto the board after a successful match, which candies the player chooses to match, boosters used, etc. For simplicity, we’ll focus only on varying the starting board configuration using ML.
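A quick sanity check of that count (no assumptions here, just the arithmetic):

```python
import math

# 81 cells, 6 candy colours each: 6**81 possible starting boards.
configurations = 6 ** 81
print(f"6^81 = {configurations:.3e}")             # on the order of 10^63
print(f"log10(6^81) = {81 * math.log10(6):.1f}")  # ~63.0
```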
“Life is a game of cards. The hand that is dealt to you is determinism; the way you play it is free will”- Jawaharlal Nehru
(Image credit: Games for Learning Institute)
Before making Khayali Pulav (building castles in the air) and hooking up our game to an ML system we don’t fully understand, let’s train our first ML model.
We want to use ML to vary board configuration because we believe that changing the board configuration will affect the player’s perception of fun. We want players to be in their flow zone.
A good proxy for fun is retention. If players have fun playing the game, they come back to play again. So fun = retention. Our goal here is to increase retention. For our example, let’s pick D7 retention — technically, the % of players who return to play the game on the 7th day after installing it.
Now we want to build a model that predicts D7 retention for a user, given a specific starting board configuration. With a bajillion possible starting boards, we will never observe enough examples of any single configuration — the odds that 2 players even see the same starting board are 1 in 6⁸¹. This means we cannot predict retention directly from the raw board configuration. We can try, but the model will have the predictive power of a coin toss.
One strategy is to split this prediction into 2 steps.
Step 1: Pick a metric that summarises the board configuration’s difficulty. Let’s pick win rate (games won divided by games played). We then use win rate as an input to predict D7 retention.
Step 2: Decide starting board configuration. Given a desired win rate for a player, work out the starting board configuration to hit the desired win rate.
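To make the two steps concrete, here is a toy sketch in Python. The retention curve, its constants, and the `sample_board_for_win_rate` helper are all invented for illustration; a real Step 1 model would be trained on live gameplay data:

```python
import math

# Step 1 (toy stand-in): a "trained" model mapping win rate to predicted
# D7 retention. The inverted-U shape -- too easy or too hard both hurt
# retention -- and every constant here are invented for illustration.
def predict_d7_retention(win_rate: float) -> float:
    return 0.35 * math.exp(-((win_rate - 0.55) ** 2) / 0.02)

# Pick the win rate the model likes best (simple grid search).
candidates = [w / 100 for w in range(1, 100)]
target_win_rate = max(candidates, key=predict_d7_retention)
print(f"Target win rate: {target_win_rate:.2f}")  # 0.55

# Step 2 (stub): generate candidate starting boards until a difficulty
# estimator says a board should yield roughly the target win rate.
# board = sample_board_for_win_rate(target_win_rate)  # hypothetical helper
```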
Whoa whoa whoa! ML is deciding win rate now? Surely it will make degenerate decisions, like cranking up the win rate just to juice retention?
This will only happen if a higher win rate actually led to higher retention in the training data — which is why it is important to train these models on live gameplay with real players. The model can only learn effects that real players actually exhibit.
When training a D7 retention model, we also need to understand what factors will affect this prediction. These will become features when training the model. Here’s a (short) illustrative list.
1. Win rate
2. Days since install
3. Time of the day
4. Has the player spent money
5. Does the player usually use special items such as boosters
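Those factors might translate into a model input row like this; the field names and encodings below are made up for illustration:

```python
# A sketch of turning the listed factors into a feature row for a
# retention model. Field names and encodings are invented.
def make_features(player: dict) -> list[float]:
    return [
        player["win_rate"],                            # 1. win rate
        float(player["days_since_install"]),           # 2. days since install
        player["hour_of_day"] / 24,                    # 3. time of day, scaled to [0, 1)
        1.0 if player["has_spent_money"] else 0.0,     # 4. spender flag
        1.0 if player["uses_boosters"] else 0.0,       # 5. booster usage flag
    ]

row = make_features({
    "win_rate": 0.48,
    "days_since_install": 3,
    "hour_of_day": 21,
    "has_spent_money": False,
    "uses_boosters": True,
})
print(row)  # [0.48, 3.0, 0.875, 0.0, 1.0]
```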
To use ML to control product experience, we need to understand our goals (D7 retention) and the factors that affect these goals. There may be multiple competing goals like retention and monetization. Similar to ‘traditional’ product management, we’ll need a clear view of the priorities/tradeoffs and a way to translate these decisions into our ML models.
Ultimately, the perfect ML system for product experience works as a closed loop: observe real player behaviour, predict the setting that best serves the goal metric, ship the change, and feed the outcome back into the next round of training.
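A minimal sketch of such a closed loop, where the model, the lever, and the retention signal are all stand-ins invented for illustration:

```python
import random

# Toy closed loop: predict the best lever setting, ship it, observe the
# outcome, and log it for the next training run. Every function below is
# a stand-in, not a real system.

def predict_best_win_rate(features: dict) -> float:
    """Stand-in for a trained model choosing the retention-maximising win rate."""
    candidates = [w / 100 for w in range(5, 100, 5)]
    return max(candidates, key=lambda w: -abs(w - 0.55))  # pretend 0.55 is best

def observe_retention(win_rate: float) -> bool:
    """Stand-in for live gameplay: a noisy retained / not-retained signal."""
    return random.random() < 0.3 + 0.2 * (1 - abs(win_rate - 0.55))

training_log = []
for session in range(3):
    lever = predict_best_win_rate(features={})
    training_log.append((lever, observe_retention(lever)))

print(training_log)
```

The key design point is the log: every decision and its outcome is recorded, so the next model version learns from what real players actually did.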
Personalization can be annoying. I dread the day a Viagra ad will show up when I’m watching YouTube. But when done right, it leads to a fun product experience.
More articles on Machine Learning and Product Experience:
At Product ML, we believe that all products in the future will be dynamic. We’re building a platform that redefines product management and user experience, starting with dynamic difficulty in games using machine learning.