Around 13.8 billion years ago the universe was an immensely hot, dense plasma. Then, as the universe expanded after the Big Bang, the gases within it cooled. When they reached a temperature of around 2,700°C (roughly 380,000 years after the Big Bang, long before any galaxies existed), protons and electrons were able to combine into hydrogen atoms, and light was released to stream freely in every direction as electromagnetic radiation.
Fast forward to 1963, when Bell Labs researchers Arno Penzias and Robert Wilson were studying the microwaves emanating from the Milky Way galaxy. They kept detecting background noise, which at first they thought was caused by pigeon droppings on the large horn antenna they were using! However, they soon realised that this noise was the electromagnetic radiation left over from the Big Bang.
This cosmic microwave background radiation fills the universe. We can detect it in every direction we look, and it is a constant source of noise that experiments in the field have to account for.
Part of my role as a product manager is to evaluate the potential of new products and features. These could be tests of intent, like buttons that don’t actually do anything, or banners that link to an external website or landing page. They could be tests to explore what’s important to people, like new ways to filter data or surface new information. They could expose existing functionality to different segments of your traffic, or they could be rudimentary implementations of possible future products. I’ve helped to release a number of such experiments, and I’ve often noted a relatively consistent but low level of engagement (around 1%), regardless of what was being tested.
I wonder if this level of engagement exists regardless of the feature you’re testing. Certainly you can’t discount the possibility that people will notice a change and interact with it just because it’s new. But that novelty effect only applies to returning visitors who are familiar with your product; to a first-time visitor, everything is new. Does this engagement expose a natural tendency of people to explore things they see in a website or app out of curiosity? Will a small fraction of people click your thing just because it’s there? Is there a persistent low level of interaction that we need to account for when assessing the merits of features? Are we just detecting the background noise of the internet?
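One way to make that question concrete is to check whether an observed engagement rate is even distinguishable from an assumed background rate before reading anything into it. Here’s a minimal Python sketch; the 1% background rate, the function name and the traffic numbers are all hypothetical, purely for illustration.

```python
import math

def p_value_vs_background(clicks: int, visitors: int, background_rate: float = 0.01) -> float:
    """Two-sided p-value for H0: true engagement equals the background rate.

    One-sample proportion z-test under the normal approximation. The 1%
    background rate is an illustrative assumption, not a measured constant.
    """
    observed = clicks / visitors
    se = math.sqrt(background_rate * (1 - background_rate) / visitors)
    z = (observed - background_rate) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# 1.2% engagement over 10,000 visitors vs an assumed 1% background rate
print(p_value_vs_background(clicks=120, visitors=10_000))  # ≈ 0.044
```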
In any experiment, whether a scientific study or an A/B test, it’s important to understand the context in which it takes place. What other signals could be conflated with your results? What other factors could affect your interpretation of the data? What else do you need to be aware of so that you can accurately isolate the impact of your experimental change on your metrics? Good scientists and good product managers explore all possible reasons behind any variation in the data before claiming it was caused by their change. They try to prove themselves wrong.
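This is also why a concurrent control group is so valuable: seasonality, traffic mix and any site-wide background noise land on both arms of the test and cancel out of the comparison, leaving mostly the effect of your change itself. A minimal sketch of the standard pooled two-proportion z-test, again with hypothetical numbers:

```python
import math

def ab_test_p_value(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: control and variant share the same engagement rate.

    Pooled two-proportion z-test. Because both arms run concurrently,
    site-wide background effects hit both equally and drop out of the
    comparison between them.
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Control at 1.0%, variant at 1.2%, 10,000 visitors per arm (hypothetical)
print(ab_test_p_value(100, 10_000, 120, 10_000))  # ≈ 0.18
```

Notice that the same 1.2% engagement that looked distinguishable from a fixed 1% assumption is not distinguishable from a concurrently measured 1.0% control: estimating the baseline from data adds uncertainty of its own.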
[Image: The Lagoon Nebula, in the constellation of Sagittarius. © NASA Goddard Space Flight Center]
However, perhaps there’s a more important question: do we really care about experiments whose results are indistinguishable from background noise? Are they going to push your business into new and exciting territory? You could argue that small gains are worthwhile when you’re optimising the conversion rate of your core product funnel: feeding the cash cow while you explore other avenues. There will come a point, though, where this produces diminishing returns. I’ve also seen many examples where serving customers’ needs better actually decreases funnel conversion. It becomes a balancing act between satisfying people’s jobs-to-be-done and understanding (or potentially evolving) your business model.
In his 2017 letter to shareholders Amazon’s Jeff Bezos describes obsessive customer focus as a way to stave off “Day 2” complacency. Obsessive customer focus could take the form of iteration and incremental improvement or, as Bezos says, you can “plant seeds, protect saplings, and double down when you see customer delight”.
Improving your solution for a user need that has already been met has its place, but the step-change improvement to your business will come from discovering and satisfying an unmet user need. So adopt an exponential mindset and look for opportunities that could improve your key metrics tenfold, not by 1%.
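As a toy back-of-the-envelope (every number here is hypothetical): a full year of compounding 1% wins gets you roughly a 13% improvement, still an order of magnitude short of one successful 10x bet.

```python
# Toy back-of-the-envelope with hypothetical numbers.
baseline = 100_000                   # e.g. conversions per month

incremental = baseline * 1.01 ** 12  # twelve compounding 1% wins in a year
step_change = baseline * 10          # one successful 10x discovery

print(f"{incremental:,.0f}")         # ≈ 112,683
print(f"{step_change:,.0f}")         # 1,000,000
```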
Of course this relies on you testing the right things: the riskiest assumptions and the largest unknowns, or focusing any MVP on the core tenet of the hypothesised customer need. Maintain the big picture; if you tackle things at too fine-grained a level there’s a risk you’ll get stuck in the details too early. A good rule of thumb is to ask whether the proposed experiment could produce a result that would actually stop you pursuing that particular train of thought, or whether at the end you’ll simply say “that’s interesting, but the real product will be different enough that we should go ahead anyway”. As Gareth Williams, the CEO of Skyscanner, once said:
“We went looking for minnows and what have we got? A bucket of minnows!”
Remember, in an era when the received wisdom for winning World War II was more and better bombers, a civil servant decided to authorise the building of a prototype for a radically new plane, the Spitfire, because it would be “a most interesting experiment”. And for the small price of a house in London the British created a plane that saved millions of lives and, arguably, the free world.
So the next time you look at your roadmap, ask yourself how many of the items are contributing to the background radiation of the internet, and how many are aiming to change the way your business or your market works.
Check out my new site: ExperimentationHub.com for free tools and advice to help you run better experiments and become more confident in your results.