Over the past year at EmailOctopus we’ve blogged about our successes: from sending 1 billion emails through to our launches on Product Hunt. Inevitably, though, all startups and small businesses go through tougher times — periods where acquisition slows and growth flattens, or, worse still, revenue or customer numbers contract.
In the past year, since we went full-time with the business, we’ve had our share of bad weeks, and a couple of bad months in there too. Very few businesses avoid a tough period at some point, so it’s important to identify slow periods early, so you can focus resources on rescuing a tough week and stop it becoming a bad month. Here’s how we do it at EmailOctopus.
Monitoring the business
We monitor hundreds of data points when assessing the health of our business, but we’ve always found revenue to be the most important metric. The more established EmailOctopus becomes, the more predictable the business becomes. In the early days acquisition could fluctuate significantly, and we’d get excited that ‘growth was up 50%’ this month. Percentages make great vanity metrics, but for most SaaS businesses, including our own, exponential growth is rare.
Through monitoring our revenue and customer growth regularly and carefully, using ChartMogul, we have a solid idea of what makes a ‘normal’ month at EmailOctopus. Our MRR (Monthly Recurring Revenue) growth has now reached the point where it usually fluctuates by just $300 or so per month, assuming we don’t do any huge launches, change our pricing structure or significantly increase our paid advertising spend. Our customer growth is also somewhat predictable. We know that ~20% of leads become trials and around 60% of trials go on to become paying customers — so without any significant change to our traffic or on-site conversion, we can predict how many customers we’re going to acquire with a reasonable degree of accuracy.
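The funnel arithmetic above can be sketched in a few lines. The ~20% and ~60% conversion rates come from the post; the function name and the example lead count are purely illustrative:

```python
# Rough sketch of the lead -> trial -> paid funnel described above.
# Rates are the approximate figures from the post; everything else is illustrative.
LEAD_TO_TRIAL = 0.20   # ~20% of leads start a trial
TRIAL_TO_PAID = 0.60   # ~60% of trials become paying customers

def expected_paying_customers(leads: int) -> float:
    """Predict new paying customers from a month's lead count."""
    return leads * LEAD_TO_TRIAL * TRIAL_TO_PAID

# e.g. 1,000 leads in a month -> roughly 120 new paying customers
print(expected_paying_customers(1000))
```

With stable rates like these, a dip in any one stage (leads, trial starts, or trial conversions) shows up quickly against the prediction — which is what makes a ‘slow week’ visible in the first place.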
Reacting to a slow month
At EmailOctopus, once we experience a week which seems slow, we look to react to it by running what we call a ‘fire drill’. It’s a term originally used by the team at Secret Escapes when analysing and attempting to resolve bad trading days (as Secret Escapes operate in B2C daily deals, the day was the level at which negative trends could be spotted).
A ‘fire drill’ is split into two parts. The first part is finding any issues which may have caused the downturn. At EmailOctopus, the majority of our fire drills revolve around the key question of “Why has revenue growth slowed?” — or put simply, “Why aren’t we making more money?”!
From the core question we whiteboard all the factors which may contribute to slower revenue growth. These include churn, whether overall acquisition of leads and trials is down, or whether an issue lies within the paid plans. From each of these factors we branch off questions to answer through data analysis; some examples are below:
- If overall lead signup levels are consistent, has the split by source changed? (A larger cohort of Facebook signups, for example, could indicate lower-quality leads)
- Has churn increased? What was the ARPA (Average Revenue Per Account) of the churned accounts?
The more answers we find, the more questions we have, and it eventually leads to a spider’s web of questions and data which looks something like the below:
As we source the data from Google Analytics, our own database and ChartMogul, more questions inevitably crop up, and what started out as a simple two-question diagram (Has churn increased? Have paid plans decreased?) soon becomes a spider web of questions, with each answer leading only to more questions. Like any analytics project, we could spend all day asking why site behaviour has changed, so we begin the day by setting time targets for when we’ll convene and discuss our findings. There’s nothing worse than analysis paralysis.
We generally look to spend around 3–4 hours, or up until lunch, in the analysis phase of a ‘fire drill’ before moving on to the action phase. Looking at our whiteboard filled with questions and stats — backed up by a number of Excel spreadsheets — we begin to highlight the areas of concern to the business. Our trial-to-paid conversion rate, for example, may be down along with acquisition of leads.
Very rarely is there a silver bullet, or a single driver of a bad day. It’s usually a combination of factors, some of them out of our control. In the travel industry (where Secret Escapes operated), weather played a big part in daily performance, as did the time of the month and how far away pay-day was. For B2B SaaS businesses, financial years and holiday periods can also play a part in seasonality.
Having selected the metrics we think we can improve — usually around three — the team will again whiteboard ideas on actions we can take. At the end of this session, the whiteboard looks something like this:
Chock-full of ideas, some crazier than others, on how we can improve the business. To finish the session we take a vote on the ideas, with each team member receiving five votes to distribute freely across the board. The number of ideas we choose to work on depends on the severity of the issues we’re experiencing: the darker the picture, the more ideas we’ll work on to turn things around. The ideas with the most votes then get worked on — a democratic way of choosing what we work on as a group, and one which hopefully builds some camaraderie.
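The voting step above amounts to a simple dot-vote tally. A minimal sketch, where the ballots and idea names are entirely made up for illustration:

```python
# Dot-voting sketch: each team member distributes 5 votes across the
# whiteboard ideas (stacking votes on one idea is allowed), and the
# top-voted ideas form the shortlist. All data here is illustrative.
from collections import Counter

ballots = [
    # one list per team member, five votes each
    ["exit-intent popup", "exit-intent popup", "pricing copy", "referral scheme", "SEO pages"],
    ["pricing copy", "pricing copy", "referral scheme", "SEO pages", "exit-intent popup"],
]

tally = Counter(vote for ballot in ballots for vote in ballot)
# severity of the slow period decides how many ideas make the cut
shortlist = [idea for idea, _ in tally.most_common(3)]
print(shortlist)
```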
The remaining ideas go into our ideas spreadsheet, which gives us a more scientific (but less fun) way of choosing future experiments. We prioritise these based on effort, potential, and the importance of the metric they’re moving. Our marketing team then chips away at this list, with less urgency, during regular times.
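One way the spreadsheet prioritisation could work, as a hedged sketch: the post names the three factors (effort, potential, importance of the metric), but the 1–5 scales, the scoring formula, and the example experiments below are our own illustration, not the actual EmailOctopus sheet:

```python
# Illustrative prioritisation score: reward potential and metric importance,
# penalise effort. Scales and formula are assumptions, not the real spreadsheet.
def priority_score(effort: int, potential: int, importance: int) -> float:
    """Higher is better; all inputs on a 1-5 scale, effort counts against."""
    return (potential * importance) / effort

experiments = [
    # (name, effort, potential, importance) -- made-up examples
    ("Rewrite onboarding emails", 2, 4, 5),
    ("Launch referral scheme", 5, 5, 4),
    ("Tweak pricing page copy", 1, 3, 3),
]

ranked = sorted(experiments, key=lambda e: priority_score(*e[1:]), reverse=True)
for name, *_ in ranked:
    print(name)
```

Dividing by effort naturally surfaces the quick wins, which matches the post’s later observation that the biggest winners are often the simpler, quicker tests.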
Working on and launching our ideas
With the ideas chosen, they’re then picked up and worked on by whoever is best placed to do so. We’re not opposed to taking Jonathan, who runs all things technical, away from working on features if we feel his skillset is required to help make a push in growing the business. Most often, though, the ideas can be tested in a semi-technical MVP (Minimum Viable Product) fashion. The simpler the implementation of the experiment, the less attached we are to it and the easier it is to throw away if it doesn’t work.
Once built and launched — ideally within a matter of days — the success of each experiment is then tracked. We use different tools for monitoring each experiment, and the method of tracking success is always considered during implementation. On-site conversion pieces we’ll try to A/B test, if they sit in a high-enough-traffic part of the website. For SEO experiments, we’ll monitor organic search traffic using Google Analytics and the terms we rank for using SEMrush.
At EmailOctopus, despite our detailed analysis, we often run unsuccessful marketing experiments — it’s the very nature of trying out new ideas. An example of an unsuccessful experiment is above; sometimes the biggest winners are the simpler, quicker tests. Irrespective of results, from each fire drill we learn more about our customers and about EmailOctopus. We learn what makes the business tick and the levers we can pull to generate customers, grow revenue and reduce churn, in both happier and tougher times. Taking a potentially negative situation and working through it as a team helps build an experimental culture — a culture where we’re not afraid to try ‘risky’ things in the pursuit of a growing business.