When it comes to A/B testing, it is important to have a distinct hypothesis behind each test. The objective of A/B testing is not just better open rates, click-throughs, or conversions this time round; it is part of a never-ending process of understanding your audience better. It is not about throwing random things at the wall to see what sticks, but about testing the assumptions you hold about your subscribers. A/B testing can tell you which variant works better in comparison, but the variants you create to test still come from your marketing instincts and strategy.
Each successful A/B test adds to your accumulated knowledge about your subscribers, helping you create better content and messaging and, eventually, increasing the ROI of your marketing efforts.
Once you have a broad idea of what you are going to test, it is important to test one thing at a time. It might be tempting to mix variables, e.g. trying different subject lines and sending those emails at different times, but DO NOT give in! If you do, it will be hard to draw any useful conclusions from the data, and your tests will be hit-or-miss at best.
Below are a few elements worth experimenting on when you are doing A/B testing:
Depending on your email client, the first thing you see when an email lands in your inbox is either the sender’s name/email address or the subject line, followed by an optional preview text.
There are 3 options when it comes to the sender’s name —
Which should you use? Well, it depends on the context. Using a personal name might give you a better open rate at the start, but if you are mainly sending emails resembling mass newsletters, there might be a disconnect that undermines your messaging, resulting in lower click-throughs. As a general rule of thumb, try to meet expectations, i.e. send conversational emails under your personal name and marketing emails under your company’s name, unless there is a strong reason to do otherwise. When in doubt, you can go for option 3, i.e. both.
The element you can really experiment on in A/B testing is the subject line, and we recommend that the preview text support it accordingly. It is important to step back a little and decide on the pattern you are testing for, rather than making small copy changes just for the sake of testing.
For example, you might be keen to find out whether a ‘cliff-hanger’ type of subject line or one that conveys urgency has a better open rate with your email audience. You can go with something like these:
By comparison, if you use the subject lines below for your A/B test, the difference will probably be negligible since both primarily inspire urgency.
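Once the results are in, it helps to check whether the observed difference in open rates is large enough to be meaningful rather than random noise. Below is a minimal sketch using a standard two-proportion z-test; the function name and the numbers are purely illustrative, not from any particular ESP.

```python
from math import sqrt, erfc

def open_rate_z_test(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test on open rates.

    Returns (z, two-sided p-value); a p-value under 0.05 is the
    conventional threshold for a 'real' difference.
    """
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Illustrative numbers: 220/1000 opens for the cliff-hanger line,
# 180/1000 opens for the urgency line.
z, p = open_rate_z_test(220, 1000, 180, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The practical takeaway: with a few hundred sends per variant, small copy tweaks rarely reach significance, which is another reason to test distinct patterns rather than minor wording changes.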
Check out this guide for a list of subject line patterns!
Note: Preview text is usually generated from the preheader (essentially the first text element in your HTML email), but it can be explicitly defined with a hidden text element at the top of your email. For more information, look at this ultimate guide.
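For reference, the hidden text element mentioned above is often implemented as a snippet like the one below. This is a common community pattern, not a standard, so rendering can still vary across mail clients.

```html
<!-- Hidden preheader: appears as preview text in most inboxes
     but stays invisible in the rendered email body. -->
<div style="display:none; max-height:0; overflow:hidden; mso-hide:all;">
  Your preview text goes here&nbsp;&zwnj;&nbsp;&zwnj;&nbsp;
</div>
```

The trailing non-breaking spaces and zero-width non-joiners are a common trick to pad out the preview so clients do not pull in unrelated text from further down the email.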
Your subscribers can respond differently based on the time of day. While you have no control over when they will read your email (although you can have dynamic images that change with the time of day, but that’s a discussion for next time), you can control when the email reaches your subscribers’ mailboxes. If your subscribers are international, your ESP might have an option to send emails at a time based on each recipient’s local timezone (e.g. with Timewarp by Mailchimp).
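If your ESP does not offer timezone-based delivery, the underlying idea is simple to sketch: convert the desired local send time in each subscriber's timezone into the UTC timestamp at which to dispatch. The helper below is a hypothetical illustration, not any specific ESP's API.

```python
from datetime import date, datetime, time
from zoneinfo import ZoneInfo  # Python 3.9+

def utc_send_time(send_date, local_time, subscriber_tz):
    """Return the UTC timestamp at which to dispatch an email so it
    arrives at `local_time` in the subscriber's timezone."""
    local_dt = datetime.combine(send_date, local_time,
                                tzinfo=ZoneInfo(subscriber_tz))
    return local_dt.astimezone(ZoneInfo("UTC"))

# 9am local on 1 June for two subscribers in different timezones:
for tz in ("Asia/Singapore", "America/New_York"):
    print(tz, utc_send_time(date(2024, 6, 1), time(9, 0), tz))
```

Note that the conversion must be done per date, not once per timezone, because daylight-saving shifts change the UTC offset over the year.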
While there are already many studies on the best time of day or the best day of the week to send emails, you can’t assume that these findings apply directly to your users. After all, the customers of women’s fashion are vastly different from the consumers of video games, and the only way to identify what is relevant to your subscribers is A/B testing.
Again, you need to have some hypotheses, even if they are simplistic at first. For example, if you run a weekly email newsletter on the local arts scene, you might hypothesize that the best time to send it is
Both seem valid on the surface, and the only way to test them is through A/B testing, perhaps coupled with qualitative methods like surveys.
Design is not art — each design decision should ideally aim to achieve specific objectives rather than reflect personal preferences, but traditionally it has been expensive to test the assumptions behind these decisions. With advanced email analytics that can measure engagement (the amount of time spent reading the email), we can evaluate design variations on metrics beyond just click-throughs and conversions.
For example, in the fashion email marketing world, we often see image-based emails with very dynamic layouts and niche typefaces to match the lookbook of the season:
Such a layout is impossible to reproduce exactly as a mixed text-and-image HTML email, given the custom fonts and rotated text that mail clients do not support. And there are pitfalls to using image-only emails, e.g. default image-blocking behaviours.
Even with these pitfalls, however, we can arguably say that the unique value these lookbook images add to the Love, Bonito brand outweighs the disadvantages. It would be interesting to compare, in terms of conversions, such a lookbook style against a more standard grid like the one below. Or perhaps there is a way to design a season-appropriate lookbook style while working within the basic principles of email development and the caveats of mail clients in their current state.
Or the A/B testing can be something simpler, like comparing a 3 x 4 grid versus a 2 x 6 grid. In terms of design principles, a 3 x 4 grid communicates variety better (good for a discounted marketplace of well-known products?) than a 2 x 6 grid which allows for a better view of the individual images (better for an independent brand with impeccable images?). As to whether expressing variety is better for your particular audience and for your specific purpose, the best way to gauge is via A/B testing.
Of course this means that more effort is required to build these variations. (And that’s why we have Mailworks in the first place so that your team doesn’t go crazy building and testing all these variations!)
Do you agree on the importance of having hypotheses in A/B testing for email marketing? What other elements do you test in your A/B tests? Let us know in the comments!