As a growth hacker, you need to cultivate a sense of scientific curiosity. Almost every element of your marketing strategy or business plan can—and should—be viewed as an experiment, with the data guiding your next move. Even lackluster or failed experiments tell you a lot about what’s working, what’s not, and what direction to try next.
The epitome of a growth hacker’s scientific mindset is A/B testing (or “split testing”). In this model, you divide your audience or customer base into two groups, with one acting as a control group. Each group receives a distinct version of your email campaign, landing page, discount offer, or the like, and you carefully track each version’s performance. By looking at the results, you can tweak your strategy as needed, then go on to test another element.
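The split described above can be sketched in a few lines. This is a minimal illustration, not a production system: the function name and the experiment label are hypothetical, but hashing a user ID is a common way to get a stable 50/50 split, so the same person always sees the same variant.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "subject-line-test") -> str:
    """Deterministically assign a user to group A (control) or group B.

    Hashing the user id, salted with the experiment name, yields a
    stable split: the same user always lands in the same group, and
    different experiments divide the audience independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the assignment is deterministic, you can recompute a user's group at any time instead of storing it, and changing the experiment name reshuffles the audience for the next test.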
A/B optimization testing measures how changes to a minor element of an existing process affect customer engagement. The goal is to optimize the impact of that process, getting the best results at the lowest cost to your company. Most frequently, A/B testing compares how different versions of one digital marketing campaign perform; for example, the two groups see two different designs of a website, or receive two variants of the same email blast.
Typically, optimization tests compare the results—measured in click-throughs, email opens, social engagement, conversions, or by some other metric—of minute changes to design, wording, or visual elements. Does a long email subject or a shorter one result in more opens? Does a green button or a red button get more clicks? Does a Facebook post with an image get more likes than one without?
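When the counts come back (say, opens or clicks per group), you need to know whether the difference is real or just noise. A standard way to check is a two-proportion z-test; the sketch below uses only the Python standard library, with illustrative numbers rather than real campaign data.

```python
from math import sqrt, erfc

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is B's conversion rate different from A's?

    conv_a, conv_b: number of conversions (opens, clicks, sign-ups)
    n_a, n_b: number of people in each group
    Returns (lift, p_value); a small p-value (e.g. < 0.05) suggests
    the difference is unlikely to be chance alone.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return p_b - p_a, p_value
```

For example, 100 conversions out of 1,000 in group A versus 150 out of 1,000 in group B gives a 5-point lift with a p-value well under 0.05, so the red button (or whatever changed) probably deserves the credit.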
Because the tests measure audience engagement or other results based on very small changes, with little or no cost for each test, companies may perform many thousands of A/B tests per year. The data from testing then informs the next campaign.
Structuring an A/B Test
Like any good experiment, A/B testing is rooted in the scientific method. As a refresher: ask a question based on observation and research, and make a hypothesis about the answer. Next, formulate testable predictions, collect data, and use the data to measure the accuracy of your hypothesis. Growth hacking (and any digital marketing) adds a step here: tweak your test based on results. Then retest, retest, retest.
Once you have come up with a question and a hypothesis—and done some background research into your audience’s current behaviors using analytics—it’s time to design your test. Some factors to consider:
- Changes Per Variation
How much will version B differ from the control version? You can make many changes from the A version to B, or (more typically) just one. A single change may take longer to show results, or may not show results as dramatic as a total overhaul, but it makes responses, and their causes, easier to track. When you make many changes at once, any one of them could be prompting the result you’re seeing.
- Metric for Data
How will you measure your results? If you are testing the effectiveness of an email campaign’s subject line, for example, it makes sense to use the metric of number of email opens. If you’re testing a change to an onboarding procedure, on the other hand, sign-ups or other forms of conversion provide your data.
- Test Scope
What is the scale and timeline of your test? When testing something small like a change in the wording of email copy, results will be evident very quickly. Use a tool like VWO’s A/B Split and Multivariate Test Duration Calculator or Evan Miller’s sample size calculator to determine either how many days you will run the test, or how many audience members you will include in your sample.
Who will be the subjects of your test? If you have a good sense of how your audience is segmented, you can run A/B tests on one specific part of your audience, or set up one portion as your A group and one as the B group.
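The sample-size question from the Test Scope step can also be answered with the standard formula those calculators use. The sketch below assumes the usual defaults of 95% confidence and 80% power (the 1.96 and 0.84 z-values); the function name is illustrative.

```python
from math import ceil

def sample_size_per_variant(baseline: float, mde: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough sample size needed per variant for a two-sided test.

    baseline: current conversion rate (e.g. 0.10 for 10%)
    mde: minimum detectable effect, in absolute points (e.g. 0.02
         to reliably detect a lift from 10% to 12%)
    Defaults correspond to 95% confidence and 80% power.
    """
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2) * variance / (mde ** 2))
```

Note how the required sample grows as the effect you want to detect shrinks: spotting a 2-point lift over a 10% baseline takes a few thousand people per group, while a 5-point lift needs only a few hundred. That trade-off is what determines how long your test must run.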
Strategic A/B Testing
Strategic testing applies the same principles of optimization testing to larger, more important elements of your marketing strategy, or even your entire business philosophy. If you are launching a new product, offering new pricing, or changing your branding strategy, the stakes of such a change are high. It makes sense to test this kind of big move with a small part of your audience first before deploying it across the board.
Strategic tests tend to be run much less frequently than optimization testing: the stakes in terms of profits and marketing budget are much higher, the moving pieces are bigger (a new brand strategy takes a long time to change, and then to see results), and more people are involved. But there are definite benefits to testing a strategic change on a small target audience: it gives you a sense of whether the initiative will succeed with your entire consumer base, and helps you refine your approach if it’s not quite ready for prime time.
What You Can Do Right Now
A/B testing is a valuable tool for getting the most out of your strategy, whether on a small scale (optimizing an email campaign) or on a large one (a widespread sales promotion). Ready to give it a shot? Here’s how to get going.
- Start small. For your first A/B test, choose something low-cost and low-risk: a minimal change to your email blasts, social media postings, or website. Starting with one small change has the added benefit of showing definitive cause-and-effect results.
- Be very clear about the goal of your testing and about how you will measure results.
- Set a defined test period or number of subjects. Keep the scale appropriate to your experience with A/B testing, your resources, and the type of initiative you’re testing (bigger changes often take longer to show results).