The 3x Conversion Playbook by@barroncaster


Step-by-step instructions to increase customers and fuel your business.

I run the Growth Product Team at Rev: 1 Designer, 7 Engineers, and 2 Product Managers. We are responsible for building products to acquire new customers and increase revenue across Rev’s 6 businesses. The team has acquired over 100,000 customers, and we’re just getting started.

In the past 18 months, we have tripled the conversion rate for 3 separate Rev-owned services — Rev (On-Demand Human Audio Transcription), Temi (AI Transcription Service), and MathElf (On-Demand Math Tutoring) — across three different mediums — Mobile Web, Desktop Web, and Mobile Apps.


Increasing conversion = more money for same cost!

I started the Growth Team at Rev and focused on one service line and one channel. At first, my work (landing page design and graphics, service information, copy, and checkout flow) tanked all relevant metrics. There was doubt brewing. By focusing on Growth as a process rather than a series of growth hacks (hat tip to Reforge), we saw the metrics turn and then skyrocket within a couple months.

After seeing success, I petitioned for more resources. I was turned down.

To prove the growth process I was building was not just random luck, I set out to do it again. This time, I called my shot, starting with writing out a playbook that anyone could run with.

At first, it was ugly.

I cut excess steps.

Added necessary ones.

Honed the process.

Applied the Feynman technique.

Then, I applied the playbook to a new service line and saw another 3x conversion improvement! It was gold, and I felt confident we could replicate our success.


What I am sharing here is the conversion playbook that you can apply to websites, mobile apps, or any business you’re trying to grow. The steps listed will help you work on the right projects. By being as user-centric as possible, you will see the best results.

This is an open book process. Please ask others for help—no work is done in a silo.

Step 1: Information Gathering (Understand the Problem)

Key Questions

  1. Where is the biggest opportunity in improving conversion?
  2. Which pages or steps have the highest bounce and exit rates?
  3. What information is lacking?
  4. What are people not engaging with?

Gather Your Data. Look at all sources of funnel information. If you’re not tracking your funnel, you won’t be able to accurately measure changes. Here are some of the tools we use:

Observe customer behavior; don’t assume it

  1. FullStory/AppSee: watch user sessions (pro tip: filter for ‘rage click’ to see frustrated users)

Understand your customers

  1. Who are they? What positions do they hold and at what sort of company?
  2. What is important to them? What are they trying to accomplish?
  3. Look at NPS results and past orders, and talk to customers

Step 2: Develop a hypothesis. Seek to understand the problem. Get ideas for improvements.

Talk to customers

  1. Intercom
  2. Telephone calls
  3. Emails
  4. NPS Survey
  5. Other surveys

Watch customers

  1. FullStory
  2. Talk to strangers
  3. Craigslist — bring in randoms to use your product. Great for general exploration
  4. Validately — remote user testing. Use for one-off questions.

Talk to experts

  1. Wherever you can find them
  2. Past examples: Online Geniuses Slack Group, Conversion Experts — Chris Neumann @ CRO Metrics, Joshua Bretag @ Blueprint Solutions

Copy Ideas from Successful Websites

  1. Dropbox, Box, Stripe, Square, Grammarly, Airbnb, Asana, Drift, etc.
  2. Depends on what you’re designing for — should always be relevant to the problem
  3. No need to start from scratch

Develop a hypothesis on what will change and why. The hypothesis should be tied directly to a metric and take the following form: The current state is ___. If we do ___, then ___ will happen to our key metric because ___.

Create a list of ideas based around your hypothesis and learnings.

Step 3: Prioritize Tests Using the ICE Framework (or a Similar Framework)

Score each test on 3 areas

  1. Impact, Confidence of Success, Ease of Implementation
  2. High = 3, Medium = 2, Low = 1

Rank the tests by score

Ignore anything below 7

  1. Before working on a 6, rinse and repeat Steps 1 & 2 to find great tests
  2. Small tests will likely not show statistically significant results in a short timeframe
  3. If necessary, look into batching a few 6’s together to create a bigger overall test
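The scoring, ranking, and cutoff steps above are mechanical enough to sketch in a few lines of code. The test names and ratings below are hypothetical, purely for illustration:

```python
# Sketch of ICE prioritization: rate each idea on Impact, Confidence
# of Success, and Ease of Implementation (High=3, Medium=2, Low=1),
# sum the ratings, rank, and ignore anything scoring below 7.

def ice_score(impact, confidence, ease):
    """Sum the three 1-3 ratings into a single 3-9 score."""
    return impact + confidence + ease

# (name, impact, confidence, ease) — hypothetical test ideas
ideas = [
    ("Simplify checkout copy", 3, 2, 3),
    ("Redesign landing page", 3, 2, 1),
    ("Add trust badges", 1, 2, 3),
]

scored = sorted(
    ((name, ice_score(i, c, e)) for name, i, c, e in ideas),
    key=lambda pair: pair[1],
    reverse=True,
)

# Keep only tests scoring 7 or higher
shortlist = [(name, s) for name, s in scored if s >= 7]
print(shortlist)  # → [('Simplify checkout copy', 8)]
```

Here only one idea clears the bar; the two 6's are candidates for batching into a bigger combined test, as noted above.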

Focus on high-impact wins

Prioritize small, quick wins over huge “big bang” improvements that may take a month to get in front of customers.

Bring the top tests to your peers and get feedback on the best ones. Pick one and proceed to the next step.

Step 4: Design Your Visual and Content Changes

Visual Design

  1. Create rough mock-ups showing what you are changing and how (I use Skitch)

Nail the copy

  1. Appeal to the customer and their sense of value
  2. Simplify. Take the words you want to say and cut them in half. Then cut them in half again
  3. Make it honest and believable

Ask yourself, is this valuable for the user? Will they benefit from this information?

Get v1

Iterate until completed

  1. Show to other people (product/design/management/strangers) and get feedback
  2. Incorporate feedback with all of your other inputs
  3. Look at more best-in-class examples
  4. Repeat

Get sign-off on the test

Step 5: Final Checklist Questions

  1. With the resources available, is this the best test we can be running right now?
  2. Is this the most efficient way to test the hypothesis?
  3. Is our test instrumented correctly? Are we collecting the right data?
  4. What is the right amount of data we need to gather for this test to be valid?
  5. What do you expect the results to be and why?
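Question 4 — how much data the test needs — can be estimated up front. Below is a rough sketch using the standard normal-approximation formula for comparing two proportions (two-sided 95% confidence, 80% power); the baseline rate and lift are hypothetical:

```python
import math

def sample_size_per_arm(base_rate, lift):
    """Approximate visitors needed per variant to detect an absolute
    conversion lift over base_rate, at two-sided alpha=0.05 (z=1.96)
    and 80% power (z=0.84), using the two-proportion formula."""
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    p1, p2 = base_rate, base_rate + lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / lift ** 2)
    return math.ceil(n)

# Hypothetical: detect a lift from 5% to 6% conversion
print(sample_size_per_arm(0.05, 0.01))
```

The takeaway: small lifts on small baselines need a lot of traffic (thousands of visitors per arm here), which is why low-scoring tests rarely reach significance in a short timeframe.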

Step 6: Run Your Test

  1. Confirm test with your team/key stakeholders
  2. Prioritize the test in the engineering backlog
  3. Run the test until you meet the appropriate (predetermined) threshold of data
  4. Look at the results

Step 7: Results

Measure whether your test (challenger) had a higher conversion rate than the current design (champion).

Declaring a winner

  1. 1st method: challenger > champion with >=95% statistical significance
  2. 2nd method: challenger > champion and provides value for users
  3. Value = information, context, etc


Declaring a loser (keep the champion)

  1. Challenger <= Champion
  2. Challenger slightly > Champion, but not statistically significant or better for users
  3. Purely aesthetic changes or small copy changes
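The >=95% significance criterion in the first method can be checked with a one-sided two-proportion z-test. A minimal sketch with hypothetical conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """One-sided z-test: is the challenger's (B) conversion rate
    higher than the champion's (A)? Returns the z statistic and the
    one-sided p-value computed from the normal CDF."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # 1 - Phi(z)
    return z, p_value

# Hypothetical: champion converts 500/10,000 (5%),
# challenger converts 600/10,000 (6%)
z, p = two_proportion_z(500, 10_000, 600, 10_000)
print(z, p)  # p < 0.05 → challenger wins at >=95% significance
```

If p comes out above 0.05, you are in "declaring a loser" territory even when the challenger's raw rate looks higher, which is exactly the second loser condition above.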

Step 8: Analysis

  1. Was our hypothesis correct? Why or why not?
  2. What is the major takeaway?
  3. Are there any common learnings with our other tests?
  4. Does this influence the next test we should run?

ALWAYS communicate results and learnings from the tests. Even the losers.

You lose credibility if you only talk about the wins and gloss over losses.

Final Notes about Rev’s Growth Team

  1. The Growth team is hypothesis- and data-driven
  2. We believe in constant experimentation. Always Be Testing
  3. We expect most tests to fail. The test failed, you didn’t fail
  4. We don’t need to test everything (e.g. button colors)

If you are working on Growth or have questions about the above, feel free to reach out: [email protected]

To see specific examples run through this process, click “Follow” to subscribe

