The 9 Best Optimizely Alternatives in 2025

Optimizely got me started on A/B testing.

Back in 2014, I spun up a free account and ran some really bad experiments on my personal website.

Then, I went on to run experiments at CXL, HubSpot, Workato, and several other large clients. I became an absolute nerd for experimentation strategy.

In that time, I’ve used several Optimizely alternatives.

What is Optimizely?

Optimizely is a leading experimentation company that offers its Digital Experience Platform (DXP) as software as a service.

The platform is designed to help businesses manage their content lifecycles and test and optimize every customer touchpoint. Millions of experiences are served with the platform, and it is known for its content, commerce, optimization, and personalization capabilities.

The company was founded in 2010 by two former Google employees, Dan Siroker and Pete Koomen. They started out as a simple A/B testing tool; in fact, they marketed themselves on how simple and straightforward it was to run a split test.

Optimizely has since diversified its platform to cover broader digital experiences, including mobile applications, feature flags, and other touchpoints.

Despite its success, Optimizely’s platform is not without its downsides.

One downside is that it can be quite pricey, which limits its accessibility to small and medium-sized businesses. Nowadays, there are many Optimizely alternatives with similar features (some even more feature-rich) that cost much less.

Editor’s note: I’m going to use some affiliate links when possible to try to earn some revenue from my content. These don’t change the opinions espoused in the content nor the style in which they are written. If I think a product sucks, I’m not going to say otherwise. This is just a bonus and a way to fund the whole operation. Anyway, enjoy the article!

The 9 Best Optimizely Alternatives

  1. VWO
  2. Convert
  3. Adobe Target
  4. SiteSpect
  5. Mutiny
  6. Conductrics
  7. Unbounce
  8. Crazy Egg
  9. AB Tasty

1. VWO (Visual Website Optimizer)

VWO is undoubtedly one of the best Optimizely alternatives on the market, with a comprehensive suite of tools that you can use to optimize your website’s conversion rate.

The VWO platform includes A/B testing, website personalization, heat maps, session replays, form analytics, and user behavior analysis.

They’ve really got the whole suite of conversion rate optimization tools, so you could probably replace the need for several tools (HotJar, Fullstory, Qualaroo, Typeform, etc.) if you use VWO. They also offer a direct integration to Google Analytics.

The session recordings are particularly interesting, letting you watch users interact with your website and marketing campaigns.

VWO has an intuitive user interface that makes experimentation easy and fast. Whether you’re a beginner or an experienced optimization professional, VWO has everything you need to help you achieve your goals.

Since Google Optimize was sunset, VWO came to the table with a freemium offering to replace it.

They do lack some of the power that Optimizely offers, especially around product management use cases like feature flagging. They also only offer Bayesian statistics (which may be a benefit to some), and their personalization capabilities aren’t incredibly strong.

But overall, it’s an awesome A/B testing tool, especially for beginning optimization teams and those on a budget (it’s one of the more affordable Optimizely alternatives).

Price: Starts free. Next plan begins at $200 per month, billed annually

G2 Score: 4.3/5

2. Convert

Convert.com (Convert Experiences) is another excellent Optimizely alternative, one of my favorite experimentation platforms overall, that offers a range of conversion rate optimization tools, including A/B testing, segmentation, personalization, multivariate testing, and multi-page experiments.

One thing I like about Convert is that the founder and the team genuinely care about the conversion rate optimization industry (and the environment). Their customer support is absolutely top notch (their reps help you actually succeed in using their software). They also offer courses (CXL Institute) as well as access to Goodui.org for test ideas.

As for the platform, it’s both strong as well as easy to use for a broad swath of website optimization cases.

They allow for a WYSIWYG visual editor as well as custom coding, and you can easily integrate with Google Analytics or another website analytics tool to both analyze your experiments and campaign performance, as well as trigger personalized digital experiences using analytics dimensions.

This is also one of the most affordable Optimizely alternatives.

Price: Starts at $99/mo

G2 Score: 4.7/5

3. Adobe Target

Adobe Target is another great Optimizely alternative. They are the prototypical enterprise solution in the experimentation space.

True to their positioning, they really are one of the most powerful platforms in the space (especially when combined with Adobe Analytics). They enable omnichannel experimentation and personalization, AI-powered automation, and all the regular A/B testing and multivariate testing you’d expect from an Optimizely alternative.

This platform is best for large teams with large budgets. It allows for website experimentation, but also for email experimentation and mobile and product optimization.

Personally, I haven’t used Adobe Target as much because it is so enterprise and pricey, but if you’re in that tier, it’s probably the most common Optimizely alternative for enterprise companies.

Price: Talk to sales homie

G2 Score: 4/5

4. SiteSpect

SiteSpect is another great Optimizely alternative, a supremely underrated one in my opinion. They’ve also been around since the early days of conversion rate optimization.

This platform includes A/B testing, multivariate testing, behavioral targeting, and personalization features. They were also one of the first platforms to offer server side testing.

They’re a true Optimizely alternative, offering feature management and rollouts as well as AI-driven product recommendations and data insights (including real-time web analytics to flag potential issues).

Additionally, SiteSpect offers analytics tools to help you gain insights into user behavior, as well as customer segmentation and reporting features.

Price: Talk to sales

G2 Score: 4.3/5

5. Mutiny

Mutiny is an Optimizely alternative that enables you to optimize your website’s conversion rate by personalizing the user experience.

At first, they were basically just a personalization platform with a little bit of B2B flair (offering stuff like ABM personalization).

Now, they’ve incorporated artificial intelligence into target recommendations (they joined the generative AI hype) and they just rolled out their own A/B testing functionality.

They’re super focused on B2B, so if you run an ecommerce store, you probably don’t want to use this one. But if you’re a B2B company looking to optimize the customer journey, from Google Ads to retention, this is probably the best alternative to Optimizely.

It’s probably best for big companies, but even startups get a lot of utility out of Mutiny.

Price: Talk to sales

G2 Score: 4.7/5

6. Conductrics

Conductrics is an Optimizely alternative that provides A/B testing, multivariate testing, on page surveys and personalization capabilities.

Conductrics lets you run an A/B test and personalize a segment based on how users behave in the experiment. Unlike other AI-powered personalization tools, they give you complete control and governance over your personalization arms.

They’re one of the most powerful tools in the game, and they have been since the beginning. They were cool before the AI hype, offering reinforcement learning and bandit algorithms to dynamically serve experiences to site visitors.

Of course, they offer straightforward A/B testing to optimize your conversion funnels, just like any experimentation platform would.

Despite all that power, it’s still an easy-to-use platform, even offering a visual editor.

They’ve begun to add different features beyond the ability to run tests, like the ability to send messages with on page surveys to your audience to collect qualitative data.

Price: Talk to sales

G2 Score: NA

7. Unbounce

Unbounce, traditionally a landing page builder, is a single platform that allows you to create high-converting landing pages without any coding knowledge.

Because of the simplicity in creating landing pages, it’s great for testing new ideas and conversion funnels. The design elements it gives you out of the box are great, allowing even a terrible designer like me to create pretty looking pages.

The platform includes a wide range of pre-designed templates, making it easy to get started, along with an A/B testing feature that lets you test different landing pages to see which one works best (showing you statistical significance levels in the analytics dashboard).

Additionally, Unbounce offers a suite of website optimization tools, including popups and sticky bars to help you improve your website’s conversion rate. They also offer multi-page testing and customer journey funnels.

They’ve added a bunch of AI copywriting features as well, helping you write copy targeting your ICP (ideal customer profile) and even creating a full landing page just by using a URL and headline.

As an Optimizely alternative, it’s not the most powerful. It lacks additional features like full stack experimentation and testing on mobile apps. It’s best for small businesses and customers who want to scale out paid acquisition campaigns with additional landing pages.

Price: $99 / mo

G2 Score: 4.4/5

8. Crazy Egg

Crazy Egg is a web-based analytics and optimization tool that promises to help businesses boost their website’s performance.

It provides heat maps, A/B testing, scroll maps, and other tracking tools that marketers can use to understand how users interact with their websites.

I’m actually not a Crazy Egg fan. I only put it on this list so it is more comprehensive and has an attempt at ranking in search. That’s the game, unfortunately.

Maybe if it were 2015 I’d be a bigger fan, but today, Crazy Egg is pretty weak as an Optimizely alternative.

The tool itself is quite difficult to navigate and clunky to use compared to other user analytics and behavioral analytics tools, making it less user friendly than some of its competitors. HotJar is incredibly easy to set up in comparison.

Some of its features, such as the click analytics, don’t provide detailed enough insights due to the low-resolution images used in reporting. Their reports also lack the level of detail offered by AI-assisted tools like Fullstory and Microsoft Clarity.

Finally, the customer service experience isn’t great either; users have reported long wait times when trying to get support help.

As an A/B testing tool, it’s one of the weakest on this list. It doesn’t really even compare to Optimizely.

However, it does have A/B testing functionality. And it’s relatively cheap compared to other tools. So if you have to use Crazy Egg, you’ll probably still get some value out of it.

Their blog is also basically just an affiliate engine at this point. Looks like they stopped doing feature releases years ago. Ugh.

Price: Starts at $29/mo (paid annually)

G2 Score: 4.2/5

9. AB Tasty

AB Tasty offers AI-powered experimentation and personalization, feature management, and product optimization solutions to help businesses drive more conversions and revenue on their website and product / mobile apps.

With 10 offices around the world and over 240 employees, AB Tasty is a fast-growing company that’s helping enterprises launch better products faster and drive more business.

Along with their A/B testing tool, they also offer recommendation engines and intelligent search functionalities.

Additionally, AB Tasty combines feature management, experimentation, and personalization into a comprehensive web optimization platform for product managers and marketers alike.

Like any software, AB Tasty does have some downsides.

One user reported that the analytics reporting feature can be confusing and overwhelming, making it difficult to get an accurate view of website performance. On the plus side, it integrates easily with Google products like Tag Manager and GA4.

Overall, AB Tasty is a powerful and effective tool to help businesses optimize their digital experiences, and it’s certainly one to consider if you’re looking to improve your website and drive better results.

Price: Talk to sales (no free plan)

G2 Score: 4.5/5

Conclusion

Look, Optimizely is great, but it has its downsides. It’s quite expensive when you compare it to the myriad options in the market nowadays.

I’m a huge fan of VWO and Convert Experiences as well as Sitespect for a one-to-one best Optimizely alternative.

For something better at personalization and even stronger features, Conductrics and Mutiny are excellent. Dynamic Yield is another one in this category.

And for those on a budget, Crazy Egg or Unbounce might get the job done.

 

10 Persuasive Techniques to Increase Conversion Rate

Why do some funnels convert like crazy, and others fail to move the needle?

While there are many factors in the conversion optimization equation, you can boil a lot of it down to this: does your experience convince people to take the desired action?

One of the classic books in this field is one you’ve probably heard of. Dr. Robert B. Cialdini wrote “Influence: The Psychology of Persuasion,” and the ideas in that book remain heavily influential in how we approach copywriting and advertising today.

Over the years, many other marketing experts have built upon his work, and today, there are dozens of persuasive techniques from the field of persuasion psychology that have been proven to be effective in helping you create compelling copy that gets people to take action.

You can integrate this type of content into your website, blog, social media, email, ads, product promotion landing pages, and other marketing collateral.


1. The Principle of Reciprocity

First, let’s look at the principle of reciprocity.

People love to return favors.

This is why reciprocity is a widely used persuasion tactic to help increase your conversions.

When you do something for someone, they feel obligated to reciprocate. You can use this in your own business in a variety of ways, such as:

  • Giving something away on social media
  • Offering coupons to your email subscribers
  • Presenting exclusive offers on your website, etc.

The best part is, the things you offer don’t have to be costly. But, when you give someone something at no cost, they are a lot more likely to comply with your future request for making a purchase, joining your list, following you on social media, etc.

KlientBoost, a performance marketing agency, offers a “free marketing plan” as part of their sales process:


2. The Scarcity Principle

The scarcity principle is another common persuasive technique you’ve probably seen.

Studies have shown that people value things that are rare. The less there is of something, the more people want it.

From diamonds to limited Nike editions, they will flock toward anything that seems to be in short supply.

A lot of popular eCommerce businesses like Amazon, Etsy, Booking.com, etc., make use of this persuasive technique to get more people to make instant purchases on their sites, and you can too.


Here are a few ways you can leverage scarcity to increase your conversions:

  • Show the number of items left in stock (e.g. “Only 4 items left in stock”)
  • Sell limited editions (e.g. “Get one of 50 pieces available”)
  • Show a countdown timer to add urgency (e.g. “Get 30% off in the next 60 minutes”)

Thalita Ferraz, owner of popular fashion and beauty site HerBones.com, explains, “When I sell my eCommerce fashion products via social media, I always create a limited run of items to sell and embrace scarcity. This is an amazing tactic, and due to this, I’ve sold out of product at every product launch I’ve ever had.”

The scarcity tactic works whether you are selling experiences or material products, and by reducing availability to create a sense of scarcity, you will be able to increase your conversion rates.

Just avoid fake scarcity, which is a duplicitous dark pattern and can often backfire.

3. The Authority Bias

People have a tendency to put more weight on the opinion of someone in a position of authority. They also ascribe greater accuracy to (and are more likely to be influenced by) whatever that person says. The lab coat bias, if you will.

This is the authority bias. It’s something that was drummed into us from a young age when we were taught to respect authority, and it’s something you can use to compel more of your audience to convert into paying customers.

Here are a few persuasive writing tips to enhance your content using the authority bias:

  • Include quotes from niche/industry subject matter experts
  • Support your ideas with data and evidence
  • Reference source material for any stats you quote
  • Make yourself appear like the authority by using an online course platform to create eLearning content that frames you as the authority-educator in your niche

This “authority by association” method is very effective, but an even better approach is to work on building your own credibility and authority.

When you become known as an expert in your field, your ability to influence readers will rise significantly.

This also applies to trusted 3rd party review sites and analysts. Pretty much every B2B company has G2, Gartner, and Capterra logos on their site:


4. Commitment and Consistency

People tend to stick with whatever they have already chosen.

For the most part, they are consistent in their opinions, decisions, and actions because they want to believe they’ve made a good choice.

This means they are likely to continue along whichever path they have decided is the right one, which is great news for marketers because it means that if you can get a potential customer to agree to a small request, it will be easier for you to get them to agree to another, larger request later on.

You can put the principle of commitment and consistency to work in your own business in a number of ways.

For example, whenever someone first signs up for your email newsletter, you can send an automated email with an opt-in button to confirm that they want to receive your newsletters.

Additionally, you can embed an external link within the opt-in button that takes subscribers to another page with an actual opt-in form where they can fill in more information so that you know how best to market to them.

By navigating subscribers away from their email inbox to a dedicated form, you make it more likely that they will complete the action, and you collect valuable information about how they want to hear from you.

In this email from shoe-brand Greats, subscribers are able to easily isolate the opt-in button and click on it without having to navigate through a bunch of unnecessary text:


For maximum effect, Greats could use this opt-in button as a way to also navigate subscribers away from the email to a form where they can provide more information to the company about how they want to be reached and what they’re interested in being updated about.

You can also use pre-launch pages with a compelling call to action asking visitors to agree with a statement (e.g. “Yes, I want to increase my conversions. Let me know when [product/service] becomes available!”).

5. The Liking Principle

If you have ever purchased a product just because one of your favorite celebrities endorsed it, then you understand the “liking” principle which states that the more people like you, the more likely they are to agree to your requests. This is a great way to increase conversions on your website, email, or social media.

Here are a few tips and tricks to help increase the likelihood that people will comply with your requests:

  • Use a friendly and conversational style in all your copy
  • Include friendly photos of you or your team members on your homepage or landing pages
  • In addition to persuasive writing in your copy, add a photo of yourself in your email signature

The liking principle is a bit more ambiguous in nature because it first requires you to figure out which specific target audience you’re marketing to so you can appeal to their interests and tastes.

Generally speaking though, B2B brands can add more personality and humanity to their copy. Think about Mailchimp’s colorful design, copy, and imagery. It just seems friendlier than a stodgier enterprise Mailchimp competitor:


6. The Social Influence Factor

Also known as social proof, social influence is all about how people’s opinions, emotions, and actions are affected by others.

For the most part, when people we admire, or those who are similar to us, think or do something, we consider that behavior or thought pattern to be “normal,” which makes it likely that we will think or act in the same way as well.

In other words, people tend to look around to see what others are doing before making up their own minds, and this is something you can leverage in your business.

Here’s how:

  • Add reviews and ratings to your landing pages or product pages
  • Include testimonials with real people’s names and photos
  • Feature short client stories, testimonial videos or case studies on your home page
  • Showcase user-generated content in your advertising campaigns

You can also show the number of people who have used your product successfully (e.g. “Enjoyed by over 28,000 happy customers.”).


Alternatively, showcase social shares for your content to show how good your content is and entice others to share it, as well.

7. The IKEA Effect

Named after the popular furniture retailer, the IKEA effect was first described in 2011 and states that self-assembly has a huge impact on a customer’s evaluation of a product.

Simply put, this means that when someone builds something for themselves, they tend to value it more than if someone else had built it for them.

Here’s how you can put this concept to the test to help increase your website conversions:

  • Allow visitors to choose content downloads by industry, category, or another factor that pertains to them
  • Let customers “build” custom products before placing them in their cart

Check out Copyhackers’ blog search function, for example:


The idea of adding a more laborious process to get customers to value products more is a long-established marketing tactic. When used in content marketing, the IKEA effect can be a great way to subtly influence your readers and convince customers to love your product.

8. The Ellsberg Paradox

Named after Harvard economist Daniel Ellsberg, the Ellsberg Paradox came about after decision-making experiments showed that buyers are wired to avoid risk and will, in fact, go to great lengths to do so.

For you, as a marketer, it means this is something you will have to work on overcoming in your landing pages if you want to increase your conversions.

Here are some ways you can achieve this:

  • Use your persuasive writing skills to spell out your offer’s guarantees and warranties in detail
  • Be specific about what customers will receive from your offers, coupons, discounts, and deals.
  • Add detailed descriptions to your copy so readers know exactly what to expect after downloading your checklist, signing up for your online course, etc.

The more you can get visitors to understand your offer, the easier it will be to overcome their objections, which means you’ll ultimately increase your conversions.

This is crucial in B2B, especially for product led companies – what specific platform do they get to access? For how long? Many companies are vague on these details, leaving a question mark in the mind of prospects. “What’s the catch?”

Zapier does a great job with clarity and specificity on their pricing page:


9. The Mimicry Principle

As humans, we respond more positively to anyone who looks, acts or sounds like us. This is mimicry, and it has been shown to increase liking, rapport, and positive feelings.

You too can use the mimicry principle in your business to compel more visitors to take action on your site.

Here are a few genius ways to try:

  • Write your copy in the same way your customers speak. Match their tone and voice, jargon, etc. For example, if they use emojis, use them in your copy, as well. This requires customer research and using the voice of the customer in your copy.
  • Use images that look like the people you’re targeting. So, if your target customer is a 35-year-old yoga mom, then use those types of images in your marketing materials.

10. The Anchoring Bias

Also known as focalism, anchoring refers to the common tendency people have of relying too heavily on whichever piece of information was presented to them first whenever they have to make a decision.

Once an anchor is set, people then have a bias toward that value.

For instance, say you are shopping for a Ford Bronco with an MSRP of $30,800. You will feel great if you negotiate the price down to $28,000. But, if you first discovered that the car had an average selling price of $28,000, you would not feel so great, although you paid the exact same amount for your car.

Here are some ways to use the anchoring effect to persuade visitors and boost your conversions:

  • List your highest price first. So, if you have three pricing packages (as an example), present your most expensive one first.
  • Focus your content on the benefit your customer should measure you by (or your competitors) as a subtle persuasion technique.
  • Show “discounted from” prices in your emails in order to convince visitors to convert

The last tip is particularly effective because when customers first see a much higher price, they then tend to feel that the other, lower-priced offers are a more appealing deal, as shown in the example below from MailChimp:


8 Best Practices for Persuasive Writing

Your ability to influence human behavior depends on how well you can understand the way people are wired and what drives them to take action.

There are also some best practices to keep in mind with regard to persuasive writing. Below, I’ve listed some of the best tips (from my point of view) to help you hone your persuasive technique.

1. Create a Dialogue with First & Second Point-of-View

This is one of the most effective persuasive techniques.

Create a dialogue in your content by talking to your readers directly. Blending a first-person point of view (e.g. “I”) with a second-person point of view (e.g. “you”) creates a dialogue and is one of the best persuasive writing techniques for building trust and familiarity with your readers.

2. Integrate Third-Person Point of View for Authority

If you’re not the authority on a specific subject, you may want to use a third-person point of view to ensure that you are maximizing the authority bias in your persuasive writing.

While your high school teacher may have told you to stick with one point of view when writing essays, content writing is different. It’s fine to blend different points of view in your writing to ensure you build familiarity and trust with your audience while also using persuasive authority effectively.

3. Sell the Outcome

Instead of talking about yourself, your company, or your product’s features, let your readers know what’s in it for them.

Persuasive writing allows you to do this by positioning your product as the solution to their problems, needs, or goals.


4. Appeal to Emotion

Your persuasive writing should appeal to the reader’s emotions and senses. Use tastes, feelings, colors, and memories as a way to enhance your copy and make it resonate with your target reader while finding a way to connect this sensory language to emotions.

The point is that you want to make the readers feel something. It’s easy enough to say you have the “Latest X” or the “Greatest Y” and explain why. But this logical, rational approach is only so effective.

To really get people invested in what you’re selling, you need to make them feel something. This means you need to ensure that your copy appeals to emotion.

While the writing you use should aim to do this, the most effective way to appeal to the emotion of your site visitors, social media followers, and email subscribers is with images and videos. Even otherwise innocuous images and bland text can be paired together to generate maximum emotional impact.

This can be seen clearly in this advertisement from McDonalds:


While the words “happiness in a box” aren’t very emotionally moving on their own, placing them next to an apparent mother and son smiling and cuddling helps not just generate a random emotional impression but direct it exactly in the direction McDonald’s wants.

While the image itself might bring happiness to a lot of people, the fact that the words next to it explicitly include “happiness” tells you exactly how you should feel not just about the image but also about the McDonald’s brand and its Happy Meal products.

In other words, McDonald’s is strategically using images and text alongside its branded content to appeal to your emotional happiness and to direct that happiness toward McDonald’s and the items it wants to sell to you and your child.

5. Be Specific In Your Writing

The fewer open questions there are, the more likely you will be to convert someone into a paying customer. So make sure you find out your customer’s concerns and then create content to explain everything through blog posts, FAQ pages, etc.


6. Use Everyday Language

When crafting your copy, make sure you use human language in order to build stronger connections with the audience. Write as if to a friend in a tone that puts your readers at ease.

In a lot of fields, jargon is often used. For example, if you’ve ever been to the doctor for a checkup, you’ve probably heard a lot of long, scientific-sounding words that you had never heard before and could probably only half-understand.

But a good doctor knows how to translate their medical jargon into everyday language so you clearly understand the cause of any medical symptoms and how best to treat or cure them.

When attempting to create conversions, you have to act like that doctor. Don’t just speak in formal jargon that only people in your niche would understand. Not everyone is a doctor or lawyer or realtor or marketer, and they’re not going to understand the specialized language that is used by people in those professions.

For example, if I’m trying to sell you a subscription to a piece of email marketing software, I can tell you that integrating third-party email marketing plugins to automate the marketing processes can increase conversions and profit margin.

But if you’re not a digital marketer, that might all just sound like a foreign language. Instead of converting the lead, you might just end up confusing them.

It might be best to use everyday language and say that you can make more money by using this software and then offer reviews from normal everyday people who’ve used it.

The technical language only goes so far, but communicating with people as people while incorporating real social proof is usually the better and more persuasive route.

7. Include a Call to Action

This is table stakes, but your readers should know exactly what to do next and be able to find that CTA.

The art of writing a CTA goes beyond the scope of this post. But it should be prominent, relevant, and motivational. “Submit,” for example, is typically a bad CTA. It’s vague and uninspiring.

Look at Copyhackers’ CTAs, though – super-specific and inspiring:


8. Increase Readability

One of the best ways to make your copy more compelling is to increase the readability score of your content. You can use content writing tools like Jasper to rewrite your content to be more readable, or you can gauge readability with something like Hemingway.

This includes writing shorter sentences and paragraphs, making content scannable, avoiding the use of industry jargon, etc. This will help persuade readers, increase your ranking and ultimately boost your conversions.
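For instance, here’s a minimal sketch of how you might gauge readability programmatically. It assumes the open-source textstat Python package; the scores it produces are illustrative and aren’t the same as Hemingway’s grading.

```python
# Rough sketch: compare readability scores for a jargon-heavy draft vs. a plain rewrite.
# Assumes the open-source `textstat` package is installed (pip install textstat).
import textstat

draft = (
    "Integrating third-party email marketing plugins to automate marketing "
    "processes can increase conversions and improve profit margin."
)
rewrite = "This software helps you make more money with less manual work."

for label, text in [("draft", draft), ("rewrite", rewrite)]:
    ease = textstat.flesch_reading_ease(text)    # higher = easier to read
    grade = textstat.flesch_kincaid_grade(text)  # approximate US grade level
    print(f"{label}: reading ease {ease:.0f}, grade level {grade:.1f}")
```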

A Brief Summary of the Best Persuasive Techniques to Increase Conversions

There you have it: 10 persuasive techniques to increase conversion rate and a series of persuasive writing best practices to help increase conversion rate across all platforms that you and your brand are using.

Use each persuasion technique strategically to enhance your copy and get more people to take the appropriate action on your website, landing pages, social media, and across your entire digital marketing mix to ensure the highest ROI across all channels.

What other persuasive writing techniques are you currently using to convert more visitors to customers? Share your thoughts below!

Author bio:

Ron Stefanski is an online business expert and owner of OneHourProfessor.com, which has over 100,000 monthly visitors and helps others create and grow their own online businesses.

You can also connect with him on YouTube or Linkedin.

10 Ways to Wreck an Experimentation Program

I’ve written a lot about how to create an experimentation program, improve data literacy, and promote a culture of experimentation.

Let’s talk about the opposite: how to sabotage an A/B testing program.

Why Not Focus on How to *Build* an Experimentation Program? Why Focus on the Negative?

I love looking at things through a via negativa lens. Instead of thinking “what can I *add* to make this a success,” you think, “what can I subtract?”

When I play tennis, if I focus too much on fancy moves and backspin, I mess up and I lose. If I focus on not making errors, I usually win.

In diet and fitness, there are a million ways to succeed (supplements, 204 different effective diets and exercise programs, etc.). This leads to analysis paralysis and a lack of action. When starting out, it might be best just to focus on avoiding sugar and injuries while working out.

I believe that if you simply avoid messing up your A/B testing program, the rest of the details tend to fall into place.

And by the way, I’ve written enough articles about how to build a culture of experimentation. It’s easy enough to tell you to “get executive buy-in” from my writer’s vantage point. My hardest lessons have come through learning from the following mistakes, however.

Love this Nassim Taleb quote on lessons by subtracting errors:

“I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them.”

So here are some ways to destroy an experimentation program before it has a chance to take root (avoid them and watch your program flourish).

1. “Quick Wins” Forever

One of the strongest red flags during a sales call for my agency is when someone expects instant results with content marketing.

Instant results are rare, and the only times they happen are through sheer dumb luck or because a company already has a huge audience and foundations for content and SEO set up.

But those companies aren’t the ones who expect quick wins. It’s early stage startups who *need* ROI tie-back and fucking fast.

The problems with this are two-fold:

  • First, content as a channel is simply a long game. It’s the only way to really win it.
  • Second, there are tradeoffs to chasing quick wins over sustainable growth.

These problems also correspond to experimentation programs.

Quick wins are great. We all love quick wins, and they certainly help you get more buy-in for your program.

Here’s the problem with them:

  • They don’t last forever
  • They’re an opportunity cost for more complex and potentially higher impact projects
  • They set poor expectations for the value of experimentation

The first problem is easy to explain: unless your website is total shit, your “quick wins” will run out fairly fast.


One can run a conversion research audit and pick away most of the obvious stuff. Then you pick the likely stuff. Then you run out of obvious and likely stuff, and suddenly your manager starts wondering why your win rate went from 90% to 20%.

If we knew what would win, we wouldn’t need experimentation. We’d be clairvoyant (and extravagantly wealthy).

Experimentation, ideally, is an operating system and research methodology by which you make organizational decisions, whether it’s a change to your product or a new landing page on your website. Through iterative improvement and accumulation of customer insights, one builds a flywheel that spins faster and faster.


Certainly, there are “quick wins” from time to time — namely, from fixing broken shit.

Eventually, broken shit gets fixed and you hit a baseline level of optimization. “Quick wins” dry up, but the expectation for them is still alive. When reality and expectation diverge, disappointment ensues.

One must start with a strong understanding of the value of experimentation and an eye for the long term.

If you’re hired in an experimentation role, sure, index on the obvious stuff at first. But make sure your executive team knows and understands that fixing broken buttons and improving page speed have a limited horizon, and eventually, one must wade into the uncertainty to get value from the program (especially at scale).

Similarly, constantly chasing quick wins is an opportunity cost in many cases. By focusing only on well-worn patterns, you trade away larger experiments, investment in infrastructure, and the customer research that builds up new bases of knowledge.

Experimentation is like portfolio allocation, and some of it should be aimed at “quick wins,” but some of it should be bigger projects.

2. Rely on Industry Conversion Rate Benchmarks

Many misunderstand metrics when it comes to conversion rate optimization.

A conversion rate is a proportion. It depends both on the number of people that convert, but also, the number (and the composition) of the people that come to the website in the first place.

And that composition of people is fucking contextual.

Knowing that an average landing page conversion rate is 10% does nothing for your program. Even knowing that your closest competitor converts at 5% is completely meaningless information (and I mean *completely* — there is zero value in this).

Here you are, getting win after win and increasing your conversion rate month over month (I’ll talk later about the problems with that KPI), and then…bam! You’re hit with this industry report and you realize that, while your conversion rate has improved from 2% to 4%, the industry average is 8%. What now?


I’ll just copy and paste something from CXL’s blog here:

“The average conversion rate of a site selling $10,000 diamond rings vs an ecommerce site selling $2 trinkets is going to be vastly different. Context matters.

Even if you compare conversion rates of sites in the same industry, it’s still not apples to apples. Different sites have different traffic sources (and the quality of traffic makes all the difference), traffic volumes, different brand perception and different relationships with their audiences.”

Here’s Peep’s solution, which I agree with:

“The only true answer to “what’s a good conversion rate” is this: a good conversion rate is better than what you had last month.

You are running your own race, and you are your own benchmark. Conversion rate of other websites should have no impact on what you do since it’s not something that you control. But you do control your own conversion rate. Work hard to improve it across segments to be able to acquire customers cheaper and all that.

And stop worrying about ‘what’s a good conversion rate’. Work to improve whatever you have. Every month.”

Caring about your industry conversion rate is also an opportunity cost and a diversion.

Focus on learning more about your customers, running more and better experiments, improving your own metrics, and innovating on your own business.

3. Staff Your Crucial Roles with Mercenaries and Interns

This is so common that it has become a trope in the growth space.

Executive goes to a conference, hears the importance of growth, hires a growth person. Expects the world with no resource allocation.

I’m all for scrappiness, but one must calibrate expectations or face sure destruction of the program in the long run (and talent burnout, too).

This expectation flows to any experimentation-centric role.

That’s why it’s important not to hire an experimentation person too early. There’s a lot you need in place to get value from experimentation:

  • Traffic adequate for running experiments and getting ROI from the program
  • Data infrastructure adequate for tracking and quantifying effects
  • A tech stack capable of running experiments and integrating with your other tools
  • Design and development to actually run worthwhile tests

If you have none of that and expect your one sole experimentation person to fill in the gaps in all of those areas, you’re going to be disappointed in the results, and the experimentation person is going to leave, either through burnout or because they found a more serious organization.

Just look at how much a conversion rate optimization process entails:


Data is the heartbeat of experimentation. Get a professional to orchestrate your analytics.

Real designers and real developers open up new worlds when it comes to testable solutions. Don’t make your experimentation person hack together shitty javascript or just run copy tests all day.

Get serious with investment in the team, otherwise it will never hit escape velocity.

Or just augment your team with agencies. There are tons of good ones.

4. Document Nothing

I’ve had multiple experiences where I came into a company to run experiments, built out some hypotheses and a roadmap through customer research and heuristic analysis, and presented the plan.

“Oh, we’ve already run [X Test] before. It didn’t work.”

Alright, when was it run? What were the results? Do you have the statistics / creative / experiment doc?

“A few years ago. Nope, we don’t have any of that.”

Well, shit. What now?

Personally, I hate doing documentation. It feels like busywork, but it’s not.

Writing an experiment document out in advance helps you plan a test, from the statistical design to the creative to the limitations. Writing and storing the results helps you cement and communicate learnings at the time, and storing them in an archive or knowledge base helps everyone else (including you) remember what you tested and learned from the test.

And if you’re not learning from your tests — wins, losses, inconclusives — you’re not really testing.

And if you have any hopes of scaling your experimentation program you’ll eventually hire new people. Help them out. They weren’t here 4 years ago and have no idea what was tested. Give them an Airtable table or something (better yet, use a tool like Effective Experiments).
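If you want something lighter than a dedicated tool, even a structured record per experiment goes a long way. Here is a hypothetical sketch of what such a record might capture; every field name and value below is illustrative, not a standard schema.

```python
# Hypothetical experiment record; field names and values are made up for illustration.
# Store records like this in Airtable, a spreadsheet, or a dedicated knowledge base.
experiment_record = {
    "id": "EXP-042",                                  # made-up identifier
    "name": "Pricing page: annual-first toggle",
    "hypothesis": "Defaulting to annual billing increases paid signups",
    "primary_metric": "paid_signup_rate",
    "guardrail_metrics": ["average_order_value", "refund_rate"],
    "start_date": "2021-11-01",
    "end_date": "2021-11-29",
    "planned_sample_size_per_variant": 40_000,
    "status": "concluded",
    "decision": "no ship; result inconclusive",
    "learnings": "No drop in AOV; retest with explicit discount framing",
    "assets": ["link-to-screenshots", "link-to-analysis-notebook"],
}
```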


Similarly, if marketers are testing something, product managers could learn something from that test. If you’re not documenting it, you’re probably doing redundant work.

The tools are free, the templates are available. If you don’t use them, you’re just lazy.

5. Goal (Only) On CVR Increases and Winning Tests

I’ll admit, experimentation program goals and KPIs are hard to determine.

Goal setting in general is hard. When setting goals, I look for KPIs that are:

  • Useful versus overly precise and complex.
  • Not easily gamed.
  • Not burdened with strategic trade offs.

Imagine a sales team goaled only on meetings booked.

Well, in that case, the metric is useful and clearly discernible, but has clear strategic trade offs. One can easily book a ton of worthless meetings that end up eating up sales reps’ time but produce no actual sales or ROI.

In experimentation, two of the most common metrics used to judge a program are:

  • Increases to baseline conversion rate
  • Number of winning tests

For the first metric, there are many, many problems.

Imagine your website conversion rate is 5%. We’ll ignore the fact that conversion rate data is non-stationary and might fluctuate by a percentage point depending on the month or season.

Now imagine your company raises $100 million. You’ll probably get a lot of media attention from Hacker News, Tech Crunch, Wall Street Journal, whatever. This will result in traffic, let’s say an extra 50,000 visitors.

These 50,000 visitors convert at 1/10 the rate of your normal traffic. This lowers your baseline conversion rate despite winning every test that quarter (also unlikely, we’ll get to that).

Because your conversion rate decreased, is your program failing?

Fuck no. You’re doing well. You’re winning tests, incrementally moving the needle. All that press actually brought marginally more leads, but at a cost to the proportion metric. It’s an ‘external validity factor,’ a confounding variable. It shouldn’t make executives disappointed in the wins you’ve gotten.
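To make the dilution concrete, here’s a quick back-of-the-envelope sketch of that scenario. The baseline traffic figure is an assumption added for illustration; only the 50,000 extra visitors converting at one-tenth the rate come from the example above.

```python
# Back-of-the-envelope version of the press-spike scenario above.
# Baseline traffic of 100,000 visitors/month is an assumed figure for illustration.
baseline_visitors, baseline_cvr = 100_000, 0.05   # normal traffic at a 5% conversion rate
press_visitors, press_cvr = 50_000, 0.005         # press spike converting at 1/10 the rate

total_conversions = baseline_visitors * baseline_cvr + press_visitors * press_cvr
blended_cvr = total_conversions / (baseline_visitors + press_visitors)

print(f"Conversions: {total_conversions:.0f}")        # 5250 -- more absolute conversions
print(f"Blended conversion rate: {blended_cvr:.2%}")  # 3.50% -- lower proportion metric
```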

Similarly, you could probably increase your conversion rate by turning off campaigns and traffic sources that are producing lower than average conversion rates, at the cost of the leads those campaigns bring in. Is this a good thing? Nope. It’s costing your business.

Now, winning tests. Better metric because it has more signal and less noise than conversion rate increases.

However, winning tests have an incentive problem. Namely, if you’re only incentivized to produce winning tests, and losers are punished, two emergent behaviors are likely:

  • You’ll test “safer” items — “low hanging fruit” — at the cost of more innovative and risky experiments
  • In some cases, teams will cherry pick data and run tests in a way that makes them appear as “winners”

The latter can be mitigated through good processes and guardrails (i.e. setting uncertainty thresholds and experiment QA checklists, having an independent analysis with different goals, etc.).

But the first is a real concern, especially after you’ve passed the point of adequate optimization. How do you move the needle when you’ve already fixed all the broken shit? Well, you have to try some riskier shit. Which means some tests will lose big. And you have to be okay with that.

After all, that’s a core value of experimentation. You limit the downside, which enables uncapped upside through risk mitigation. A losing variant only loses for 2-4 weeks, but the learning resulting from that could be game changing.

As for program KPIs that work, it really depends on the program. Ben Labay, Managing Director at Speero, told me it’s really a matter of improving test velocity and test quality.

I also like program metrics, like number of tests run, win rate, and win per test.

But it depends at what scale your program is operating already and what your experimentation strategy is.

6. Ignore Experiment Design and Statistics

Here’s my thinking:

A bad test is worse than no test.

If you don’t run a test, you’re inherently saying that you’re okay using your gut and taking that risk. You can approach it with some humility. There is no data, therefore, let’s just go with our gut.

But when you run a bad test, you have none of the certainty involved with proper statistics, but you have all of the confidence that you’ve made a “data-driven decision.”

If you’re just starting out in experimentation, you’ll likely need to run a few bad tests. Learn as you go. Makes sense, especially if you don’t have data science resources.

But if you’re serious about your program and want to be an experimentation-centric company like Booking.com or Netflix, it might pay to invest in some data literacy.

It’s a messy process, so you’ll never fully iron out statistical noise and mistakes. But you can do a lot to eliminate the most common ones.
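As a starting point, two of the most common pieces of “proper statistics” are a pre-test sample size estimate and a post-test significance check. Here’s a minimal sketch using scipy; the baseline rate, minimum detectable effect, and thresholds are assumptions for illustration, not recommendations.

```python
# Minimal sketch of pre-test sample sizing and a post-test two-proportion z-test.
# Baseline rate, lift, alpha, and power below are illustrative assumptions.
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p_baseline, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed in each arm to detect the given relative lift."""
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

def two_proportion_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (conversions_b / n_b - conversions_a / n_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# Detecting a 10% relative lift on a 5% baseline takes roughly 31k visitors per arm.
print(sample_size_per_variant(0.05, 0.10))
# 500/10,000 vs. 560/10,000 conversions: p is about 0.06, not conclusive at alpha = 0.05.
print(round(two_proportion_p_value(500, 10_000, 560, 10_000), 3))
```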

7. Don’t Question or Invest in Your Data Infrastructure

I’ve done a lot of consulting and I’ve worked at several companies.

I’ve never seen a perfect Google Analytics setup.

In many of my roles, I’ve spent up to 50% of my time working on data infrastructure. If you can’t measure it, you can’t experiment on it. And if your data is not trustworthy, your experiments won’t be trustworthy.

And if you can’t trust your experiments, you probably shouldn’t run them. It costs time and money to run experiments, and if you can’t trust the results, what are you really gaining from running them?

Number one, expect to spend a lot of time and money investing in your data infrastructure. If you’re not prepared to do this, you’re not ‘data-driven,’ you’re just talking the talk (AKA lying, even if just to yourself).

And even after investing in data infrastructure and talent, question the numbers you see. If something looks wrong, chances are, it is. As my friend Mercer always says, “trust, but verify.”

8. Change Strategy Frequently

Sometimes companies say they value experimentation, but what they actually mean is they flip flop on strategy constantly and have no vision.

They misappropriate the word “experiment” as many people do. Sometimes people mean “try something new” when they say experiment. Sometimes people mean they don’t have a strategy, so they pivot fast.


Regardless, experimentation is best used as scaffolding to support a cogent business strategy.

Experiments can and should help bolster, alter, or change strategies when necessary.

But if the strategy itself is constantly changing, your experimentation team will never have the runway it needs to invest in longer-term projects, to accumulate incremental wins, or to learn enough about particular UI features or customer segments to exploit that knowledge.

Experimentation *does* decouple strategy from top down planning, in a way; or rather, it sets limits and checks on managerial decisions led by gut feel. It kills the HiPPO.

But it doesn’t replace the need for strategy and vision, which effectively communicates the long term game plan as well as what you’re not going to do.

Strategy should guide the direction of experimentation and vice versa. But to change strategy frequently in absence of very good justification just results in whiplash and disappointment.

9. Cherry Pick Results

If you do it right, an experiment can be interesting beyond the aggregate delta in conversion rate or your metric of choice.

Even if you’re trying to improve, say, average conversion rate on an ecommerce site, you’ll still learn other stuff.

You’ll learn if a given segment responds more or less favorably to the treatment. You’ll learn if there are any tradeoffs with an improved conversion rate (does it, for example, reduce average order value?). And you’ll learn what effects a treatment has on varying user behavior signals like repeat visits, engagement, and pageviews.

You cannot, however, pick and choose which of these metrics determines if an experiment was a winner *after the fact.* That’s called “HARKing” (hypothesizing after results are known). It’s the “Texas sharpshooter fallacy,” painting a bullseye only after the shots have landed in the side of the barn.


It’s easy to make any experiment or campaign look like a winner if you search far and long enough for a metric that looks favorable.
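A quick way to see why: if you check enough metrics, one of them will look “significant” by chance alone. A tiny sketch, assuming independent metrics each checked at a 5% significance level:

```python
# If you check k independent metrics, each at a 5% false-positive rate,
# the odds that at least one looks like a "winner" by chance alone grow quickly.
alpha = 0.05
for k in (1, 3, 5, 10, 20):
    family_wise_error = 1 - (1 - alpha) ** k
    print(f"{k:>2} metrics checked -> {family_wise_error:.0%} chance of a spurious winner")
```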

Do this consistently, and you’ll reduce the value of experimentation to rubble. It will then be used as validation for what you already wanted to do, which increases the cost of experimentation, lowers the value, and in turn, destroys the expected value of the program.

I’ve given this example a few times before, and it’s an older one, so forgive me. Buffer wrote about A/B testing and gave this example:

Version A:

Version B:

Now, without the “top tweet” pin, I couldn’t tell you which of those was the winner. There are five metrics, and some favor variant A while others favor variant B (the sample size also looks somewhat small, but that’s beside the point).

Anyway, they somehow chose Version B, I suppose because of higher retweets and mentions. But what if Clicks was the metric that mattered?

People cherry pick metrics for two reasons:

  • Ignorance
  • Incentives

The first example is simple enough to combat. Invest in data literacy and processes that mitigate these mistakes.

The second is a matter of goals and incentives. If you have a culture of “success theater” that promotes only winning campaigns, surely you’ll start only hearing about winning campaigns. People aren’t incentivized to fail or show their failures; they’re incentivized to become spin doctors, messengers with only good news.

This is bad news. Create a culture where failure is expected, but you learn from it and improve based on what you learn. That’s a huge value of an experimentation program.

Who hasn’t heard someone justify a campaign that failed to bring revenue by saying something like “yeah, but it was still worth it because it raised our brand awareness / engagement / etc.”?

If brand awareness was the goal up front, fine. But choosing that as a justification after the fact just obfuscates the experiment design and therefore learning.

10. Keep a Tight Grip on Experimentation at Your Company

Unfortunately, the worst thing you can do for an experimentation program is dictate it entirely from the top down.

You probably have good intentions, and in the beginning days, *someone* needs to dictate what is tested.

But people are inspired by autonomy and mastery, and when you take those two things away from them, you demotivate the team and people start going through the motions. Not only that, but you bottleneck the idea and insights flow, resulting in a narrower range of experiments.

It may be tempting as the CEO, CMO, VP, whatever, to tell your team what they should and shouldn’t test. But resist. Give some ownership and trust.

Beyond that, experimentation ideally expands beyond your own team.

Here’s the path I often see experimentation programs follow: Decentralized > Centralized > Center of Excellence

Image Source

In the decentralized model, individual teams and people start running experiments with little or no oversight, guardrails, or strategy. Someone takes a Reforge or CXL class, wants to run tests, and starts doing it.

Executives then realize the value of experimentation, so they hire or spin up a specialized team. Sometimes this is called a CRO team, sometimes a growth team, sometimes an experimentation team. They’re focused on experimentation and optimization and own all efforts. They’re solely accountable and responsible for experiments; no one else can run them.

At this stage, you eliminate many errors with experiments, but you cap the value. To become an experimentation-driven organization, you need to democratize, support, and enable other teams to run experiments.

This leads to the center of excellence model, where you have a centralized specialist team of experimenters and data scientists who own and manage the tools, processes, and culture around experimentation, and they enable and educate others. They become cheerleaders for experimentation in a way. Their focus moves away from individual experiments to helping others get up and running autonomously.

When I say that CRO is an operating system, this is what I mean. To unlock the value of experiments, it can’t be bottlenecked inside one person or team’s brain. It has to be a methodology by which any team can make better decisions using data and controlled trials.

This is how the best programs in the world – Microsoft, Booking.com, Netflix, Shopify, etc. – operate.

Image Source

Conclusion

Unfortunately, there are more ways to ruin an experimentation program than there are to build and maintain one.

To build and maintain one, you need just a few things:

  • Highly motivated individuals
  • Executive buy-in and understanding
  • Sufficient traffic
  • Sufficient budget

Have those, and the rest will fall into place.

However, an experimentation program can be derailed by seemingly subtle things, like cherry picking results, rewarding only winning tests, choosing the wrong metrics, or underinvesting in resources and infrastructure.

You may think “I’ll hire a specialist and give them Google Optimize, and that’s enough,” but it’s not. Experimentation is inherently difficult and cross-functional. It’s a garden that requires nourishing, but if you water it and care for it, it’ll be a perennially productive asset for you.

The post 10 Ways to Wreck an Experimentation Program appeared first on Alex Birkett.

]]>
The 11 Best Personalization Platforms in 2025 https://www.alexbirkett.com/personalization-software/ Fri, 15 Oct 2021 15:45:07 +0000 https://www.alexbirkett.com/?p=2611 Statistics suggest that 72% of customers are likely to engage with brands and messages customized to their specific concerns. Automatically adapting your customer experience based on past behavior is the way to get new customers and get them to come back time and time again. It’s the golden rule of marketing. Personalized emails, personalized advertisements, ... Read more

The post The 11 Best Personalization Platforms in 2025 appeared first on Alex Birkett.

]]>
Statistics suggest that 72% of customers are likely to engage with brands and messages customized to their specific concerns.

Automatically adapting your customer experience based on past behavior is the way to get new customers and get them to come back time and time again.

It’s the golden rule of marketing.

Personalized emails, personalized advertisements, landing pages, email sequences…the list goes on. It’s all about targeting your prospects/customers with the right message via the medium they prefer at that time.

And personalization apps let you do just that.

The 11 Best Personalization Software

Here are my top picks for the best personalization software:

1. VWO

Best For: Identifying user behavior using A/B testing, heat maps, on-page surveys, and session recordings.

G2 Score: 4.2

A/B testing and then personalizing your landing page to improve its conversion rate and generate leads can be highly time-consuming and expensive.

And that’s where VWO, one of the most popular conversion rate optimization (CRO) and A/B testing tools, comes into the picture.

Marketers use the tool to carry out A/B split tests on landing pages, blogs, email campaigns, or even complete websites.

VWO helps you conduct all the following tests and experiments:

  • A/B testing
  • Multivariate testing
  • Split URL testing
  • Server-side testing
  • Mobile app testing

Aside from the ability to run different types of tests, VWO helps you gauge specific user behavior using heat maps, scroll maps, click maps, and even session recordings.

While click maps and scroll maps help you understand visitors’ scrolling and clicking patterns, session recordings allow you to track their precise movements on your website. You’ll also be able to identify their friction points, mouse trails, and the entire buyer journey.

It’s almost like you’re sitting right beside your audience while they’re browsing your website.

Nosey Parker, eh?

Well, all’s fair in love and marketing!

And it’s not like you’re privy to your customer’s most private thoughts. You just want to determine their interest area to provide them with the most personalized (hence, optimum) customer experience.

So, it’s all in good faith and legal!

Then you have on-page surveys and NPS scores that will help you ask direct questions and see what needs to be edited on your site.

VWO also provides detailed analytics and reporting of all the tests conducted. You can even filter results based on different segments and channels.

Cons:

  • It’s essentially an A/B testing and heat map platform, not solely a personalization app, though you can use it to identify user behavior and make changes based on those interactions.

Pricing:

Quote-based. They also offer a free plan.

2. RightMessage

Best For: All types of businesses and marketers.

G2 Score: N/A

RightMessage is a website personalization platform specializing in website design, digital marketing, and social media. It also improves email conversations by automatically creating the right email at the right time based on recipient behavior.

They help you monitor your audience by giving insights about your website visitors, what they are looking for, where they come from, and what they are doing on your website.

You’ll also be able to uncover the conversion rates (based on different segments), what type of audience has the lowest conversion rates, and more.

I love how visual and easy to comprehend their statistics are.

You be the judge yourself:

Using the information unearthed, RightMessage helps you create personalized website elements like surveys, opt-in forms, quizzes, and even non-invasive CTAs to generate more leads.

RightMessage: Opt-in form

To build these website elements, they use a behavioral segmentation engine that tracks your visitors’ activities and creates a unique profile for each visitor.

And if you’re big on case studies, this tool will prove to be especially convenient.

They have a “Dynamic Case Studies” feature that personalizes the case studies shown based on your site’s audience. For instance, it will display testimonials, case studies, and similar proof that align with the audience on your website.

They also enable Account-Based Marketing (ABM), which includes the ability to address a returning lead by their name. Or you can even swap the generic “Buy Now” CTA buttons with “Upgrade” offers for specific visitors.

Not just your website and landing pages, RightMessage helps you personalize your email messaging as well.

Emailing your website visitors is probably the next step in your sales funnel, after all. And RightMessage can do wonders for your email marketing strategy if you use it in tandem with a sales funnel platform.

After collecting all the behavioral and survey segmentation data on your users, RightMessage saves it to your email marketing database. You can then use this data to craft relevant onboarding and welcome emails for those visitors.

And that’s not it.

RightMessage enables two-way synchronization with your email marketing software to gather information on visitors’ past purchases. They again use this data to provide a hyper-personalized experience.

Other features include:

  • Creates dynamic sales pages for each visitor.
  • Personalized testimonials and case studies.
  • Detailed statistics.
  • Unlimited sub-accounts and websites.
  • Auto-segment affiliates by behavior.
  • Craft product descriptions based on user behavior.
  • Creates landing page variations based on targeted data and ads.

Cons:

  • Limited integrations. It doesn’t work with Zapier either.
  • Personalization is only available with the most expensive plan.

Pricing:

Pricing starts at $79/month for up to 10,000 visitors per month on the CTA plan and goes up to $179/month for the Personalized plan. There’s also a 14-day free trial.

3. Mutiny

Best For: Mid-sized B2B companies.

G2 Score: N/A

Have you ever wanted to figure out what’s going to get people engaged with your content? Mutiny’s personalization software is designed to identify your visitors, then take that data and create real-time personalized experiences based on their interests.

It has a streamlined, step-by-step process.

To start with, Mutiny integrates with multiple social media platforms and data analytics tools (including Salesforce, Marketo, Google Analytics, Clearbit, and more) to identify your website visitors.

They use natural language processing to identify and tag your audience based on their website activity, industry, size, ad campaign, and more.

And that’s just the first step.

Next, Mutiny leverages AI technology to recommend the optimum audience segments for personalization. These recommendations depend on on-site visitor behavior and potential conversion rates.

Next, they’ll suggest proven strategies that have worked for other B2B companies and will even write personalized headlines for you.

The fourth step involves editing, adding, or deleting website elements: anything on your website, including CTAs, modals, surveys, and more.

And it doesn’t require any rigorous work or coding know-how. Mutiny offers a visual editor and claims to support every CMS (Content Management System) and frameworks like React, Angular, and Vue.js.

Finally, you can analyze how your changes are performing using automatic hold-out testing. You can either let them optimize everything for you or test multiple variations manually.

You can also use Mutiny to create and customize personalized pages and ad campaigns for outbound campaigns.

Their integration with Slack is another bonus. For example, you and your team will directly get notified in Slack every time a target contact views your ad campaign or landing page.

Cons:

  • Limited customization capabilities.

Pricing:

Not available on the official site.

4. Intellimize

Best For: Mid-sized and large enterprises.

G2 Score: 4.9

Intellimize uses machine learning to help you create a dynamic, personalized website. It simultaneously tests various marketing ideas on your website to see what content and messages work best.

What I liked best about Intellimize is its use of Artificial Intelligence and Machine Learning technology. They run all combinations of experiences and data to determine what converts the most leads, without any human intervention.

It eliminates the need for A/B testing and rule-based personalization. Both are great ways to identify your audience and provide them with a personalized experience.

That said, marketers tend to bypass tens of essential rules in the process, and it all becomes a mess.

And apparently, that’s what made Intellimize look towards machine learning.

Intellimize doesn’t need any preliminary data – their machine learning automatically finds the best marketing strategy and then adapts to each visitor’s experience. They use different data points, such as location, device type, day, time, the previous behavior of the visitor on the website, etc.

To make their job easier, you can even share first or third-party data with Intellimize to personalize your site even better for unique visitors.

All of this results in personalized headlines, messages, images, pages, layouts, and forms relevant to each visitor.

However, note that Intellimize focuses solely on website optimization. You won’t find any options to supercharge your email marketing content.

Finally, they don’t cut any corners when it comes to reporting. You’ll get access to campaign reports to identify the performance of your website before and after the optimization. You’ll also be able to monitor parameters like traffic source, date and time, device, URL parameter, location, and more – from one dashboard.

Other key features include:

  • Features case studies relevant to the customer.
  • Shows relevant customer quotes, case studies, and reviews.
  • The ability to set optimization goals for your objectives.
  • Segments and filters your website visitors.
  • You can preview or pause your website optimization campaigns whenever you want.

Cons:

  • Steep learning curve.
  • Integration with third-party sites can be tricky.

Pricing:

Pricing is not available on the website. You can request a quote and a free demo.

5. Optimizely

Best For: A/B testing and multivariate testing.

G2 Score: 4.3

Optimizely is an all-in-one marketing platform for experimentation, recommendation, digital experience, digital marketing, and more.

It can be both a good thing and a bad thing.

Good, because you get so many functions under the ambit of a single platform.

Bad, because personalization is not their sole focus. However, they do offer everything you need to personalize your audience’s experience.

For starters, Optimizely takes not only your customer’s referral source into consideration but also what they are likely to do next.

How do they do it?

They set goals and use machine learning to predict customer behavior.

What’s more, they provide one-click integration, allowing you to connect your Optimizely dashboard with your data channels. The Optimizely platform will extract data from your current platforms and test multiple ideas and combinations.

Finally, they will turn these data models into comprehensible customer profiles. You can then engage with your customers on a one-to-one basis and personalize their experience.

It primarily uses A/B testing, multivariate testing, and AI-based technology to help you personalize the customer experience.

The entire process doesn’t seem as automated as Mutiny’s and would require a fair share of human involvement. However, Optimizely is a good option if you want to take advantage of its extensive suite of solutions.

All in all, you can use Optimizely to define your goals and set up awesome experiments that get more engagement, leads, or revenues.

Cons:

  • Various G2 reviews hinted at intermittent outages.
  • The UX could be more intuitive.

Pricing:

Quote-based.

6. OmniConvert

Best For: Large enterprises looking to enhance their conversion rate optimization.

G2 Score: 4.5

OmniConvert is a suite of tools for exploring, improving, and analyzing your marketing campaigns.

It performs A/B tests with multiple variations, segments audiences, and optimizes customer journeys to help you improve your website and increase conversion rates.

It also helps you unearth real-time data of your customers, including weather, geolocation, OS type, browser type, language, and more.

Another great part about OmniConvert is that it offers a built-in JS and CSS editor. The editor lets you create and modify website elements and even reuse previous codes between variations.

Other key features include:

  • CDN cache bypass.
  • Experiment debugger.
  • Advanced segmentation based on 40+ parameters.
  • Personalization based on cart total value, product name, and other on-page variables.
  • 100 overlay and pop-up templates ready to use and customize.

You can even opt for their hands-on help, where they’ll assign a data analyst to your analytics account. The analyst will study your visitors and how they interact with email, search, and social channels, then perform an audit based on the extracted data and results.

Additionally, they also have a suite of tools that makes complex ecommerce data easy to comprehend and visualize. You can also use it to generate insights and subsequently use the data to treat consumers differently on every channel.

Cons:

  • Some may find the tool a bit complex without inside help.
  • It requires extensive CSS knowledge at times.

Pricing:

Plans start from $167 per month, paid annually (or $320/month if you choose to pay monthly). The plan allows 50k views, A/B testing, web personalization, advanced segmentation, on-page surveys, and triggered overlays.

7. Proof

Best For: Adding social proof to your landing pages.

G2 Score: 4.4

Social proof is the best way to convince people to buy.

If someone told me that 5,000 industry experts had downloaded the eBook I was about to download, it would strengthen my resolve to download and read it myself.

However, you need REAL social proof. Not the kind where you pay some stranger on Fiverr to place some Tweets and Facebook posts on your behalf; I’m talking about some REAL numbers.

And true to its name, Proof helps you do just that!

Here are some examples:

Adding proof to your landing pages helps you build visitors’ trust and create urgency – leading to increased conversion rates.

You can use Proof to add the following elements to your site and landing pages:

  • The total audience that recently took action on your site.
  • Live visitor count.
  • Recent activity (live feed of visitors on your site).

Finally, you can run A/B tests to determine the impact of these “proof elements” on conversion. You’ll be able to see your conversion analytics on their intuitive dashboard.

Proof also provides live visitor count notifications, hot streaks notifications, recent activity notifications, A/B testing, live chat support, and more.

Proof also allows you to personalize website text, images, and CTAs using ready-to-use templates, A/B testing, and data-driven reports.

You can further personalize customer experience based on visitors’ traits and behavior data.

 

Other key features include:

  • No-code visual editor.
  • Personalize web applications.
  • Drag and drop elements like top bars and CTAs to your site.
  • Flexible API.
  • Works with every website builder and single-page apps.
  • Personalized content appears in under 60ms.

There’s also a 14-day free trial, allowing you to see how the software works before making the payment.

Cons:

  • Limited personalization features.

Pricing:

Starts from $66 per month when billed annually, for 10,000 unique visitors, unlimited domains, and unlimited notifications.

8. HubSpot

Best For: Medium and large-sized enterprises.

G2 Score: 4.4

HubSpot’s Marketing Hub has a large set of features for marketers to personalize their website, web elements, and email campaigns.

You can run email campaigns that are specifically personalized to each visitor, use segment targeting to get a more diverse audience, personalize your website elements, and more.

HubSpot’s core personalization features include:

  • The ability to send personalized, time-optimized email campaigns.
  • Triggering lead capture pop-up (including exit-intent) forms based on customer behavior.
  • Customize CTAs and other website elements based on each customer’s journey.

There’s a “Smart Content” feature that experiments with different versions of your content based on specific consumers’ devices, referral sources, and more. For example, you could create variations for customers coming from different referral sources or devices.

In addition, HubSpot also provides marketing automation features and ready-to-use workflows to nurture and score leads, personalize email campaigns, automate cross-functional operations, and more.

Other key features include:

  • Account-based marketing.
  • The ability to run A/B tests.
  • SEO-optimized web pages and blog posts.
  • Campaign management tools.
  • Event-based segmentation.
  • Landing page builder and mobile-optimized templates.
  • The ability to track your performance after personalization with built-in analytics and custom reporting

Cons:

  • The knowledge base should be more extensive.

Pricing:

Pricing plans start from $45 per month for up to 1,000 marketing contacts.

9. Salesforce Interaction Studio (formerly Evergage)

Best For: Mid-sized enterprises.

G2 Score: 4.3

Interaction Studio (formerly Evergage) is a Salesforce product that provides real-time personalization and interaction management.

The tool helps you extract pertinent data on your customers and then use AI to deliver a personalized customer experience. It enables AI-driven optimization, cross-channel engagement, A/B testing, and analysis.

Once you have customer data, the tool automatically categorizes all products and content based on machine-learning recommendations. It segments data based on referring source, geo-location, weather, company, industry, and more.

Once you understand the business context, it recommends the most relevant products and content based on your customers’ characteristics and preferences.

It’s also an omnichannel personalization platform that helps you guide each customer along the most appropriate journey, triggering interactions where they are or in the channel they prefer, including owned, social, and paid media.

And not just that, it also helps you connect your consumers’ digital and offline behavior. Salesforce’s Interaction Studio assists with interactions handled by call center agents, in-store associates, or at kiosks and ATMs.

Other key features include:

  • Real-time customer segmentation.
  • Gauge customer behavior and trigger personalized messages via mobile app.
  • A/B test algorithms and optimize experiences.
  • Track metrics like sign-ups, purchases, downloads, and more
  • Predict future customer behavior using data collected in a rich data warehouse environment.

Cons:

  • The platform is robust. However, it can be challenging to grasp all the information at once.
  • The user interface should also be more modern and easier to use.

Pricing:

Quote-based.

10. Unbounce

Best For: Individual users and small to mid-sized businesses.

G2 Score: 4.4

Unbounce is a landing page builder that helps you create personalized, high-converting marketing campaigns without the need for a developer.

It offers various features to help you optimize and personalize your content and website.

For one, Smart Builder extracts customer data from over 1.5 billion conversions, allowing you to identify what layout, content, and headlines will help you convert your target audience.

The Smart Copy feature is an AI writing tool that can create content within minutes customized with your brand and target audience in mind.

Then there’s the Smart Traffic feature that identifies customer behavior and directs each visitor to the landing pages most likely to convert them.

Additionally, it lets you run A/B tests, integrate with your favorite CRM, and automate your follow-up emails using Unbounce’s easy drag-and-drop interface.

Cons:

  • You might need a little bit of HTML and CSS knowledge.

Pricing:

Starts from $90 per month for up to 20,000 visitors and 500 conversions. There’s also a 14-day free trial.

11. Instapage

Best For: Freelancers, marketers, and small to mid-sized businesses.

G2 Score: 4.4

Just like Unbounce, Instapage is a drag-and-drop website and landing page builder that lets you create personalized web pages.

The platform is ideal for people who don’t have time to create their own landing page because it allows you to design professional websites (and squeeze pages) in minutes.

When it comes to personalizing landing pages to cater to your audience’s requirements, Instapage enables A/B testing and dynamic content. It dynamically directs potential customers to a relevant landing page for each ad. The tool aligns the landing page elements based on visitor-level data like keywords, firmographics, and demographics.

Other prominent features include ad mapping, detailed analytics, experimentation, and more.

Cons:

  • Limited personalization features.
  • Not sufficient for creating a website with multiple pages.

Pricing:

Starts at $199 per year with no conversion limits.

What Features to Look For in Good Personalization Software?

Every marketer knows the value of good personalization software. Personalized and relevant content is proven to stun and amaze visitors and give you a leg up on the competition. But what do you look for in good personalization software?

There are some features that stand out, making them easier to identify.

  • Utilization of AI and Machine Learning – How can you ensure your digital platform is pumping out personalization on steroids? That’s where personalization software, including AI and ML, kicks in. These are just fancy terms that pretty much mean “it helps you figure out your target market better,” which is really what it all boils down to.
  • A/B Testing – A/B testing is something that will help you further personalize your site and increase your conversion rate.
  • The Ability to Collect User Data – Your app should have the ability to collect customer data so that you can understand what your customers are interested in, whether they are prone to purchasing, their future plans, etc. It will help you personalize customer service and enhance your operations.
  • Customer Segmentation – Your personalization app should be able to segment and target your audience based on their preferences, demographics, location, behavior, and more.

For example, if you sell mp3 players and accessories, the software should segment your market into teenagers, young adults, and oldies; or new-generation mp3 players and old-generation mp3 players; or those who buy mp3 accessories and those who don’t, etc. Understanding each of the most important types of customers is very valuable to you as a retailer because once you know who they are, you can tailor your business to suit them.

  • Built-in Editor – The editor will help you easily make changes to personalize your site, landing pages, ad campaigns, and more.

That’s a Wrap!

And that was my list of the 11 best personalization software tools that can help you boost your sales and conversion rates.

Personalization is crucial because today’s customers are used to having what they want. They are even more selective about the brands they buy from…they want something that has meaning for them.

And that’s where personalization apps enter the picture.

However, the personalization app you’ll pick should depend on your requirements.

For example, if you want to run A/B tests and personalize your web pages yourself, you might prefer Optimizely or VWO. To create personalized landing pages with dynamic content, pick either Unbounce or Instapage.

Review the aforementioned personalization solutions carefully and pick one that aligns with your requirements.

The post The 11 Best Personalization Platforms in 2025 appeared first on Alex Birkett.

]]>
The 5 Pillars You Need to Build an Experimentation Program https://www.alexbirkett.com/experimentation-program/ https://www.alexbirkett.com/experimentation-program/#comments Wed, 01 Sep 2021 16:12:18 +0000 https://www.alexbirkett.com/?p=2496 I believe in the power of experimentation. But most companies have stumbled tremendously in building powerful experimentation programs. See, the value of experimentation doesn’t rest upon the single hyperbolic A/B test win. The value is in building an experimentation program and culture that scales and helps cap risk, enable innovation and creative adaptation, and generate ... Read more

The post The 5 Pillars You Need to Build an Experimentation Program appeared first on Alex Birkett.

]]>
I believe in the power of experimentation.

But most companies have stumbled tremendously in building powerful experimentation programs.

See, the value of experimentation doesn’t rest upon the single hyperbolic A/B test win.

The value is in building an experimentation program and culture that scales and helps cap risk, enable innovation and creative adaptation, and generate insights and learnings to enable a data-driven company culture.

Oversold and misunderstood, conversion rate optimization specialists and experimentation experts are often expected to come into a company and magically boost performance through sheer will and experience.

I’ve worked on experimentation at several companies now, including some where I started or formed the foundations of their experimentation programs.

This article will cover the 5 critical pieces you need in place to build out a program. While this won’t be as useful to companies that already have successful programs, it should be useful for those who are struggling to get one started.

Importantly, I’m also going to outline what elements are overrated or unnecessary when building out an experimentation program.

Preamble: on Expected Value and Marginal Utility

You need a certain amount of traffic and scale to warrant experimentation.

There are a set amount of actions you can take within a finite time horizon, therefore any action incurs an opportunity cost by replacing something else you could have otherwise done.

If you don’t have enough traffic, the expected value of your experimentation program will almost certainly be negative. Think about it at a high level: experimentation costs money. You have to hire program managers, designers, and developers (or partition some of their time to experimentation). You have to invest in tooling to accomplish this stuff. And the experiments you run incur an opportunity cost as well.

Let’s assume you can win 40% of your experiments – an incredible win rate. If the value of those wins doesn’t supersede the costs of the program, the expected value is negative.

Additionally, low traffic experiences are incredibly hard to work on. Functionally, it means you need to either accept higher levels of uncertainty in your results or work much more slowly to build out “no-brainer tests” (in which case, you might as well just launch them and not run the experiment).
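To make “low traffic” concrete, here’s a back-of-the-envelope sketch, not a full power analysis; the baseline rate, relative lift, and weekly traffic figures are invented for illustration.

```python
# Rough estimate of how long a single two-variant test takes at a given
# traffic level, using the classic sample-size approximation for proportions.
from scipy.stats import norm

def weeks_to_run(baseline_rate, relative_lift, weekly_visitors,
                 alpha=0.05, power=0.80):
    """Rough number of weeks needed for a two-variant conversion rate test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    # Classic approximation for required sample size per variant:
    n_per_variant = ((z_alpha + z_beta) ** 2
                     * (p1 * (1 - p1) + p2 * (1 - p2))
                     / (p2 - p1) ** 2)
    visitors_per_variant_per_week = weekly_visitors / 2
    return n_per_variant / visitors_per_variant_per_week

# 2% baseline conversion rate, hoping to detect a 10% relative lift,
# with 5,000 visitors per week split across two variants:
print(f"{weeks_to_run(0.02, 0.10, 5_000):.1f} weeks")  # ≈ 32 weeks here
```

At a few thousand visitors a week and a small expected lift, a single test can easily take the better part of a year, which is usually the point at which “just ship it” wins on expected value.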

Many companies have been sold on the value of experimentation, which is great. But not every company needs to hire and build out a whole program – yet.

The one caveat: if you’ve got sufficient runway and traction and you want to start building the experimentation “muscle” and culture, your program can “operate at a loss” as long as you understand that’s what you’re doing and you’re building for the long term.

The 2 Bottlenecks to Building Experimentation Programs

Broadly, the two things you need to figure out are:

  • Technical challenges
  • Cultural challenges

For the first category, this is the functional ability to run and analyze experiments. The amount of traffic we have as well as the ability of our tools to functionally randomize units will set a threshold on how many tests we can run and analyze. I’d also put human resources into this category, because you need someone to ideate, run, and analyze your experiments.

Cultural challenges are somewhat more nebulous. They involve education and enablement and visibility, buy-in and evangelizing from leadership, and also, human resources (yes, humans bridge both challenges).

You can’t build an experimentation program without getting both of these in order. Think about these themes as you read about the 5 pillars of experimentation programs.

The 5 Pillars of an Experimentation Program

  1. Trustworthy Data
  2. Human Resources
  3. Leadership and Strategic Alignment
  4. Experimentation Technology
  5. Education and Cultural Buy-In

1. Trustworthy Data

The heartbeat of experimentation is data. Bad or no data means no experimentation.

The process of experimentation isn’t about knowing what works on a user interface and applying it. It’s not about sprinkling on some social proof here, some authority and trust symbols there, and slapping on an urgency-provoking headline (though those could all be great tactics).

No, experimentation is a process for reducing uncertainty in decision making, thus capping your downside risk and enabling creative innovation.

It’s a methodology that uses data as feedback to quickly determine the efficacy of a treatment and make a decision based on that feedback.

So, logically, if your data/feedback is flawed, your decisions will be, too. And if your decisions resulting from experiments are flawed, you’re better off not running them (remember expected value: there’s always a cost to experimentation, and the resulting reward needs to outweigh it).

Unfortunately, bad or missing data is the most common problem I’ve seen when working with companies, either full time in experimentation roles, consulting for CRO, or through running my content agency. Everyone’s got a messed up Google Analytics setup!

That should give you pause, but also give you solace. You’re not alone. You just need to spend the time and resources to clean up your analytics – aka, invest in infrastructure – which many companies are unwilling to do.

This is one of those “slow down to speed up” steps.

Hire an analyst or at least a consultant. Have them determine the following:

  1. Are you tracking everything you need to be?
  2. Is the tracking precise or is it flawed?
  3. Is the data accessible to the right people at the right times?
  4. Are you integrating your data to fulfill a more holistic picture of user behavior?

By the way, these data audits aren’t a one-and-done type of thing. I like to revisit at least once a year, but preferably quarterly. And the best thing you can do is hire a directly responsible individual (DRI) to own your website or product analytics setup.

2. Human Resources

Many companies will think first of which CRO tools you need, but first, you need to figure out the people.

Get the right people, and the technology (while still important) doesn’t matter as much.

Who do you need?

It will massively depend on your context – industry, company, stage, resources, etc.

For early-stage tech companies, most experimentation efforts can live within the broader growth program. As such, you can get a well-rounded T-shaped marketer, someone who has taken Reforge or CXL courses and can get you from zero to one.

But for anyone hoping to build a robust and long term program, I don’t think it’s easy to do without at least one data scientist or analytical support.

Make no mistake: experimentation is hard.

There’s the strategic and program side of things, which should be filled by a PM or experimentation leader. There’s the technical side, which should be filled by dedicated growth engineers and designers who work with the PM or experimentation leader. And there’s the analysis side of things, and this is where so many people underinvest.

It’s also why most A/B test results are total bullshit – smoke and mirrors. It’s not due to malevolence; it’s because statistics and experimentation analysis are hard skills that take a long time to learn and apply.

I’ve been down this rabbit hole for years. I’ve written articles about A/B testing statistics and analyzed hundreds of tests. But I still prefer to have an actual analyst or data scientist to guide my decisions, because there are a million things that I don’t know and couldn’t know unless I dedicated my entire career to this knowledge set.
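To be clear, the basic arithmetic of a single comparison isn’t the hard part. Here’s a minimal sketch of a two-proportion z-test on made-up numbers; the hard parts are everything around it (validity checks, peeking, segmentation, multiple comparisons, and deciding what the result actually means for the business).

```python
# Minimal sketch: two-sided z-test for a difference in conversion rates.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))                         # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical results: 10,000 visitors per variant.
p_a, p_b, z, p = two_proportion_ztest(conv_a=500, n_a=10_000,
                                      conv_b=560, n_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
# z ≈ 1.89, p ≈ 0.058: suggestive, but not conclusive at alpha = 0.05,
# which is exactly the kind of judgment call an analyst earns their keep on.
```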

You’ve also got to decide how you want experimentation to sit in your organization. There are three common models:

  • Centralized
  • Decentralized
  • Center of Excellence

Image Source

Most people I’ve talked to seem to converge on the belief that the Center of Excellence model is the ideal end point, but it’s likely you’ll have to start out either centralized or decentralized.

Image Source

I recommend reading Merrit Aho’s great article on how to structure an experimentation team. Optimizely also has a great article on this.

Whatever structure you choose, you’ll need to have some of the following people dedicated to experimentation (some can be outsourced or freelance if you don’t have in-house hires):

  • Program leader / PM
  • Analysts / data scientists
  • Growth designers
  • Growth engineers
  • Growth marketing partners

3. Leadership and Strategic Alignment

If leadership isn’t strongly involved and bought-in on experimentation, the program is doomed to fail.

This includes a few components:

  1. Leaders must understand the value of experimentation and how it works
  2. Leaders must align on KPIs and motivators for the experimentation program
  3. Leaders must create conditions of psychological safety to allow for failure and tinkering

Two of these are summed up in Ben Labay’s great graphic on experimentation culture (trust the team and trust the goals):

Let’s look at the hypothetical opposite of the above components (which is quite common).

You get recruited to work for a company. Your title is growth manager, or experimentation manager, or CRO specialist. They recruited you because they read an HBR article on how booking.com and Microsoft are running experiments, or because they saw case studies on landing page optimization producing 300% conversion lifts.

So you come in and you’re dropped into a situation with no context, no KPIs, unclean data, and the expectation that you’ll get wins immediately. Losing tests are looked at as failures and they’re to be avoided. There’s no time to do customer research or audit your data; no, just go look at landing pages and tell us what to do differently.

Again, I want to reiterate: a CRO or experimentation expert isn’t a magician or landing page ninja. What we know is far outweighed by what we don’t know, and experimentation’s true value is unlocking unknown wins and rewards through risk-capped tinkering.

Let’s look at a better hypothetical situation:

You come in with the same title, but your VP of growth understands experimentation is a long game.

You’ve got resources to build out a team or hire agency or freelance resources. You’re given a window to explore the business context, set appropriate KPIs with the leadership team, and map out technological and process gaps in the system (for instance, you don’t have proper data logging and orchestration, so you prioritize that before headline tests).

You work cross-functionally with teams like product marketing, sales, customer success, brand, and product to understand the customer. You build a customer research protocol that includes UX panels, message testing, exploratory data analysis, and surveys.

This gets funneled into your new prioritization model, which is adapted to your organization’s specific capacity and needs. You weight dimensions like ease and impact, which you slot into your process based on engineering and design resources and testing capacity (calculated by traffic, conversions, and resource constraints).
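As a deliberately simplified sketch of what “weighting dimensions like ease and impact” can look like in practice — every test name, score, and weight below is hypothetical, and the weights should reflect your own resource constraints rather than a generic template:

```python
# Toy weighted prioritization model for candidate experiments.
candidates = [
    {"name": "Pricing page layout",   "impact": 8, "confidence": 6, "ease": 3},
    {"name": "Signup form reduction", "impact": 6, "confidence": 7, "ease": 8},
    {"name": "Homepage headline",     "impact": 4, "confidence": 5, "ease": 9},
]

# Weights tilted toward ease because, in this hypothetical, engineering
# capacity is the current bottleneck.
weights = {"impact": 0.4, "confidence": 0.2, "ease": 0.4}

def score(test):
    return sum(test[dim] * w for dim, w in weights.items())

for test in sorted(candidates, key=score, reverse=True):
    print(f"{score(test):.1f}  {test['name']}")
```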

Through this, you come up with clear quantitative KPIs and projects that will generate the most impact, and you also build out input (program) KPIs, such as conclusive test rates, time to production, and win rates.

Now, as you scale, you’ve got the statistical context to understand your test results, the business and customer context to run appropriate tests, and the resources to properly plan your testing program.

Andrew Anderson, a mentor of mine and head of optimization at ZenBusiness, explained the power of experimentation as the ability to generate impact at high leverage points, which requires leadership alignment and the ability to uncover unknown problems and solutions:

“What is important is that you have the ability to change the user experience at your highest scale points (Landing pages, home page, product page) and that you can track behavior to the single end goal that matters (in almost all cases it is RPV or leads). As long as you can do those parts and you can think in terms of what is possible and not in terms of just what you think will work, you will achieve great results.

Optimization can help take you in directions you never knew were important. It can make channels valuable that never were before, it can change how and where you interact with users, it can change what products matter and what doesn’t.

The only key is to let your users and the data tell you where to go and to not get too caught up on specific tactics or visions.”

Experimentation can’t be isolated from the broader business context; there’s no universal formula for CTA buttons, headlines, or landing page design. Leadership has to have the buy-in and understanding of the process.

If you don’t have this yet, I recommend having these difficult conversations with your leaders and managers:

  • What are our resource constraints?
  • What are our strategic goals?
  • What do we know about the customer and what do we want to know?
  • What will our experimentation program look like in 6, 12, and 18 months?

Here are a few A/B testing books that are written for business folks (less technical) that can help build context and understanding for the program:

These two articles are also great:

Or just send them to a conference like CXL Live.

4. Experimentation Technology

Which specific A/B testing tool you choose isn’t important, but you’ve gotta have the tech stack to enable your efforts.

The experimentation tech stack, holistically, is quite important (and increasingly complex if you root it all the way down to your data collection). Nowadays, the tools that you’ll be using might entail:

  • Data collection tools (Google Analytics, Snowplow, Heap, Wynter, HotJar, etc.)
  • Data integration and database tools (Segment, Workato, Snowflake, etc.)
  • Data accessibility tools (Data Studio, Tableau)
  • Data cataloging tools (data.world, Confluence)
  • Statistical analysis tools (R, Python)
  • Experimentation platforms (Conductrics, Convert, Optimizely, VWO)
  • Knowledge sharing tools (Effective Experiments, Notion)
  • Project management tools (Airtable, Clickup, Notion)

I can’t pretend to be able to tell you which tools to choose – that’s why it’s so important to think through the people you need first. Hire the right program manager and they’ll be able to evaluate what you need for your specific situation.

Additionally, your experimentation technology should scale and mature with your program. It’s likely you’ll start out with minimal tooling, often using free or cheap tools (Google Optimize, Google Analytics, and HotJar are the common ones).

As you build trust and generate ROI, the experimentation program flywheel takes hold. You find bottlenecks in your stack, invest in a more robust system, and that helps you generate further ROI.

Image Source

5. Education and Cultural Buy-In

Finally, if you really want your experimentation program to hum and not sputter, the company needs to be excited about it.

Not just you, lone wolf experimentation expert — you need to get sales, support, product marketing, campaigns, etc. stoked on experimentation.

Why?

Experimentation in isolation has a capped ceiling. There are only so many ways you can reorganize a landing page before you hit a local maximum.

To truly unlock the power of experimentation, you need to break down the barriers and experiment everywhere. You can’t do it alone. This is where the shift happens towards a center of excellence.

But with this shift comes the need for education and enablement; otherwise your company’s experimentation sophistication will be heterogeneous. Some teams will crush it, and some teams will run weakly powered button color tests.

So I look at this pillar in three sections:

  • Cheerleading and evangelizing
  • Enablement and support
  • Education and improvement

Cheerleading and evangelizing

If a tree (experiment) falls in the forest (is run), but no one is around to hear it (no one knows the result), does it make a sound (does it matter)?

Problem: most people have no idea what the fuck experimentation means.

Solution: it’s your job to fix that.

Regular cross-functional meetings, weekly or monthly experimentation review newsletters, office hours and learning sessions, and really, really good reporting and data visualizations help.

So does just dropping the term “hypothesis” and “experiment” in your casual meetings.

You’ve gotta be a cheerleader for A/B testing, otherwise y’all will slink back into HiPPO-driven decisions and gut feel.

Enablement and support

Both when you’re running your centralized program and when you start to scale to supporting other teams, it’s important to build a standardized system for experimentation with guardrails and known processes.

Craig Sullivan added this comment on Ben’s LinkedIn thread:

“Democratisation (various things tied up in this but essentially freedom, autonomy, standardisation, shared methods, systems, data).”

Completely agree.

It’s one thing to house all this interesting experimentation in your own brain, but there’s a limit to how much you can get done. Scale yourself, and build repeatable processes that others can follow. Shift your thinking from being a player to being a coach.

Education and Improvement

Your team will grow, the company will hire new people, and new technologies and limitations will continually pop up.

The most adaptable teams with growth-mindsets as well as the tangible programs to fuel education win in the long term.

Personally, I think you can outsource a lot of this at this point. Sure, Airbnb has a Data University. But they’ve also got a ton of resources to spend on it.

You can probably just get a team account to CXL Institute.

You can also give an unlimited budget for books.

And you can set aside office hours and internal learning sessions.

But keep teaching and keep learning. That’s how you stay on top and become a top 1% experimentation program.

Ignore These Three Things (For Now…)

Everything comes down to expected value and marginal utility. Don’t run before you can walk. Eventually, the cool shit becomes valuable (or at least a cost you can afford). But first, get your room in order:

  1. Clean up your data.
  2. Hire the right people.
  3. Get your leadership bought-in and involved.
  4. Get your core tech stack in place.
  5. Educate and support those running experiments

Ignore the following until you’ve got that done:

Advanced Exploratory Data Analysis & Statistical Techniques

Matt Gershoff wrote an excellent article on what makes a useful A/B testing program.

He talks about what not to worry about, using the metaphor of mice and tigers:

“As for the mice, they are legion. They have nests in all the corners of any business, whenever spotted causing people to rush from one approach to another in the hopes of not being caught out. Here are a few of the ‘mice’ that have scampered around AB Testing:

  • One Tail vs Two Tails (eek! A two tailed mouse – sounds horrible)
  • Bayes vs Frequentist AB Testing
  • Fixed vs Sequential designs
  • Full Factorial Designs vs Taguchi designs

There is a pattern here. All of these mice tend to be features or methods that were introduced by vendors or agencies as new and improved, frequently over-selling their importance, and implying that some existing approach is ‘wrong’. It isn’t that there aren’t often principled reasons for preferring one approach over the other. In fact, often, all of them can be useful (except for maybe Taguchi MVT – I’m not sure that was ever really useful for online testing) depending on the problem. It is just that none of them, or others, will be what makes or breaks a program’s usefulness.”

As you’re building up your program, it’s likely that there are fewer nodes or points of leverage than you think. Find them, and move the needle through a structured experimentation process.

Perhaps when you reach Facebook’s scale, you can worry about tracking literally everything, doing continuous exploratory data analysis, and uncovering every little insight through data analysis. Perhaps you can then worry about quasi-experimentation frameworks and the ideological differences between Bayesian and Frequentist statistics.

But in terms of cost vs benefits, it just doesn’t balance out for most programs.

Here’s how Andrew Anderson put it:

“There are a lot of things that are unnecessary or just not valuable at all to a program, no matter what phase it is in. These include a fancy testing tool, deep analysis of existing user patterns (which are only what the user is doing, not what the user should be doing), the greatest design in the world or even tracking on all parts of your website.

Micro conversions and empty analysis are just tools to make you feel better about a path, they don’t actually make the path that much more valuable. Even worse is thinking that optimization is just something you do once you have everything else squared away.”

As I’ve written about previously, too much data can be a real problem. It causes you to miss the forest for the trees and spend your time spinning your wheels looking for insights.

Nassim Taleb actually summed it up best in Antifragile:

“More data – such as paying attention to the eye colors of the people around when crossing the street – can make you miss the big truck. When you cross the street, you remove data, anything but the essential threat.”

Personalization

If you haven’t locked down your A/B testing strategy, I’d put personalization on the back burner.

Personalization, the process at least, should fall under the same strategic umbrella as your experimentation efforts. How you choose to target segments, design content and experiences, and adapt these over time are all functionally parts of the same decision-theoretic process you develop by running A/B tests.

So don’t run before you can crawl or walk.

Personalization introduces further complexity. Instead of managing a two-part multiverse, you end up managing an N-part multiverse made up of very small segments that are each getting different treatments and experiences.

If you can’t get trustworthy data and run conclusive A/B tests, please try to ignore the hyperbolic marketing messages that personalization vendors are putting out. A/B testing isn’t dead, and no piece of personalization technology can save you from your existing weaknesses. In fact, when vendors use this kind of language, I’d use it as a negative signal.

Run the opposite direction when you hear news of a “silver bullet” solution.

The Latest Shiny Piece of Technology

In general, don’t get swept away by ephemeral hype cycles. Like content marketing, the fundamentals are what win ball games.

If you don’t have engineering resources to scope tests beyond button colors and headline tests, you probably shouldn’t invest in building a home brew A/B testing platform.

If you can’t learn what messaging resonates with your users, you probably don’t need predictive personalization or bandit algorithms.

The tool is an extension of the strategy, never the other way around. A lot of human resource opportunity cost is wasted on vendor demos for shit that won’t move the needle for you anyway.

Matt Gershoff puts it like this:

“At least to me, the biggest tiger in AB Testing is fixating on solutions or tools before having defined the problem properly. Companies can easily fall into the trap of buying, or worse, building a new testing tool or technology without having thought about: 1) exactly what they are trying to achieve; 2) the edge cases and situations where the new solution may not perform well; and 3) how the solution will operate within the larger organizational framework.”

Conclusion

I get that there’s no one way to build an experimentation program, that all companies differ in their scale, complexity, and context.

But you really can’t do experimentation without trustworthy data, human talent to run the program, and leadership and goal alignment. And while you need a tech stack that allows you to run experiments, chasing shiny new tools is a recipe for disappointment.

So I know the core foundational building blocks, and by process of elimination – via negativa – I know the things you should avoid when building up your program.

Truth is, a lot of success in the business world (and elsewhere) seems to come from really mastering the unsexy foundational building blocks. And I’ve outlined my ideas on those building blocks of experimentation here.

Do you disagree with me? Any additions? Please, comment below (or email me and debate me on my podcast about it).

The post The 5 Pillars You Need to Build an Experimentation Program appeared first on Alex Birkett.

]]>
https://www.alexbirkett.com/experimentation-program/feed/ 1
What’s the Ideal A/B Testing Strategy? https://www.alexbirkett.com/ab-testing-strategy/ Sun, 11 Oct 2020 20:28:56 +0000 https://www.alexbirkett.com/?p=1178 A/B testing is, at this point, widespread and common practice. Whether you’re a product manager hoping to quantify the impact of new features (and avoid the risk of negatively impacting growth metrics) or a marketer hoping to optimize a landing page or newsletter subject line, experimentation is the tried-and-true gold standard. It’s not only incredibly ... Read more

The post What’s the Ideal A/B Testing Strategy? appeared first on Alex Birkett.

]]>
A/B testing is, at this point, widespread and common practice.

Whether you’re a product manager hoping to quantify the impact of new features (and avoid the risk of negatively impacting growth metrics) or a marketer hoping to optimize a landing page or newsletter subject line, experimentation is the tried-and-true gold standard.

It’s not only incredibly fun, but it’s useful and efficient.

In the span of 2-4 weeks, you can try out an entirely new experience and approximate its impact. This, in and of itself, should allow creativity and innovation to flourish, while simultaneously capping the downside of shipping suboptimal experiences.

But even if we all agree on the value of experimentation, there’s a ton of debate and open questions as to how to run A/B tests.

A/B Testing is Not One Size Fits All

One set of open questions about A/B testing strategy is decidedly technical:

  • Which metric matters? Do you track multiple metrics, one metric, or build a composite metric?
  • How do you properly log and access data to analyze experiments?
  • Should you build your own custom experimentation platform or buy from a software vendor?
  • Do you run one-tailed or two-tailed t-tests, Bayesian A/B testing, or something else entirely (sequential testing, bandit testing, etc.)? [1]

The other set of questions, however, is more strategic:

  • What kind of things should I test?
  • What order should I prioritize my test ideas?
  • What goes into a proper experiment hypothesis?
  • How frequently should I test, or how many tests should I run?
  • Where do we get ideas for A/B tests?
  • How many variants should you run in a single experiment?

These are difficult questions.

It could be the case that there is a single, universal answer to these questions, but I personally doubt it. Rather, I think the answers differ based on several factors, such as the culture of the company you work at, the size and scale of your digital properties, your tolerance for risk and reward, your philosophy on testing and ideation, and where you are in terms of company resources and traffic and testing capabilities.

So this article, instead, will cover the various answers for how you could construct an A/B testing strategy — an approach at the program level — to drive consistent results for your organization.

I’m going to break this into two macro-sections:

  1. Core A/B testing strategy assumptions
  2. The three levers that impact A/B testing strategy success on a program level.

Here are the sections I’ll cover with regard to assumptions and a priori beliefs:

  1. A/B testing is inherently strategic (or, what’s the purpose of A/B testing anyway?)
  2. A/B testing always has costs
  3. The value and predictability of A/B testing ideas

Then I’ll cover the three factors that you can impact to drive better or worse results programmatically:

  1. Number of tests run
  2. Win rate
  3. Average win size per winning test
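Before digging in, here’s a crude sketch of how these three levers multiply together. All numbers are hypothetical, and it ignores compounding, interaction effects between wins, and the perishability of wins.

```python
# Rough program-level math: tests run x win rate x average lift x value touched.
tests_per_year = 40                 # lever 1: number of tests run
win_rate = 0.20                     # lever 2: share of tests that win
avg_lift_per_win = 0.03             # lever 3: average relative lift of a win
baseline_annual_value = 5_000_000   # value of the metric the tests touch

expected_impact = (tests_per_year * win_rate * avg_lift_per_win
                   * baseline_annual_value)
print(f"Expected annual impact: ${expected_impact:,.0f}")  # $1,200,000 here
```

The practical takeaway from the multiplication is that doubling any one lever has roughly the same effect on program impact, so you can focus on whichever lever is cheapest for you to move.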

At the end of this article, you should have a good idea — based on your core beliefs and assumptions as well as the reality of your context — as to which strategic approach you should take with experimentation.

A/B Testing is Inherently Strategic

A/B testing is strategic in and of itself; by running A/B tests, you’re implicitly deciding that an aspect of your strategy is to spend the additional time and resources to reduce uncertainty in your decision making. A significance test is itself an exercise in quantifying uncertainty.


This is a choice.

One does not need to validate features as they’re shipped or copy as it’s written. Neither do you need to validate changes as you optimize a landing page; you can simply change the button color and move on, if you’d like.

So, A/B testing isn’t a ‘tactic,’ as many people would suggest. A/B testing is a research methodology at heart – a tool in the toolkit – but by utilizing that tool, you’re making a strategic decision that data will decide, to a large extent, what actions you’ll take on your product, website, or messaging (as opposed to opinion or other methodologies like time series comparison).

How you choose to employ this tool, however, is another strategic matter.

For instance, you don’t have to test everything (but you can test everything, as well).

Typically, there’s some decision criteria as what we test, how often, and how we run tests.

This can be illustrated by a risk quadrant I made: low risk, low certainty decisions can be decided with a coin flip, while higher risk decisions that require higher certainty are great candidates for A/B tests.

Even with A/B testing, though, you’ll never achieve 100% certainty on a given decision.

This is due to many factors, including experiment design (there’s functionally no such thing as 100% statistical confidence) but also things like perishability and how representative your test population is.

For example, macro-economic changes could alter your audience behavior, rendering a “winning” A/B test now a loser in the near future.

A/B testing Always Has Associated Costs

There ain’t no such thing as a free lunch.

On the surface, you have to invest in the A/B testing technology or at least the human resources to set up an experiment. So you have fixed and visible costs already with technology and talent. An A/B test isn’t going to run itself.

You’ve also got time costs.

An A/B test typically takes 2-4 weeks to run. The period that you’re running that test is a time period in which you’re not ‘exploiting’ the optimal experience. Therefore, you incur ‘regret,’ or the “difference between your actual payoff and the payoff you would have collected had you played the optimal (best) options at every opportunity.”
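To put a rough number on “regret,” here’s a minimal sketch in Python; every figure is a hypothetical placeholder, not a benchmark. It models a 50/50 test in which the challenger happens to convert worse than the control for the duration of the test:

```python
# Hypothetical numbers: a 4-week, 50/50 test where the challenger happens to
# convert worse than the control for the whole test period.
visitors_per_week = 10_000
weeks = 4
control_cr = 0.050         # conversion rate of the current (optimal) experience
challenger_cr = 0.045      # conversion rate of the variant being tested
value_per_conversion = 80  # dollars

# Half the traffic sees the challenger instead of the optimal control.
challenger_visitors = visitors_per_week * weeks * 0.5
lost_conversions = challenger_visitors * (control_cr - challenger_cr)
regret = lost_conversions * value_per_conversion

print(f"Regret accumulated during the test: ${regret:,.0f}")  # $8,000 in this toy example
```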


This is related to but still distinct from another cost: opportunity costs.


The time you spent setting up, running, and analyzing an experiment could be spent doing something else. This is especially important and impactful at the startup stage, when ruthless prioritization is the difference between a sinking ship and another year above water.

An A/B test also usually has a run up period of user research that leads to a test hypothesis. This could include digital analytics analysis, on-site polls using Qualaroo, heatmap analysis, session replay video, or user tests (including Copytesting). This research takes time, too.

The expected value of an A/B test is the expected value of its profit minus the expected value of its cost (and remember, expected value is calculated by multiplying each of the possible outcomes by the likelihood each outcome will occur and then summing all of those values).


If the expected value of an A/B test isn’t positive, it’s not worth running it.

For example, if the average A/B test costs $1,000 and the average expected value of an A/B test is $500, it’s not economically feasible to run the test. Therefore, you can reduce the costs of the experiment, or you can hope to increase the win rate or the average uplift per win to tip the scales in your favor.
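Here’s that expected-value arithmetic as a quick sketch. The win rate, win value, and cost below are made-up placeholders; plug in your own program averages:

```python
# All inputs are hypothetical placeholders; plug in your own program averages.
win_rate = 0.20              # fraction of tests that produce a winner
avg_value_of_a_win = 4_000   # incremental dollars when a test wins
avg_value_of_a_loss = 0      # assume losing variants are simply not shipped
cost_per_test = 1_000        # tooling, research, design, dev, and analysis time

expected_payoff = win_rate * avg_value_of_a_win + (1 - win_rate) * avg_value_of_a_loss
expected_value = expected_payoff - cost_per_test

print(f"Expected value per test: ${expected_value:,.0f}")
# If this is negative, cut costs, raise the win rate, or chase bigger wins.
```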

A/B testing is a tool used to reduce uncertainty in decision making. User research is a tool used to reduce uncertainty in what you test with the hope that what you test has a higher likelihood of winning and winning big. Therefore, you want to know the marginal value of additional information collected (which is a cost) and know when to stop collecting additional information as you hit the point of diminishing returns. Too much cost outweighs the value of A/B testing as a decision making tool.

This leads to the last open question: can we predict which ideas are more likely to win?

What Leads to Better A/B Testing Ideas

It’s common practice to prioritize A/B tests. After all, you can’t run them all at once.

Prioritization usually falls on a few dimensions: impact, ease, confidence, or some variation of these factors.

  • Impact is quantitative. You can figure out based on the traffic to a given page, or the number of users that will be affected by a test, what the impact may be.
  • Ease is also fairly objective. There’s some estimation involved, but with some experience you can estimate the cost of setting up a test in terms of complexity, design and development resources, and the time it will take to run.
  • Confidence (or “potential” in the PIE model) is subjective. It takes into account the predictive capabilities of the individual proposing the test. “How likely is it that this test will win in comparison to other ideas,” you’re asking.
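To make that prioritization concrete, here’s a toy scoring sketch. The ideas, the 1-10 scores, and the plain averaging are all invented, and real ICE or PIE frameworks weight the dimensions differently:

```python
# Hypothetical backlog scored 1-10 on impact, ease, and confidence (ICE-style).
ideas = [
    {"name": "Rewrite pricing page headline", "impact": 8, "ease": 9, "confidence": 5},
    {"name": "Redesign signup flow", "impact": 9, "ease": 3, "confidence": 6},
    {"name": "Add social proof to homepage", "impact": 6, "ease": 8, "confidence": 7},
]

def ice_score(idea):
    # A plain average; some teams multiply the dimensions or weight impact more heavily.
    return (idea["impact"] + idea["ease"] + idea["confidence"]) / 3

for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{ice_score(idea):.1f}  {idea['name']}")
```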

How does one develop the fingerspitzengefühl to reliably predict winners? Depends on your belief system, but some common methods include:

  • Bespoke research and rational evidence
  • Patterns, competitor examples, historical data (also rational evidence)
  • Gut feel and experience

In the first method, you conduct research and analyze data to come up with hypotheses based on evidence you’ve collected. Forms of data collection tend to be from user testing, digital analytics, session replays, polls, surveys, or customer interviews.


Patterns, historical data, and inspiration from competitors are also forms of evidence collection, but they don’t presuppose original research is superior to meta-data collected from other websites or from historical data.

Here, you can group tests of similar theme or with similar hypotheses, aggregate and analyze their likelihood of success, and prioritize tests based on confidence using meta-analyses.


For example, you could group a dozen tests you’ve run on your own site in the past year having to do with “social proof” (for example, adding micro-copy that says “trusted by 10,000 happy customers”).

You could include data from competitors or from an experiment pattern aggregator like GoodUI. Strong positive patterns could suggest that, despite differences in context, the underlying idea or theme is strong enough to warrant prioritizing this test above others with weaker pattern-based evidence.
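As a rough illustration of this approach, you could aggregate your own historical results by theme and use the theme-level win rates as one input to prioritization. The data below is invented and real meta-analyses are far more careful, but the mechanics look something like this:

```python
from collections import defaultdict

# Invented historical results: (theme, won?) for past experiments on your own site.
past_tests = [
    ("social proof", True), ("social proof", False), ("social proof", True),
    ("urgency", False), ("urgency", False),
    ("form length", True), ("form length", False),
]

wins, totals = defaultdict(int), defaultdict(int)
for theme, won in past_tests:
    totals[theme] += 1
    wins[theme] += int(won)

for theme in totals:
    print(f"{theme}: {wins[theme]}/{totals[theme]} wins ({wins[theme] / totals[theme]:.0%})")
# Themes with stronger historical win rates might get prioritized more confidently,
# with the usual caveat that small samples and different contexts limit transfer.
```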

Patterns can also include what we call “best practices.” While we may not always quantify these practices through meta-analyses like GoodUI does, there are indeed many common practices that have been developed by UX experts and optimizers over time. [2]

Finally, some believe that you simply develop an eye for what works and what doesn’t through experience. After years of running tests, you can spot a good idea from a bad.

As much as I’m trying to objectively lay out the various belief systems and strategies, I have to tell you, I think the last method is silly.

As Matt Gershoff put it, predicting outcomes is basically a random process, so those who end up being ‘very good’ at forecasting are probably outliers or exemplifying survivorship bias (the same phenomenon covered in Fooled by Randomness by Nassim Taleb with regard to stock pickers).

Mats Einarsen adds that this will reward cynicism: most tests don’t win, so one can always improve prediction accuracy by being a curmudgeon.

It’s also possible to believe that additional information or research does not improve your chance of setting up a winning A/B test, or at least not enough to warrant the additional cost in collecting it.

In this world of epistemic humility, prioritizing your tests based on the confidence you have in them doesn’t make any sense. Ideas are fungible, and anyway, you’d rather be surprised by a test you didn’t think would win than validate your preconceived notions.

In this world, we can imagine ideas being somewhat random and evenly distributed, some winning big and some losing big, but most doing nothing at all.

This view has backing in various fields. Take, for instance, this example from The Mating Mind by Geoffrey Miller (bolding mine):

“Psychologist Dean Keith Simonton found a strong relationship between creative achievement and productive energy. Among competent professionals in any field, there appears to be a fairly constant probability of success in any given endeavor. Simonton’s data show that excellent composers do not produce a higher proportion of excellent music than good composers — they simply produce a higher total number of works. People who achieve extreme success in any creative field are almost always extremely prolific. Hans Eysenck became a famous psychologist not because all of his papers were excellent, but because he wrote over a hundred books and a thousand papers, and some of them happened to be excellent. Those who write only ten papers are much less likely to strike gold with any of them. Likewise with Picasso: if you paint 14,000 paintings in your lifetime, some of them are likely to be pretty good, even if most are mediocre. Simonton’s results are surprising. The constant probability-of-success idea sounds very counterintuitive, and of course there are exceptions to this generalization. Yet Simonton’s data on creative achievement are the most comprehensive ever collected, and in every domain that he studied, creative achievement was a good indicator of the energy, time, and motivation invested in creative activity.”

So instead of trying to predict the winners before you run the test, you throw out the notion that that’s even possible, and you just try to run more options and get creative in the options you’ll run.

As I’ll discuss in the “A/B testing frequency” section, this accords with something like Andrew Anderson’s “Discipline Based Testing Methodology,” but also with what I call the “Evolutionary Tinkering” strategy. [3]

Either you can try to eliminate or crowd out lower probability ideas, which implies you believe you can predict with a high degree of accuracy the outcome of a test.

Or you can iterate more frequently or run more options, essentially increasing the probability that you will find the winning variants.

Summary on A/B testing Strategy Assumptions

How you deal with uncertainty is one factor that could alter your A/B testing strategy. Another one is how you think about costs vs rewards. Finally, how you determine the quality and predictability of ideas is another factor that could alter your approach to A/B testing.

As we walk through various A/B testing strategies, keep these things in mind:

  • Attitudes and beliefs about information and certainty
  • Attitudes and beliefs about predictive validity and quality of ideas
  • Attitudes about costs vs rewards and expected value, as well as quantitative limitations on how many tests you can run and detectable effect sizes.

These factors will change one or both of the following:

  • What you choose to A/B test
  • How you run your A/B tests, singularly and at a program level

What Are the Goals of A/B Testing?

One’s goals in running A/B tests can differ slightly, but they all tend to fall under one or multiple of these buckets:

  1. Increase/improve a business metric
  2. Risk management/cap downside of implementations
  3. Learn things about your audience/research

Of course, running an A/B test will naturally accomplish all of these goals. Typically, though, you’ll be more interested in one than the others.

For example, you hear a lot of talk around this idea that “learning is the real goal of A/B testing.” This is probably true in academia, but in business that’s basically total bullshit.

You may, periodically, run an A/B test solely to learn something about your audience, though this is typically done with the assumption that the learning will help you either grow a business metric or cap risk later on.

Most A/B tests in a business context wouldn’t be run if there weren’t the underlying goal of improving some aspect of your business. No ROI expectation, no buy-in and resources.

Therefore, there’s not really an “earn vs learn” dichotomy (with the potential exclusion of algorithmic approaches like bandits or evolutionary algorithms); every test you run will teach you something, but more importantly, the primary goal is to add business value.

So if we assume that our goals are either improvement or capping the downside, then we can use these goals to map onto different strategic approaches to experimentation.

The Three Levers of A/B Testing Strategy Success

Most companies want to improve business metrics.

Now, the question becomes, “what aspects of A/B testing can we control to maximize the business outcome we hope to improve?” Three things:

  1. The number of tests (or variants) you run (aka frequency)
  2. The % of winning tests (aka win rate)
  3. The effect size of winning tests (aka effect size)
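Before digging into each lever, here’s a toy model of how the three multiply together. The numbers are purely illustrative; the point is only that different strategies push on different levers:

```python
def program_value(tests_per_year, win_rate, avg_value_per_win):
    # Expected annual value; ignores costs and compounding to keep the point simple.
    return tests_per_year * win_rate * avg_value_per_win

print(f"Baseline:            ${program_value(50, 0.15, 10_000):,.0f}")
print(f"2x test volume:      ${program_value(100, 0.15, 10_000):,.0f}")
print(f"2x win rate:         ${program_value(50, 0.30, 10_000):,.0f}")
print(f"2x average win size: ${program_value(50, 0.15, 20_000):,.0f}")
# Doubling any single lever doubles the output, which is why different strategies
# simply choose different levers to push on.
```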

1. A/B testing frequency – Number of Variants

The number of variants you test could be number of A/B tests or the number of variants in an A/B/n test – and there’s debate between the two approaches here – but the goal of either is to maximize the number of “at bats” or attempts at success.

This can be for two reasons.

First, to cap the downside and manage risk at scale, you should test everything you possibly can. No feature or experience should hit production without first making sure it doesn’t worsen your business metrics. This is common in large companies with mature experimentation programs, such as booking.com, Airbnb, Facebook, or Microsoft.

Second, tinkering and innovation requires a lot of attempts. The more attempts you make, the greater the chance for success. This is particularly true if you believe ideas are fungible — i.e. any given idea is not special or more likely than any other to move the needle. My above quote from Geoffrey Miller’s “The Mating Mind” illustrated why this is the case.


Another reason for this approach: a shitload of studies (the appropriate scientific word for “a large quantity”) have shown that most A/B tests are inconclusive, and that the few wins tend to pay for the program as a whole, not unlike venture capital portfolios.

Take, for example, this histogram Experiment Engine (since acquired by Optimizely) put out several years ago.


Most tests hover right around that 0% mark.

Now, it may be the case that all of these tests were run by idiots and you, as an expert optimizer, could do much better.

Perhaps.

But this sentiment is replicated by both data and experience.

Take, for example, VWO’s research that found 1 out of 7 tests are winners. A 2009 paper pegged Microsoft’s win rate at about 1 out of 3. And in 2017, Ronny Kohavi wrote:

“At Google and Bing, only about 10% to 20% of experiments generate positive results. At Microsoft as a whole, one-third prove effective, one-third have neutral results, and one-third have negative results.”

I’ve also seen a good amount of research that wins we do see are often illusory; false positives due to improper experiment design or simply lacking in external validity. That’s another issue entirely, though.

Perhaps your win rate will be different. For example, if your website has been neglected for years, you can likely get many quick wins using patterns, common sense, heuristics, and some conversion research. Things get harder when your digital experience is already good, though.

If we’re to believe that most ideas are essentially ineffective, then it’s natural to want to run more experiments. This increases your chance of big wins simply due to more exposure. This is a quote from Nassim Taleb’s Antifragile (bolding mine):

“Payoffs from research are from Extremistan; they follow a power-law type of statistical distribution, with big, near-unlimited upside but, because of optionality, limited downside. Consequently, payoff from research should necessarily be linear to number of trials, not total funds involved in the trials. Since the winner will have an explosive payoff, uncapped, the right approach requires a certain style of blind funding. It means the right policy would be what is called ‘one divided by n’ or ‘1/N’ style, spreading attempts in as large a number of trials as possible: if you face n options, invest in all of them in equal amounts. Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: “The payoff can be so large that you can’t afford not to be in everything.”

Maximizing the number of experiments run also deemphasizes ruthless prioritization based on subjective ‘confidence’ in hypotheses (though not entirely) and instead seeks to cheapen the cost of experimentation and enable a broader swath of employees to run experiments.

The number of variants you test is capped by the amount of traffic you have, your resources, and your willingness to try out and source ideas. These limitations can be represented by testing capacity, velocity, and coverage.
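For a back-of-the-napkin sense of that traffic cap, here’s a hedged sketch using the standard normal-approximation sample size formula for comparing two conversion rates. The baseline rate, lift, and traffic figures are placeholders:

```python
from scipy.stats import norm

def visitors_needed_per_variant(baseline_cr, relative_lift, alpha=0.05, power=0.80):
    # Rough two-sided sample size for detecting a relative lift in a conversion rate.
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    return int(((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (p2 - p1) ** 2) + 1

n = visitors_needed_per_variant(baseline_cr=0.03, relative_lift=0.10)
monthly_traffic = 100_000  # hypothetical traffic to the page being tested
print(f"~{n:,} visitors per variant; a simple A/B test needs "
      f"~{2 * n / monthly_traffic:.1f} months of this page's traffic")
```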


Claire Vo, one of the sharpest minds in experimentation and optimization, gave a brilliant talk on this at CXL Live a few years ago.

2. A/B testing win rate

The quality of your tests matters, too. Doesn’t matter if you run 10,000 tests in a year if none of them move the needle.

While many people may think running a high tempo testing program is diametrically opposed to test quality, I don’t think that’s necessarily the case. All you need is to make sure your testing is efficient, your data is trustworthy, and you’re focusing on the impactful areas of your product, marketing, or website.

Still, if you’re focused on improving your win rate (and you believe you can predict the quality of ideas or improve the likelihood of success), it’s likely you’ll run fewer tests and place a higher emphasis on research and crafting “better” tests.

As I mentioned above, there are two general ways that optimizers try to increase their win rate: research and meta-analysis patterns.

Conversion research

Research includes both quantitative and qualitative research – surveys, heat maps, user tests and Google Analytics. One gathers enough data to diagnose what is wrong and potentially some data to build hypotheses as to why it is wrong.

See the “ResearchXL model,” as well as most CRO agencies’ and in-house programs’ approach. This approach is what I’ll call the “Doctor’s Office Strategy.” Before you begin operating on a patient at random, you first want to take the time to diagnose what’s wrong with them.

Patterns, best practices, and observations

Patterns are another source of data.

You can find experiences that have been shown to work in other contexts and infer transferability onto your situation. Jakub Linowski, who runs GoodUI, is an advocate of this approach:

“There are thousands and thousands of experiments being run and if we just pay attention to all that kind of information and all those experiments, there’s most likely some things that repeat over and over that reproduced are largely generalizable. And those patterns I think are very interesting for reuse and exploitation across projects.”

Other patterns can be more qualitative. One can read behavioral psychology studies, Cialdini’s Influence, or just look at other companies’ websites, take what they seem to be doing, and try it on your own site.

Both the research approach and the patterns approach have this in common: they assume that collecting the right quality and quantity of information leads to better experiment win rates.

Additionally, the underlying ‘why’ of a test (sometimes called the ‘hypothesis’) is very important in these strategies. By contrast, in something like the Discipline-Based Testing Methodology, the narrative or the “why” doesn’t matter; all that matters is that the test is efficient and makes money. [4] [4.5]

3. Effect Size of A/B testing Wins

Finally, the last input is the effect size of a winning test. Patterns and research may help predict if a test will win, but not by how much.

This input, then, typically involves the most surprise and serendipity. It still requires that you diagnose the areas of exposure that have the highest potential for impact (e.g. running a test on a page with 1000 visitors is worse than running a test on a page with 1,000,000).

Searching for big wins also requires a bit of “irrational” behavior. As Rory Sutherland says, “Test counterintuitive things because no one else will!” [5]

The mark of a team working to increase the magnitude of a win is a willingness to try out wacky, outside-the-box, creative ideas. Not only do you want more “at bats” (thus exposing yourself to more potential positive black swans), but you want to increase the beta of your options, or the diversity and range of feasible options you test. This is sometimes referred to as “innovative testing” vs. incremental testing. To continue the baseball analogy, you’re seeking home runs, not just grounders to get on base.

All of us want bigger wins as well as a greater win rate. How we go about accomplishing those things, though, differs.

CXL’s ResearchXL model seeks to maximize the likelihood of a winning test through understanding the users. Through research, one can hone in on high impact UX bottlenecks and issues with the website, and use further research to ideate treatments.

Andrew Anderson’s Discipline Based Testing Methodology also diagnoses high impact areas of the property, likely through quantitative ceilings. This approach, though, ‘deconstructs’ the proposed treatments: instead of leaning on research or singular experiences, it starts from the assumption that we don’t know what will work and that, in fact, being wrong is the best possible thing that can happen. As Andrew wrote:

“The key thing to think about as you build and design tests is that you are maximizing the beta (range of feasible options) and not the delta. It is meaningless what you think will win, it is only important that something wins. The quality of any one experience is meaningless to the system as a whole.

This means that the more things you can feasibly test while maximizing resources, and the larger the range you test, the more likely you are to get a winner and more likely to get a greater outcome. It is never about a specific test idea, it is about constructing every effort (test) to maximize the discovery of information.”

In this approach, then, you don’t just want to run more A/B tests; you want to run the maximum number of variants possible, including some that are potentially “irrational.” One can only hope that Comic Sans wins a font test, because we can earn money from the surprise.

Reducing the Cost of Experimentation Increases Expected Value, Always

To summarize, you can increase the value from your testing program in two ways: lower the cost, or increase the upside.

Many different strategies exist to increase the upside, but all cost reduction strategies look similar:

  • Invest in accessible technology
  • Make sure your data is accessible and trustworthy
  • Train employees on experimentation and democratize the ability to run experiments

The emphasis here isn’t primarily on predicting wins or win rate; rather, it’s on reducing the cost, organizationally and technically, of running experiments.

Sophisticated companies with a data-driven culture usually have internal tools, data pipelines, and center of excellence programs that encourage, enable, and educate others to run their own experiments (think Microsoft, Airbnb, or booking.com).

When you seek to lower the cost of experimentation and run many attempts, I call that the “Evolutionary Tinkering Strategy.”

No one A/B test will make or break you, but the process of testing a ton of things will increase the value of the program over time and, more importantly, will let you avoid shipping bad experiences.

This is different than the Doctor’s Office Strategy for two reasons: goals and resources.

Companies employing the Doctor’s Office Strategy are almost always seeking to improve business metrics, and they almost always have a very real upper limit on traffic. Therefore, it’s crucial to avoid wasting time and traffic testing “stupid” ideas (I use quotes because “stupid” ideas may end up paying off big, but it’s usually a surprise if so).  [5]

The “get bigger wins” strategy is often employed due to both technical constraints (limited statistical power to detect smaller wins) and opportunity costs (small wins not worth it from a business perspective).

Thus, I’ll call this the “Growth Home Run Strategy.”

We’re not trying to avoid a strikeout; we’re trying to hit a home run. Startups and growth teams often operate like this because they have limited customer data to do conversion research, patterns and best practices tend to be implemented directly rather than tested, and opportunity costs mean you want to spend your time making bigger changes and seeking bigger results.

This approach is usually decentralized and a bit messier. Ideas can come from anywhere — competitors, psychological studies, research, other teams, strikes of shower inspiration, etc. With greater scale, this strategy usually evolves into the Evolutionary Tinkering Strategy as the company becomes more risk averse as well as capable of experimenting more frequently and broadly.

Conclusion

This was a long article covering all the various approaches I’ve come across from my time working in experimentation. But at the end of the journey, you may be wondering, “Great, but what strategy does Alex believe in?”

It’s a good question.

For one, I believe we should be more pragmatic and less dogmatic. Good strategists know the rules but are also fluid. I’m willing to apply the right strategy for the right situation.

In an ideal world, I’m inclined towards Andrew Anderson’s Discipline-Based Testing Methodology. This would assume I have the traffic and political buy-in to run a program like that.

I’m also partial to strategies that democratize experimentation, especially at large companies with large testing capacity. I see no value in gatekeeping experimentation to a single team or to a set of approved ideas that “make sense.” You’re leaving a lot of money on the table if you always want to be right.

If I’m working with a new client or an average eCommerce website, I’m almost always going to employ the ResearchXL model. Why? I want to learn about the client’s business, the users, and I want to find the best possible areas to test and optimize.

However, I would also never throw away best practices, patterns, or even ideas from competitors. I’ve frustratingly sat through hours of session replays, qualitative polls, and heat maps, only to have “dumb” ideas I stole from other websites win big.

My ethos: experimentation is the lifeblood of a data-driven organization, being wrong should be celebrated, and I don’t care why something won or where the idea came from. I’m a pragmatist and just generally an experimentation enthusiast.

Notes

[1]

How to run an A/B test is a subject for a different article (or several, which I’ve written about in the past for CXL and will link to in this paragraph). I’ve touched on a few variations here, including the question of whether you should run many subsequent tests or one single A/B/n test with as many variants as possible. Other technical test methodologies alter the accepted levels of risk and uncertainty. Such differences include one-tail vs two-tail testing, multivariate vs A/B tests, bandit algorithms or evolutionary algorithms, or flexible stopping rules like sequential testing. Again, I’m speaking to the strategic aspects of experimentation here, less so on technical differences. Though, they do relate.
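For readers who want the flavor of those technical differences, here’s a minimal sketch of a two-proportion z-test, where the one-tailed vs. two-tailed choice is just a different way of reading the same z statistic. The counts are made up:

```python
from math import sqrt
from scipy.stats import norm

# Made-up results: conversions / visitors for control (A) and variant (B).
conv_a, n_a = 500, 10_000
conv_b, n_b = 560, 10_000

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

p_two_tailed = 2 * (1 - norm.cdf(abs(z)))  # "is B different from A?"
p_one_tailed = 1 - norm.cdf(z)             # "is B better than A?"

print(f"z = {z:.2f}, two-tailed p = {p_two_tailed:.3f}, one-tailed p = {p_one_tailed:.3f}")
```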

[2]

Best practices are either championed or derided, but something being considered a “best practice” is just one more data input you can use to choose whether or not to test something and how to prioritize it. As Justin Rondeau put it, a “best practice” is usually just a “common practice,” and there’s nothing wrong with trying to match customers’ expectations. In the early stages of an optimization program, you can likely build a whole backlog off of best practices, which some call low hanging fruit. However, if something is so obviously broken that fixing it introduces almost zero risk, then many would opt to skip the test and just implement the change. This is especially true of companies with limited traffic, and thus, higher opportunity costs.

[3]

This isn’t precisely true. Andrew’s framework explicitly derides “number of tests” as an important input. He, instead, optimizes for efficiency and wraps up as many variants in a single experiment as is feasible. The reason I wrap these two approaches up is, ideologically at least, they’re both trying to increase the “spread” of testable options. This is opposed to an approach that seeks to find the “correct” answer before running the test, and then only uses the test to “validate” that assumption.

[4]

Do you care why something won? I’d like to argue that you shouldn’t. In any given experiment, there’s a lot more noise than there is signal with regard to the underlying reasons for behavior change. A blue button could win against a red one because blue is a calming hue and reduces cortisol. It could also win because the context of the website is professional, and blue is prototypically associated with professional aesthetic. Or perhaps it’s because blue contrasts better with the background, and thus, is more salient. It could be because your audiences like the color blue better. More likely, no one knows or can ever know why blue beat red. Using a narrative to spell out the underlying reason is more likely to lead you astray, not to mention waste precious time storytelling. Tell yourself too many stories, and you’re liable to limit the extent of your creativity and the options you’re willing to test in the future. See: narrative fallacy.

[4.5]

Do we need to have an “evidence-based hypothesis”? I don’t think so. After reading Against Method, I’m quite convinced that the scientific method is much messier than we were all taught. We often stumble into discoveries by accident. Rory Sutherland, for instance, wrote about the discovery of aspirin:

“Scientific progress is not a one-way street. Aspirin, for instance, was known to work as an analgesic for decades before anyone knew how it worked. It was a discovery made by experience and only much later was it explained. If science didn’t allow for such lucky accidents, its record would be much poorer – imagine if we forbade the use of penicillin, because its discovery was not predicted in advance? Yet policy and business decisions are overwhelmingly based on a ‘reason first, discovery later’ methodology, which seems wasteful in the extreme.”

More germane to A/B testing, he summarized this as follows:

“Perhaps a plausible ‘why’ should not be a pre-requisite in deciding a ‘what,’ and the things we try should not be confined to those things whose future success we can most easily explain in retrospect.”

[5]

An Ode to “Dumb Ideas”

“To reach intelligent answers, you often need to ask really dumb questions.” – Rory Sutherland

Everyone should read Alchemy by Rory Sutherland. It will shake up your idea of where good ideas (and good science) come from.

Early in the book, Sutherland tells of a test he ran with four different envelopes used by a charity to solicit donations. The delivery was randomized across four sample groups: 100,000 envelopes announced that they had been delivered by volunteers, 100,000 encouraged people to complete a form that meant their donation would be boosted by a 25% tax rebate, 100,000 were in better quality envelopes, and 100,000 were in portrait format. The only “rational” one of these was the “increase donation by 25%” option, yet that reduced contributions by 30% compared to the plain control. The other three variants increased donations by over 10%.

As Sutherland summarized:

“To a logical person, there would have been no point in testing three of these variables, but they are the three that actually work. This is an important metaphor for the contents of this book: if we allow the world to be run by logical people, we will only discover logical things. But in real life, most things aren’t logical – they are psycho-logical.”

The post What’s the Ideal A/B Testing Strategy? appeared first on Alex Birkett.

]]>
The 7 Pillars of Data-Driven Company Culture https://www.alexbirkett.com/data-driven-company-culture/ Fri, 31 Jul 2020 17:25:58 +0000 https://www.alexbirkett.com/?p=1139 “Data-driven culture” is a phrase you hear thought leaders speak about at conferences and executives fondly bestow upon their organizations. But like “freedom,” “morality,” and “consciousness,” this elusive phrase seems to evade universal understanding. That’s to say: what the hell does a “data-driven company culture” even mean? What is a “Data-Driven Company Culture,” Anyway? A ... Read more

The post The 7 Pillars of Data-Driven Company Culture appeared first on Alex Birkett.

]]>
“Data-driven culture” is a phrase you hear thought leaders speak about at conferences and executives fondly bestow upon their organizations. But like “freedom,” “morality,” and “consciousness,” this elusive phrase seems to evade universal understanding.

That’s to say: what the hell does a “data-driven company culture” even mean?

What is a “Data-Driven Company Culture,” Anyway?

A data-driven company, in simple terms, is a company whose implicit hierarchy of values leads individuals within the company to make decisions using data. (1)

Now, there’s a lot of nuance here.

What kind of data? Who gets to use data and make decisions? Which decisions are made with data — all of them?

How Data-Driven Companies Cross the Street

Imagine I’m crossing the street, and I need to use some input or inputs to determine when and how to cross.

I could be data-driven by looking at any single data point and using that to anchor (or justify) my decisions. For instance, maybe my data point is what color the light is (green means I go, red means I wait).

I could also be data-driven by including further variables such as the speed and direction of the wind, the position of the sun, the color of the eyes of the people on the other side of the street, or perhaps most importantly, whether or not there is a vehicle careening into the intersection and putting my street crossing in danger.

Perhaps I’m not the only one crossing the street, and in fact, I’ve got to consult with a small group of friends about when we decide to cross. We each contribute our various data points as well as a heavy dose of persuasion and storytelling to convince the group of our idea on when to cross. Only when we reach an agreement do we cross the street.

Or maybe one friend of mine has much more experience crossing streets, so he takes in his data points and blends that with his experience in order to come to a conclusion. In this case, I just follow the directions of my wise friend and hope that his leadership is truly driven by good data (and not something whimsical or poorly structured, such as his being driven by the desire to get to the destination as fast as possible without regard for data points like incoming traffic).

Now crossing the street is starting to resemble a Dilbert cartoon.

I could also use data to consider which street I want to cross in the first place. If I want to get to my gym on 45th street, it doesn’t make much sense crossing a street in the other direction, even if the weather is pleasant and the street is empty.

So I say this: there’s no unified definition of a ‘data-driven company’ — it means something different to everyone.

Airbnb leads by design but clearly runs tons of experiments as well. Google famously tested 41 shades of blue. Booking.com lets every single employee run website and product experiments, and they’ve built an internal database so anyone can search archived experiments. Your local startup might consider it data-driven that they talk to customers before shipping features, and they’d be right. Any of these can be called ‘data-driven.’

While that leads us to an impasse and a sense of cultural relativism (who’s to critique another’s data-driven culture?!), I believe some companies are deluding themselves and their employees when they say they’re ‘data-driven.’ (2)

There are certain pillars a true data-driven company must have in order to implicitly and explicitly elevate data-driven decision making to the most revered importance in a culture.

The 7 Pillars of a Data-Driven Company Culture

There are two types of ‘data-driven companies’ – those who say they’re data-driven and those who actually are.

In fake data-driven companies:

  • Decisions are made top down by HiPPOs
  • Data is used to justify decisions, never to invalidate or disprove preconceived notions
  • Data integrity is never questioned and validity is presumed in all cases
  • Dashboards, reports, and charts are used for storytelling and success theater, not to drive decisions or improve decision making

I asked Andrew Anderson about what makes a company truly data-driven vs. fake data-driven, and he explained well what most companies mean:

“What most companies mean when they say they are “data driven” is that they have analytics/Business Intelligence (BI) and that they grab data to justify any action they want to do. It is just another layer on top of normal business operations which is used to justify actions. In almost all cases the same people making the decisions then use whatever data they can manipulate to show how valuable their work was.

In other words data is used as a shield for people to justify actions and to show they were valuable.”

So what’s a real data-driven culture look like? In my opinion, you need these pillars in place:

  1. Ensure Data is Precise, Accessible, and Trustworthy
  2. Invest in Data Literacy for Everyone
  3. Define Key Success Metrics
  4. Kill Success Theater
  5. Be Comfortable with Uncertainty (Say “I Don’t Know”)
  6. Build and Invest in Tooling
  7. Empower Autonomy and Experimentation

Andrew explains further:

“Actual data-driven culture is one where data is used as a measure of all possible actions. Teams are driven by how many options they can execute on, how they use resources, and how big of a change they can create to predetermined KPIs. It is used as a sword to cut through opinion and “best practices” and people are measured based on how many actions they cut through and how far they move the needle.”

Now let’s walk through each of these data-driven company pillars.

1. Ensure Data is Precise, Accessible, and Trustworthy

As with many areas of life, the fundamentals are what matters. And if you can’t trust your data quality, it’s totally worthless.

This is true both directly and indirectly.

  • Directly, if you don’t have data precision (as opposed to data accuracy, a pipe dream), your data-driven decisions will be hindered. Worse yet, you’ll be driving highly confidently in the wrong direction because of the use of data; at least with opinions you have to admit epistemic humility.
  • Indirectly, imprecise data erodes cultural trust in data-driven decisions, so with time your company will revert to an opinion-driven hierarchy.

Precise data is one facet in the foundational layer of a good data culture, but you also want to have complete data. If, for instance, you can only track the behavior of a subset of users, your decisions will be based on a sampling bias, and thus still may lead you to poorer decisions.

Finally, data access: data-driven companies have accessible data. Now, there’s a whole field of data management or data governance that seeks to delineate responsibility for data infrastructure. Perhaps not everyone should be able to write new rows to a database, but in my opinion everyone should be able to query it.

Beyond that, accessing data should be made as clear and straightforward as possible. Large companies especially should look into data cataloging, good infrastructure resources, and data literacy.

2. Invest in Data Literacy for Everyone

CFO asks CEO: “What happens if we invest in developing our people and they leave us?”

CEO: “What happens if we don’t, and they stay?”

While hiring deeply trained specialists can help spur data-driven decision making, in reality you want everyone who is using data to understand how to use it.

Most data malpractices are probably not done out of malevolence, but rather ignorance. Without proper education and data literacy training, you can only fault the organization for such a heterogeneous distribution of data skills in the company.

For example, in an HBR article titled “Building a Culture of Experimentation,” Stefan Thomke explains how Booking.com educates everyone at the company and empowers them to run experiments by putting new hires through a rigorous onboarding process which includes experimentation training (in addition to giving them access to all testing tools).

In the same article, he covered IBM’s then head of marketing analytics, Ari Sheinkin, who brought the company from running only 97 experiments in 2015 to running 2,822 in 2018.

How’d they make the change? In addition to good tooling, it was a lot of education and support:

“He installed easy-to-use tools, created a center of excellence to provide support, introduced a framework for conducting disciplined experiments, offered training for everyone, and made online tests free for all business groups. He also conducted an initial ‘testing blitz’ during which the marketing units had to run a total of 30 online experiments in 30 days. After that he held quarterly contests for the most innovative or most scalable experiments.”

Data-driven companies invest in education for their employees. I know anecdotally that Facebook, at least at one point in time, put their growth employees through a rigorous data analytics training during onboarding. And the famous example here is Airbnb, who runs Data University to train employees in the data-driven arts.


3. Define Key Success Metrics

Even if you have all the data you could care to access and everyone knows how to use it, people can pull vastly different conclusions from the same data if you haven’t defined your desired outcomes.

In specific instances, this can muddy the results of an A/B test. Imagine, for instance, that you run a test on a landing page flow that walks through three pages: a pricing page to a signup page and then a thank you page.

You change a variable on the pricing page and you want to track that through to increase overall signups, measured by users that reach the thank you page.

Because you want a ‘full picture’ of the data, you log multiple metrics in addition to “conversions” (or users who reached the thank you page). These include bounce rate, click through rate on the pricing page, session duration, and engagement rate on the signup page.

The experiment doesn’t lift conversions, but it lifts click through rate. What do you do?

Or it does lift conversions, but bounce rate actually increases. Does this mean it messed up the user experience?

This muddiness is why you must, before you run the experiment, define an Overall Evaluation Criterion. In other words, what metric will ultimately decide the fate of the experiment?

In broader contexts, many teams can have different incentives, sometimes piecemeal towards a similar end goal (like believing increasing traffic or CTR will increase conversions downstream), but sometimes goals are diametrically opposed. In the latter case, you’ll waste more time and energy figuring out which way to go instead of actually making progress in that direction. What you want to do is define your key success metrics and align your vectors so that everyone is working towards the same goals and has clear indications of progress towards them.
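One simple way to encode “decide the success metric up front” is a rule where the OEC alone decides and other metrics can only veto. The metric names, numbers, and thresholds below are illustrative, not a standard:

```python
# Illustrative readout; in practice these numbers come from your analytics pipeline.
results = {
    "signup_conversion": {"lift": 0.04, "significant": True},   # the OEC
    "bounce_rate": {"lift": 0.01, "significant": False},        # guardrail (lower is better)
    "pricing_page_ctr": {"lift": 0.08, "significant": True},    # secondary, informational only
}

OEC = "signup_conversion"
GUARDRAILS = ["bounce_rate"]  # metrics that can veto a ship decision but never "win" a test

def decide(results):
    oec = results[OEC]
    # Naive rule: any statistically significant increase in a lower-is-better guardrail vetoes.
    guardrail_harmed = any(
        results[g]["significant"] and results[g]["lift"] > 0 for g in GUARDRAILS
    )
    if oec["significant"] and oec["lift"] > 0 and not guardrail_harmed:
        return "ship"
    return "do not ship (or iterate)"

print(decide(results))  # "ship" with this made-up readout
```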


4. Kill Success Theater

A culture that celebrates nothing would be soulless; a culture that only celebrates and talks about success is insidious and subtly toxic.

Success theater is, at its core, an informal operating system that says to employees: “you’re expected to win, and you should only discuss wins. Failures need not be exemplified.”

What happens when employees aren’t incentivized to honestly share negative news or results? A cascading torrent of bad stuff:

  • You limit innovation and creativity due to fear of failure.
  • You cover up potentially disruptive and damaging problems in order to save face.
  • You incentivize data cherry-picking and intellectual dishonesty, which erodes cultural trust in data and each other.
  • You cut corners and make poor long term decisions (or even unethical decisions) in order to hit your numbers.

Again, don’t fear the champagne, but don’t punish the messenger if you see numbers that don’t look great.

Further, stop incentivizing everyone to be right and successful 100% of the time. Your deepest learnings and biggest light bulb moments come from shock, surprise, disappointment, and being “wrong.” Embrace it. The best data-driven companies would never expect to bat 1.000.

5. Be Comfortable with Uncertainty (Say “I Don’t Know”)

The opposite of a data-driven culture is one where the decision-making process is driven by HiPPOs (or worse, committee).

In “Building a Culture of Experimentation,” Stefan Thomke wrote of a radical experimentation idea at Booking.com: redesigning the entire home page. This excerpt says it all (bolding is mine):

“Gillian Tans, Booking.com‘s CEO at the time, was skeptical. She worried that the change would cause confusion among the company’s loyal customers. Lukas Vermeer, then the head of the firm’s core experimentation team, bet a bottle of champagne that the test would ‘tank’ — meaning it would drive down the company’s critical performance metric: customer conversion, or how many website visitors made a booking. Given that pessimism, why didn’t senior management just veto the trial? Because doing so would have violated one of Booking.com‘s core tenets: Anyone at the company can test anything — without management’s approval.”

Some companies want you to know up front what’s going to work and what isn’t. They won’t run an experiment if there’s not a valid reason or ‘evidence’ that suggests it has high probability of winning. Similarly, you should know ahead of the experiment which segment you want to send a personalized experience to and what the content should look like.

If this is the case, you’re leaving a lot of revenue on the table by avoiding the ‘discovery’ or ‘exploration’ phase of experimentation and data-driven decision making. In pursuit of “evidence-based decision making,” we forget that we don’t always have historical data to support or refute a case, and even when we do, it doesn’t always extrapolate to the situation at hand.

Most of the time, we fear the discovery phase because the “wrong” result might win. But as Andrew Anderson wrote of personalization, “Be open to permutation winning that you never thought of. Being wrong is always going to provide the greatest return.”

Another quote I loved from the HBR article on experimentation culture:

“Everyone in the organization, from the leadership on down, needs to value surprises, despite the difficulty of assigning a dollar figure to them and the impossibility of predicting when and how often they’ll occur. When firms adopt this mindset, curiosity will prevail and people will see failures not as costly mistakes but as opportunities for learning.”

In the same article, David Vismans, CPO at Booking.com, warns that if you don’t value being wrong you’re unlikely to successfully maintain a data-driven culture:

“You need to ask yourself two big questions: How willing are you to be confronted every day by how wrong you are? And how much autonomy are you willing to give to the people who work for you? And if the answer is that you don’t like to be proven wrong and don’t want employees to decide the future of your products, it’s not going to work. You will never reap the full benefits of experimentation.”

The ability to say “I don’t know” and embrace being wrong is the mark of a strong leader.

6. Build and Invest in Tooling

Tools are nothing without the human resources to manage them and the knowledge and education to use them.

However, you need tools, too.

For example, without an experimentation platform, how many tests can you feasibly run per year? Even if you’re hard coding tests ad-hoc each time and have the technical resources to do so, you’re clearly going to miss out on marketing experiments.

Infrastructure is massively important when it comes to data integrity, accessibility, and decision making. That HBR article on experimentation culture explains that any employee at Booking.com can launch an experiment on millions of customers without management’s permission. Roughly 75% of its 1,800 technology and product staffers actively run experiments.

How do they accomplish this? Making tools that are easy to use by everyone:

“Scientifically testing nearly every idea requires infrastructure: instrumentation, data pipelines, and data scientists. Several third-party tools and services make it easy to try experiments, but to scale things up, senior leaders must tightly integrate the testing capability into company processes…

…Standard templates allow them to set up tests with minimal effort, and processes like user recruitment, randomization, the recording of visitors’ behavior, and reporting are automated.”

In addition to the structural tools needed to run and analyze experiments, I admire their commitment to openness and knowledge sharing. For that, they’ve built a searchable repository of past experiments with full descriptions of successes, failures, iterations, and final decisions.

7. Empower Autonomy and Experimentation

At the end of the day, data analysis is a research tool for reducing uncertainty and making better decisions that improve future outcomes. Experimentation is one of the best ways to do that.

Not only do you cap your downside by limiting the damage of a bad variant, but that risk mitigation also leads to increased creativity and therefore innovation.

If you’re able to test an idea with little to no downside, theoretically that means more and better ideas will eventually be tested.

If you’re able to decouple what you test from the clusterfuck of meetings, political persuasion and cajoling, and month long buy-in process that usually precedes any decision at a company, then you’ll also ship faster.

This makes your company both more efficient and more effective. In essence, you’ll ship less bad stuff and more good stuff, reducing losses from bad ideas and exploiting gains from good ones.

No one can predict with certainty which ideas are good and which are bad. Most of us are no better than a coin flip (and those with better odds should re-read Fooled by Randomness lest they get too confident).

Experimentation solves that, but culturally, it also raises the average employee’s decision making ability to the level of an executive’s by way of the great equalizer: the hypothesis test.

That’s scary for most and exciting for some, which is why everyone talks about A/B testing but very few fully embrace it.

To do so would effectively devalue the years of experience that have presumably up to this point meant that your judgement was worth much more than others’ judgement. In an A/B test, it doesn’t matter which variant you thought was going to win, it just matters what value you’re able to derive from an experiment, and how that hits the top line.


As a director at Booking.com said in that wonderful HBR article, “If the test tells you that the header of the website should be pink, then it should be pink.”

Unlike the other pillars I’ve listed in this article, this one isn’t actually about the technical capabilities or even the educational resources you’ve built. It’s about letting go of the need to control every decision by nature of opinion, judgement, and conjecture, and instead empowering employees to run experiments and to let the data lead you to an answer (ahem, to be “data-driven” is to drive with data).

Obviously, you can still choose what to test and you can encase your experiments within principles. For example, dark patterns may win tests, but you can set up rules that state not to test dark patterns in the first place.

If it accords to your principles, though, it’s fair game. I would guide you not to limit the scope of options too much. Quote from the HBR article:

“Many organizations are also too conservative about the nature and amount of experimentation. Overemphasizing the importance of successful experiments may encourage employees to focus on familiar solutions or those that they already know will work and avoid testing ideas that they fear might actually fail. And it’s actually less risky to run a large number of experiments than a small number.”

One of my favorite illustrations of this is Andrew Anderson’s story where he ran a font style test. You’ll never guess which font won.


As Andrew explained:

“From an optimization standpoint, Comic Sans could just as easily be called “Font Variant #5,” but because we all have a visceral hatred of Comic Sans and that does not mesh with our notions of aesthetic beauty, good design, or professional pages, we must come up with an explanation to our cognitive dissonance.

Is there anything inherently wrong with comic sans? No. But from a design perspective it challenges the vision of so many. Did testing make comic sans the better option? No. It just revealed that information to us and made us face that knowledge head-on.

If you are testing in the most efficient way possible, you are going to get these results all the time.”

In any case, I won’t pressure you to test comic sans. If you hate comic sans, don’t test it. But the point here is that a culture of experimentation is the true data-driven culture.

Conclusion

There are gradations of maturity with regards to data-driven company cultures, but the basics need to be in place: if you can’t access trustworthy data, you can’t make data-driven decisions. And if data is overridden by the opinions of tenured executives, what value is it to your company? Other than providing cover for the opinions of HiPPOs, of course.

I want to sum up with what I think is a great definition of a data-driven culture from Andrew Anderson:

“In a true data driven organization the team focuses on what the measure is they want to change. They come up with different actions that can be done to impact it, they then align resources around what can accomplish the most ways to accomplish that action. They then measure each way against each other and the best performer is picked. They then continue to align resources and re-focus after each action. There is no single person picking the action nor is there the same person measuring success. Everyone can have an idea and whatever performs best wins, no matter who backed it or what they are trying to do politically.”

1

First off, what do we mean when we say “company culture?”

Highest level definition from BuiltIn.com: “Company culture can be defined as a set of shared values, goals, attitudes and practices that characterize an organization.”

However, I don’t think this adequately describes it.

Culture is the implicit hierarchy of value in an organization. It’s the unwritten handbook of what behaviors are rewarded and admired within a company.

Some companies reward collaboration and treating coworkers like family. Some reward dry language and the absence of personality from conversation (no happy hours here). Some reward cajoling, persuasion, slide decks, and storytelling, and some reward data.

Most importantly, culture is restrictive and limiting. I love how Mihaly Csikszentmihalyi put it in Flow:

“Cultures are defensive constructions against chaos designed to reduce the impact of randomness on experience. They are adaptive responses, just as features are for birds and fur is for mammals. Cultures prescribe norms, evolve goals, build beliefs that help us tackle the challenges of existence. In so doing, they must rule out many alternative goals and beliefs, and thereby limit possibilities; but this channeling of attention to a limited set of goals and means is what allows effortless action within self-created boundaries.”

So just as much as what is rewarded, a company culture can be defined by what it explicitly outlaws as well as what it subtly frowns upon and discourages. Just to be incredibly clear, if your company frowns upon experimentation, you don’t have a data-driven company or culture.

2

“The Lady Doth Protest Too Much”

Most companies that are actually data-driven don’t incessantly and loudly talk about how data-driven they are. Just as a rich man doesn’t need to tell you he’s rich, be very wary of companies whose HR and advertising materials overly emphasize a certain cultural trait, whether that’s transparency, data-driven decision making, or creativity. Be particularly wary of anyone in a suit talking loudly about big data, data science, advanced analytics, artificial intelligence, or *shudder* digital transformation.

While there is some signal in this messaging (at the very least, it says something that they’re aspiring to these things), it’s often a bigger sign that the company is striving toward that trait but doesn’t yet possess it. This is especially rampant among bleeding-edge companies that spend a lot of time speaking at or attending conferences. That puts them in a position to say the right words and phrases to attract good talent without actually developing or investing in a culture that enables those behaviors.

Caveat emptor. Talk is cheap.

The post The 7 Pillars of Data-Driven Company Culture appeared first on Alex Birkett.

]]>
What is Conversion Rate Optimization? https://www.alexbirkett.com/conversion-rate-optimization/ Wed, 03 Jun 2020 03:07:56 +0000 https://www.alexbirkett.com/?p=1069 You’ve landed on this article for one of two reasons. The first (most probable) reason is that you heard the term “conversion rate optimization” somewhere and want to know what exactly it means and maybe even want to find out how you can get involved in conversion rate optimization (CRO). The second reason is that ... Read more

The post What is Conversion Rate Optimization? appeared first on Alex Birkett.

]]>
You’ve landed on this article for one of two reasons.

The first (most probable) reason is that you heard the term “conversion rate optimization” somewhere and want to know what exactly it means and maybe even want to find out how you can get involved in conversion rate optimization (CRO).

The second reason is that you’re a reader of my blog and want to know what *I* think about conversion optimization. More so, I’m guessing you want to know if my understanding differs from conventional wisdom, and if so, how. If you’re in this camp, you’re probably okay reading a couple thousand words, but the former camp probably wants 500, so I’ll tackle the simple definition first.

Then, for the latter group, let’s explore what I think conversion rate optimization is all about and why so many people seem to misunderstand it.

Primero! The definition:

What is Conversion Rate Optimization (CRO)?

Conversion Rate Optimization is the systematic process of increasing the rate at which a digital population will take a desired action.

If you Google “conversion rate optimization,” almost every blog post about CRO gives pretty much that same definition. What does it mean? Let’s dive into the individual components of that definition.

What’s a Conversion Rate?

A conversion rate is the number of unique desired actions divided by the overall number of visitors. In simple terms, let’s say you sell candles. 1,000 people come to your web page per day to look at candles, but only 50 of them buy from you. Your ecommerce conversion rate is 5%.

Conversion rate is the number of conversions divided by the number of visitors. It’s a proportion, usually represented by a percentage.


Things get a bit more complicated when you realize you can define your numerator (and denominator) differently. For instance, are you counting total sessions, total pageviews, or total users? Each produces a different conversion rate.

Similarly, in a lead generation system, you might be measuring the total number of unique visitors who hit a landing page for an offer, and you might consider it a conversion when they fill out a form.

However, you may want to track that conversion further down the funnel to when they buy your product. If you have 1000 unique website visitors, and 100 fill out the form and 10 buy the product, then your conversion rate could be 10% (visitor to lead form) or 1% (visitor to purchase) depending on how you define it.
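
To make that arithmetic concrete, here’s a minimal sketch in Python (the visitor, lead, and purchase counts are the hypothetical figures from above, not data pulled from any particular analytics tool):

```python
def conversion_rate(conversions, visitors):
    """Return conversions as a proportion of visitors."""
    return conversions / visitors if visitors else 0.0

# Hypothetical funnel from the example above
unique_visitors = 1000
form_fills = 100
purchases = 10

print(f"Visitor-to-lead rate: {conversion_rate(form_fills, unique_visitors):.1%}")     # 10.0%
print(f"Visitor-to-purchase rate: {conversion_rate(purchases, unique_visitors):.1%}")  # 1.0%
```

The same function works for any numerator/denominator pair you pick (sessions, pageviews, or users), which is exactly why the definition you choose matters.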

How Does Conversion Optimization Relate to Conversion Rate?

If you know your average conversion rate (however you define it), the next obvious question is, “how can I improve it?” A higher conversion rate means you make more money with the same amount of website traffic, so it’s a logical aspiration.

How you answer that question is, in essence, conversion rate optimization.

There are many ways to approach optimization, though it can typically be described or structured using a repeatable process or system instead of randomly trying different ideas. As Peep Laja put it, “Conversion optimization – when done right – is a systematic, repeatable, teachable process.”

What’s the CRO Process?

There are many different ways to visualize the conversion rate optimization process, but most processes boil down to these steps:

  • Identify your goals + implement data tracking
  • Conversion research and audit
  • Run experiments
  • Analyze the data and make a decision
  • Repeat

Again, there are many different ways to visualize this process, but most look kind of like this:

[Image: conversion rate optimization process diagram]

The conversion research process could include many components (session replays, digital analytics like Google Analytics, heatmaps, customer surveys, etc.), but in most cases you’re just trying to ‘diagnose’ the problem areas and opportunities on your website that you could improve to increase conversions.

What Actually *Is* a Conversion?

By the way, your “desired action” or “conversion” can be any discrete event that occurs on your website (actually, it could be a continuous variable like revenue per visitor, too, but we’ll leave that out of the discussion for now).

For instance, if you’re operating a content-based website that monetizes via affiliate links, your conversion event might be signing up for an affiliate offer. It could also be signing up for an email list.

For an ecommerce conversion, an obvious “conversion event” would be “transactions,” but you may also want to optimize for a composite metric like “average revenue per visitor,” or “average transaction value.”

Your conversion goals will vary based on the type of business you run.

Macro-Conversions vs. Micro-Conversions

There’s a debate around two types of conversions:

  • Macro-conversions
  • Micro-conversions

This debate is mostly a non-issue. Here’s what it boils down to:

Macro-conversions are basically conversion events that are actually important to the business, whereas micro-conversions signal engagement. These, of course, are up to you to specifically define in your own business. There’s no universal definition of a ‘macro-conversion,’ which in my opinion, makes the debate pretty trite.

A macro-conversion could be buying something from your ecommerce site, and in this context, you might define a micro-conversion as something contributing to that ‘final’ conversion – click-throughs on a promotional banner, viewing a product page, adding an item to a shopping cart (add to cart conversion rate), completing a checkout step, etc.

Anyway, here’s what you should know about macro and micro-conversions: optimize for what matters. No one gives a shit how many people added items to their cart if they don’t purchase, so just avoid time-wasting conversations and optimize for business metrics that matter to your bottom line.

Does the term “Conversion Rate Optimization” Accurately Describe What We Do?

The “conversion” part of “conversion rate optimization,” then, isn’t so straightforward.

That, in fact, is the gist of most social media debates on the terminology of conversion rate optimization.

Many argue that the acronym “CRO” isn’t actually ideal, as most optimizers aren’t only optimizing for conversion rates (and actually, it would be rather myopic to do so in many circumstances).


Despite the lossiness the term “CRO” entails, I find it’s a mostly appropriate denomination that at least provides an envelope for what we all do.

Sure, we optimize more than conversion rates. But SEOs work on more than just search engines (optimizing, in many cases, similar real estate that CROs work on). And God only knows what a growth marketer does…

Again, what we do is build systems to increase the rate at which a population completes a desired action. The fact that this definition is simultaneously vague and specific is its power, but is also why so many people are confused about the term.

Confusion about the term goes in two directions – either people have a far too specific idea as to what CRO is, or it is far too expansive and starts to creep into purposefully-delineated business units.

For the pedantic among us, here’s a list of things conversion rate optimization is *not*:

CRO Myths: What Conversion Rate Optimization Isn’t

Sometimes, the best way to define something is by what it’s not, via negativa style.

CRO is not a list of tactics or best practices

Most crafts have their informally ordained set of ‘best practices.’

In SEO, one might run through a checklist when auditing a website to look for proper internal linking structure, descriptive meta-descriptions, and title tags that are shorter than 60 characters. Copywriters can follow formulas like AIDA and audit copy for vapid gobbledygook and meaningless jargon.

Conversion rate optimizers, then, are expected to have a heuristic-based framework they can apply to any given landing page, a veritable toolkit of “things that work” they can use anywhere.

This myth is pernicious and damaging for a few reasons.

First, while there are certainly patterns that work more often than not, there’s clearly heterogeneity and variance in how they apply.

Even if the median uplift of, say, removing social media share icons from product pages is positive, that doesn’t mean it’s positive in all cases. In fact, that median uplift could be balanced out by some massive losses on some sites. You’ll never know unless you test it for yourself.

Second, one must question *where* these best practices come from. Many of them are backed up by substantial qualitative literature (usually originated via psychology research or from a usability company like Baymard). This is great and a lovely use of research to form the basis of hypotheses.

Sometimes, however, best practices originate from hyperbolic blog posts and BS case studies (“How we increased conversions by 923% with this one tactic”). These are often based on faulty statistics and are more similar to a PR campaign than peer-reviewed evidence.

Finally, if you’re always chasing the median, you’ll always be behind the leaders.

Looking at historical data only tells you what worked in the past (among the total data set of what has been tried). Net new ideas don’t often show up in historical data, so if you’re chasing a list of best practices, you’re never likely to lead or innovate.

CRO is not the application of persuasion tactics

Similarly, CRO isn’t a synonym for digital persuasion. Lots of SaaS companies have popped up that claim to be “CRO software,” because they allow you to easily sprinkle social proof on your landing pages.

This belief, I think, originated because of the thought leadership and prominence of Booking.com, one of the world’s greatest experimentation companies.

They test everything. They run thousands of tests per year, allowing everyone at the company to run experiments. They’re a model of what a good experimentation program should look like.

But on the surface, if you’re a consumer, all you see are their (sometimes frustrating) persuasion triggers that play on FOMO, urgency, social proof, and the fact that dumb-old-you is about to miss out on the deal of the century because you hesitated for too long:

[Image: Booking.com search results with urgency and scarcity prompts]

Tactics like this can be an emergent property of the conversion rate optimization process. But by themselves, isolated, they are not CRO.

And just because you work in CRO doesn’t mean you know, at a glance, how to apply persuasion methods like social proof in a way that will actually work better than the control. Expertise can help form better ideas, but we still need to test them. Better yet: open the floor to let *anyone* submit and test their ideas.

A CRO professional doesn’t have a monopoly on web persuasion insight, and your best ideas are probably going to come from unexpected sources.

CRO is not adding a CTA to a blog post

Content marketers have begun using the term “conversion rate optimization” to mean building out a conversion funnel: land on a blog post, click CTA, sign up for ebook → converted.

Adding a CTA button could be an action item that comes up during the CRO process, but adding a CTA by itself isn’t CRO. It’s just good online marketing practice. It’s just inbound marketing. If you don’t currently have a call to action, adding one will quite likely increase conversions.

Now, if you had a hundred blog posts of varying conversion rates, dug into the data to figure out which ones could be optimized, and tested out various conversion pathways….that would be CRO!

But just planning to put some CTAs or pop-ups on blog posts is just an extension of a content marketer’s job.

CRO is not SEO

For some reason, people confuse CRO and SEO, but they’re different. SEO seeks to drive traffic via search engines. CRO seeks to improve the rate at which traffic – from any source – converts.


There’s overlap, but they’re obviously different functions. Additionally, I look at SEO as a channel and CRO as a methodology. CRO can be applied to SEO, but SEO cannot be applied to CRO.

CRO is not A/B Testing

Experimentation is the gold standard of quantitative research, but it’s one of many tools in the CRO toolkit. Often a testing roadmap is the centerpiece of one’s efforts, but A/B testing (or multivariate testing) and conversion rate optimization aren’t synonymous (if only because you can do CRO without running A/B tests).

That said, conversion optimization strategy generally includes experimentation as a central research tool. Split testing helps take the guesswork out of which different versions of your site work better.

CRO is not an ever expansive set of optimization related prerogatives spanning an organization and beyond

Concept creep is a psychological phenomenon where slowly and unconsciously, the boundaries that define concepts and categories erode and expand. I’ve seen this happen in the CRO community.

‘Our role is valuable in pricing, go to market, value proposition definition, and positioning, so the name “CRO” should really be “business strategy optimization.”’

‘Your job as a CRO is to root out analytics implementation errors, fix technical bottlenecks across digital properties, write stunning copy on landing pages and Facebook ads, connect our offline and online data, and perhaps even optimize our sales process and scripts.’

Oof.

When I hear expansionist definitions like this, I get anxiety.

I get that a role like CRO is cross-functional and the insights gleaned through experiments and conversion research can help other teams, but a CRO is not some all-encompassing digital jedi that touches every part of one’s online business. That’s a recipe for burnout and a half-assed job in the million places you try to apply the process.

CRO isn’t the only sphere where this happens. I’ve heard thought leaders and gurus proclaim that growth marketers should be experts in everything from finance and accounting to HR and of course every marketing channel plus SQL.

Maybe you should just pony up to hire more people?


If you have the above skillset, you’re a founder, or at the very least a VP – not an individual contributor.

I’m all for learning and growing and being ambitious, but let’s rein ourselves in a bit, yeah?

CRO is not updating a web experience or copy

Redesigning your landing pages and changing your website copy to reflect a new product launch isn’t really conversion rate optimization. Business priorities change, so of course the content of your website and your website design will change.

However, you can indeed perform a website redesign using a CRO methodology. You can definitely update copy and positioning using a CRO process. And you definitely should!

How is CRO Different From UX (or Growth Marketing/Growth Hacking/Web Strategy/Experimentation/Product Management/etc)?

User experience design and conversion rate optimization are super similar: both use user research to inform digital experience decisions. Both disciplines seek to improve the shopping experience.

The main difference is tautological: a conversion rate optimization professional seeks to optimize conversion rates and the user experience professional seeks to optimize the user experience.

The rub? How you define “conversion rate” and “user experience” can and should converge in many contexts. Here enters the importance of defining our north star metric (or the Overall Evaluation Criterion). Where we set our aims defines the tactics we use to accomplish it.

If we’re seeking to maximize business value, there may be a trade-off in the utility of a great user experience. As my friend Chad Sanderson put it, you could increase the user experience of Bing by removing all ads, but that wouldn’t be a very good business decision.

In the end, UX and CRO are typically two intertwined systems that can act as a push and pull on each other, but they both use research-based systems to improve digital experience.

I won’t touch heavily on growth marketers, because I think that’s one of the most useless designations in marketing today (I’m saying that with “growth marketer” on my official job title).

“Growth” is a function that overlaps product and marketing, but “growth marketing” in practice is usually either your run of the mill performance marketer (running paid, direct response campaigns at scale) or is sort of a blend between a generalist marketer who uses experimentation and ideas from CRO to accomplish their goals.

I think the term is bloated and has mostly lost its initial value at this point.

Finally, I believe the closest role-based relative of a conversion rate optimization professional is a product manager, specifically a “website product manager” or a “web strategy manager.”

This role typically blends SEO, UX, and experimentation into a product management skillset to apply the CRO methodology to a website. Instead of building a product feature roadmap, you’re building a testing roadmap. It’s basically the same skillset applied to a different context.

In the end, these roles tend to goal themselves on similar things and use similar systems to get there: capturing and utilizing data in order to make better decisions that improve business metrics.

Now, for some more contrarian CRO opinions:

  • It’s not something a specific person *does* but rather something that an operations team enables
  • The best organizations build this into their culture instead of stapling it on top of things they already do.

CRO is an Operating System (Not a Tactic)

Alright, if you’ve made it this far, you’ll spare me a few paragraphs to dive into the minutia: I don’t believe a “CRO specialist” should be a singular role at an organization.

I believe CRO is a methodology or an operating system, something that a centralized team can and should enable, but not something that should be the exclusive domain of a centralized team or especially of an individual.

Typically, CRO in an organization is either structured as a centralized or decentralized function:

[Image: centralized vs. decentralized experimentation team structures]

In a centralized function, you have one team who owns the infrastructure, the program management, and the tactical execution. They run the research and run all the tests.

In a decentralized structure, there’s no center – everyone conducts their own analyses and runs their own experiments.

There are clear pros and cons to each, but the best model is a “Center of Excellence” model. This blends the best of both worlds, and instead of controlling all CRO and experimentation, the function acts as an enablement, education, and infrastructure system. Here’s how Ronny Kohavi describes it:

“A third option is to have some data scientists in a centralized function and others within the different business units. (Microsoft uses this approach.) A center of excellence focuses mostly on the design, execution, and analysis of controlled experiments. It significantly lowers the time and resources those tasks require by building a companywide experimentation platform and related tools. It can also spread best testing practices throughout the organization by hosting classes, labs, and conferences.”

In essence, I don’t think that a single person can or should “own” CRO at a company. Rather, CRO is a system designed to input data and feedback and output better decisions and more money.


I’ll admit this is sort of an idealistic vision. CRO, instead of being layered on top of your existing tactics, channels, and roles as an additional action, should simply be a part of the culture and company operating system. CRO isn’t a channel or a tactic, it’s a way of performing your job.

I wrote a full article on this topic on MarketingLand if you’d like to further explore this idea.

Conclusion

Conversion Rate Optimization is the systematic process of increasing the rate at which a digital population takes a desired action.

It’s not only fun, but it’s efficient. You take the same amount of traffic, users, or ad spend, and squeeze out more results using the scientific method.

If you want to get really good at CRO, the best place to go is CXL Institute. They’ve got a CRO mini-degree that will make you a master of the craft.

The post What is Conversion Rate Optimization? appeared first on Alex Birkett.

]]>
The 21 Best CRO Tools in 2025 https://www.alexbirkett.com/cro-tools-conversion-optimization/ Fri, 08 May 2020 20:31:51 +0000 https://www.alexbirkett.com/?p=1055 What we call conversion rate optimization is actually an expansive suite of distinct functions that blend together to form this art-and-science craft of CRO. CRO includes components of digital analytics, experimentation (A/B testing), user psychology, project management, copywriting, design, and UX research. Nowadays, I look at it as “website product management.” We’ve all got our ... Read more

The post The 21 Best CRO Tools in 2025 appeared first on Alex Birkett.

]]>
What we call conversion rate optimization is actually an expansive suite of distinct functions that blend together to form this art-and-science craft of CRO.

CRO includes components of digital analytics, experimentation (A/B testing), user psychology, project management, copywriting, design, and UX research.

Nowadays, I look at it as “website product management.”

We’ve all got our preferred products, and this list is no different: it’s largely based on my own extensive experience optimizing website experiences.

Some of these will include affiliate links, which if you click and sign up for the product, might result in me getting paid. This is a win-win, because you get a good new tool and I get paid. I promise I won’t change my list based on how well these affiliate programs pay.

The 21 Best CRO (Conversion Rate Optimization) Tools

  1. Google Analytics
  2. Google Tag Manager
  3. Amplitude
  4. R & SQL
  5. HotJar
  6. Qualaroo
  7. TypeForm
  8. Google Forms
  9. Balsamiq
  10. Convert
  11. Optimizely
  12. Conductrics
  13. Evolv
  14. Instapage
  15. Unbounce
  16. UserTesting
  17. Wynter
  18. 5 Second Test
  19. Pingdom
  20. CXL Institute
  21. CRO Books

1. Google Analytics (GA4)

At the core of optimization lies measurement. While you can get a good read on weight loss by looking in the mirror, website optimization benefits from a bit more precision, a proverbial scale.

Digital analytics is an old industry with a graveyard of historical solutions, most of which led to the nearly ubiquitous use of Google Analytics today. It’s used widely because, in its basic form, it’s free, and the free version offers an immense level of value.

Beyond that, it’s also somewhat easy to understand out of the box, and the advanced features can satisfy the esoteric end of analytics purists.

If you’re a conversion optimizer, it would be foolish not to learn and understand Google Analytics. Learn its data model. Learn the lexicon and how the data is being tracked. Learn the basic building blocks of a set up, like goal tracking, event tracking, and advanced segmentation.

Not only will Google Analytics give you a good quantitative basis to diagnose website problems and opportunity areas, it’ll likely be the solution where you eventually analyze your experiments and treatments.

You can’t go wrong taking a Google Analytics course or two and setting up the free version on your website.

And look, everyone seems to hate GA4, but it’s primarily because they haven’t learned how to use it. The interface is wonky, but if you learn how to port your data into BigQuery and Looker Studio, it’s still amazing.

2. Google Tag Manager


Google Tag Manager is Google’s tag management solution.

A tag manager is basically what it sounds like: it lets you manage and deploy various ‘tags’ or scripts you execute on your website. These could be simple third party tools that you deploy with a javascript snippet (for instance, HotJar). You can also set up advanced tracking solutions in Google Analytics using Tag Manager.

My former boss and mentor Peep Laja told me early on in my CXL days, “if you want to 10x your value as a growth marketer, learn Google Tag Manager.” He wasn’t wrong.

GTM, wielded by someone with the skill level of Simo Ahava, grants a level of near digital omniscience. The fringe cases are unlimited and expanding continuously. But even if you just deploy your 3rd party tooling and manage it via Tag Manager, you’ll get more than your requisite value from the tool.

If you’re new to tag managers, it can take some learning to ramp up on terminology and how things are set up, but there are a variety of great courses, including some basic materials from Google themselves.

3. Amplitude

Amplitude specializes in product analytics, everything that happens post sign up. This is one of the more popular tools in SaaS, and I’ve spoken to some consumer marketing leaders who use the tool for their product analytics as well.

Amplitude is wonderful because it “productizes analysis,” or in other words, it builds common analyst techniques into the platform itself so you don’t have to go through leaps and bounds to export, transform, load, clean, and analyze your data using other tools. You can view cohorts and run regressions right in the tool.

Product analysts can easily find correlative events that predict desired goals, view the success rates of various cohorts, and run what basically amounts to SQL queries within the tool itself. Good “level up” on granularity past the typical Google Analytics setup.

Similar solutions to Amplitude include KissMetrics, MixPanel, and Woopra.

4. R & SQL

Despite the abundance of analytics tooling, I’ve found more value from learning R and SQL than anything else on this list.

[Image: Maslow’s hammer quote about every problem looking like a nail when your only tool is a hammer]

That Maslow quote above about every problem looking the same if you only have one tool? That’s wildly common in CRO and analytics. If all you have is Google Analytics, well, you’re going to have a lot of session-based waterfall charts and channel grouping pie charts.

R lets you break free from the paradigms of a tool’s data model and clean, organize, and analyze data your own way. It’s got great built in statistics libraries, which are particularly appropriate for A/B testing analysis. It’s also a fully fledged programming language, so you can use it to scrape web data, automate boring tasks, build data visualizations, and even host interactive applications using Shiny.

SQL is the lingua franca of data. One of the smartest data scientists I know, Begli Nursahedov, told me learning SQL is the highest leverage skill you can learn. It’s useful at any organization, and at its core, it will help you better understand the data you’re collecting at a foundational level.

Clearly, these aren’t “CRO tools” in the same sense as the other SaaS solutions on this list, but I can’t pass them up in importance.

5. HotJar

HotJar is my go-to qualitative data analysis tool. Where Google Analytics and the above tools help you diagnose the “what” on your website, HotJar’s suite of qualitative tools can help you add some color to the quantitative trends. Typically, people refer to this as helping to answer the “why.”

  • Why are website visitors struggling to finish the checkout experience?
  • Why are mobile users and first-time visitors underperforming?
  • Why aren’t our CTAs being clicked?
  • What does the customer journey and user behavior look like from the first pageviews through to the end purchase?

While no tool can fully answer these questions, HotJar has several features – heat maps, session replays, form analytics, surveys and polls – that help you look in the right direction for experiment ideas and solutions.

I love that it’s a full solution, a sort of all-in-one qualitative analytics platform. Before HotJar you had to buy Crazy Egg, SurveyMonkey, KissMetrics, and ClickTale for session recording data just to get the basics. It’s just fun to use as well; great user experience.

6. Qualaroo

Quick confession: I’m not a huge fan of heat maps. In my mind, they’re mostly noise and colorful illustrations used to tell stories. My favorite qualitative tools are the ones that allow you to better probe on business questions that matter.

Sure, surveys and polls can be misused as well, but a well-worded survey can crack into that Rumsfeldian sphere of “you don’t know what you don’t know.”

To that end, Qualaroo is the best in breed software to design and deploy on-site polls and surveys. They’ve got the highest end targeting, integrations, segmentation capabilities, and logic jumps. Many alternative tools exist, but Qualaroo is my favorite.

7. TypeForm


I’m a TypeForm fanboy and power user. Their product itself is a delight, perhaps one of the only true “product led growth” companies despite everyone now claiming to operate as such.

Clearly they care deeply about customer experience and the way you feel when you use the product. That’s true of both the survey designer and the survey taker.

For some reason, taking a TypeForm survey is an order of magnitude easier than any other tool.

Anyway, not to gush too much more: TypeForm is the best survey design product I know of. It’s so flexible, and I run almost all of my user research through a TypeForm when doing CRO work.

8. Google Forms

Google Forms is free, so I use it sometimes. It’s great when you don’t need to layer on elaborate targeting and logic parameters or for internal form submission needs.

For me, Google Forms is quick and dirty; TypeForm is for when you want to do it right.

However, Google Forms does benefit from native integrations with other Google Drive products. So you can easily set up Google Sheets to receive submissions (you can do this in other tools, but typically it requires some set up).

9. Balsamiq


I’m not a visual designer, but Balsamiq gives me an easy and effective platform to design wireframes that communicate my vision for landing pages, home pages, checkout flows, or other digital experiences.

Simply put, it’s the easiest wireframing tool I’ve found for a non-designer to use.

Wireframes are all about communication, not pristine detail. I usually sketch something out on a whiteboard or pen and paper, then draw it up in Balsamiq, and then send it to designers who bring it to life (and then send it to A/B test developers to get that up and running).

If you’re more on the design side of CRO, you’ll likely explore more robust prototyping solutions like Invision or Figma or design tools like Adobe Creative Suite or Sketch.

For me, I get tons of ROI from Balsamiq.

10. Convert

A/B testing tools!

This is the meat and potatoes of conversion optimization, isn’t it?

Yes, and no. It’s clearly a myth that CRO = A/B testing, but for most programs with sufficient traffic volume, A/B testing is the gold standard for determining the effectiveness of a given user experience.

Convert is my favorite “all purpose” testing platform, for a few reasons:

  • It’s feature rich and goes toe to toe with Optimizely and other higher priced solutions in most features that companies typically use
  • It’s much more affordable
  • The team and customer service are leagues above other products
  • Great documentation and education materials
  • Transparent in their tracking and how they operate
  • Privacy focused and forward thinking.

Convert may not have some of the more advanced features of Optimizely or other personalization tools, but for the vast majority of companies, it will satisfy your experimentation needs.

In addition to Convert, I really love VWO (also known as Visual Website Optimizer). VWO has the bonus of including other CRO and analytics tools like session replays, heat and scroll maps, polls etc.

11. Optimizely

Optimizely is the biggest name in A/B testing nowadays, and for good reason: they’ve trained a generation of people on how to run A/B tests (for better or for worse – many have rightly argued that they’ve botched their statistics education and made it seem far too easy to simply set up and analyze an experiment with no statistics knowledge).

They used to have a free tier and several more affordable options, although they’ve since drastically moved up-market. This move is prohibitive for many companies in terms of pricing, but it has also brought more advanced features like server side experimentation, predictive targeting and personalization, and feature flags for product teams.

12. Conductrics

Conductrics is actually my favorite experimentation platform, though it’s probably best reserved for more advanced practitioners.

To start, Conductrics gives you options to design, deploy, and analyze experiments exactly how you’d like to, whether that’s client or server side, using a WYSIWYG editor or not, or analyzing the experiment using a one- or two-tailed t-test or Bayesian statistics. You can also run multi-armed bandit experiments, an interesting option with different use cases than your typical fixed-time-horizon A/B test.

It’s also got powerful predictive pooling and targeting. In other words, when you’re running new variants, it will detect segments of your user base that respond particularly favorably and you can run arms of that experience to target that population.

It’s one of the more powerful experimentation platforms, my go-to choice all things considered.

Bonus: Google Optimize

Google Optimize is one of my least favorite A/B testing platforms in direct comparison with all the others, but it’s free, so it’s a great learning tool or way to get tests live if you don’t have the budget to spend.

Despite my smack talk, it does the basics. You can safely randomize users, stamp them with custom dimensions to analyze the data in Google Analytics, and even use these dimensions to do interesting integrative campaigns with Adwords or display ads. The native integrations with other Google tools are the real treat.

One note is that you should absolutely pull the data into a separate platform to analyze; the statistics in play are quite black box/opaque.
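
If you do export the raw results to analyze them yourself, the core check is a two-proportion comparison. Here’s a minimal sketch in Python using statsmodels (the conversion and visitor counts are made up for illustration, and this is a generic z-test, not a reconstruction of Google Optimize’s internal math):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical exported results: conversions and visitors for control vs. variant
conversions = np.array([310, 365])
visitors = np.array([10_000, 10_050])

z_stat, p_value = proportions_ztest(conversions, visitors)

rates = conversions / visitors
print(f"Control: {rates[0]:.2%}, Variant: {rates[1]:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```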

Note: Google Optimize has been unavailable since September 2023. I compiled GO alternatives here.

13. Evolv

Evolv is another experimentation tool but of a different flavor. They deploy ‘evolutionary algorithms’ in order to splice together the ‘genes’ from your different creative and variants. These undergo transformations over ‘generations’ and evolve to produce the highest performing combination of creative.

That’s a really simplistic explanation, but for the most part pretty accurate. It’s a machine learning based optimization tool that is designed to rapidly explore different patterns and ideas.

I love it. Especially when you’re in the “discovery” or “exploration” stage of optimization, this tool can let you throw ideas together much faster and more efficiently than sequential A/B tests or even more advanced designs of experiments like factorial design.


14. Instapage

Not necessarily a conversion rate optimization tool, but landing page builders are clearly part of the arsenal of web strategists. Few CRO conversations occur without the words “landing page” thrown in.

Landing pages are just dedicated website pages. They exist to serve a conversion-oriented purpose, be it lead generation or simply a product purchase.

At scale, landing pages allow you to test your messaging and creative beyond the website, since you can tie-in ad targeting and testing. In fact, that’s why I love Instapage – it was built for high output advertisers and optimizers.

With sophisticated personalization features, easy templatization and creative management, and a fairly easy to use editor, this thing can get marketers really cranking on campaigns without the bottlenecks apparent in most developer-heavy environments.

15. Unbounce

Unbounce is my other favorite landing page builder, and in fact, I use it much more frequently. It’s got integrations with most popular marketing technology solutions, so you can pipe your leads directly into Mailchimp or whatever email tool you use.

It’s also got a lot of the same templatization features that allows for high scale creative testing.

I find Unbounce pretty easy to use, though the WYSIWYG editor does get buggy. I’d almost prefer it to be a little bit *more* developer friendly, as the marketer-friendliness seems to bring a lack of precision.

16. UserTesting

UserTesting is the best, you guessed it, user testing platform in the world! You could conduct a poor man’s user test and go to a coffee shop and have people try your website. Or, you could just do the easy thing and pay UserTesting to find you a qualified panel of users to run through your digital experience.

Of course, you can run moderated or unmoderated user tests.

User testing is an absolutely critical component of website strategy and conversion rate optimization, and I wouldn’t start a job without this tool in my arsenal.

17. Wynter

Wynter is a new player, a user testing software specifically designed to help you optimize website copywriting.

Copywriting is the last bastion of “I feel like ___,” mainly because it’s hard to get quantified data at a granular level (caveat: of course you can run a controlled experiment, but you’re still choosing *which components* to test by gut). This tool lets you know which phrases and words to look into and gives you insight on how to improve the copy on a page.

18. 5 Second Test

Another qualitative research staple, Five Second Test is an old tried and true piece of software that helps you test the clarity of your messaging. It is what it sounds like: you flash your page in front of a panel for 5 seconds and they explain what it is trying to say. You’d be surprised at how unclear your copy is (or at least, I’m surprised at how unclear my copy is much of the time).

Simple tool, profound impact. Try it on your homepage and on your value proposition.

19. Pingdom

Page speed is clearly important. Some studies at Microsoft and other juggernauts have pinpointed the value of mere millisecond improvements in page load time (however, other studies have not shown such sensitivity, though perhaps due to less statistical power).

Anyway, it’s simple logic: the faster a page loads, the better the user experience. The better the user experience, the more money you make. Heuristics, but mostly true.

Pingdom is a site speed tester, among other performance monitoring products. It also gives you suggestions on how to fix your page speed.

20. CXL Institute

CXL Institute is an education platform that trains digital marketers, product managers, analytics, and UX professionals.

I’m biased – I was in the room before, during, and after the CXL Institute launch and helped coordinate a lot of the early education programs. However, I believe it’s without peers and the absolute best place you can learn about CRO. Nothing compares. There are other programs that may dive deep on specialties (e.g. statistics), but taken as a whole, nothing will set you up for a CRO career better than CXL.

I still keep up to date on their courses because those that stagnate fall behind, and I don’t want to fall behind. If you don’t want to fall behind, check out the Institute.

21. CRO Books

Cheesy chorus at this point, but CRO isn’t about the tools, it’s about the people and their know-how. Give an amateur Conductrics and an Adobe Analytics setup, and it won’t amount to much. But a master optimizer could make do with freemium tools and still kick back an ROI.

I like courses, but I really like books. To get you started, I wrote two entire blog posts outlining my favorite CRO books and my favorite A/B testing books.

Conclusion

Conversion rate optimization is an art, a science, an operating system, and a good reason to go down the never ending rabbit hole of marketing technology.

I’ve got my preferred solution, but – and I genuinely mean this – send me your new and underrated tools that I missed. Throw a comment below. Email me. Doesn’t matter. I wanna know what’s going on in this space that I might be missing.

Otherwise, hope you enjoyed this list! Now go read my article on A/B testing guidelines.

The post The 21 Best CRO Tools in 2025 appeared first on Alex Birkett.

]]>
The 10 Best A/B Testing Books for Practitioners of All Levels https://www.alexbirkett.com/ab-testing-books/ https://www.alexbirkett.com/ab-testing-books/#comments Tue, 31 Dec 2019 15:37:17 +0000 https://www.alexbirkett.com/?p=950 A/B testing is an important skill for anyone to learn, whether you’re a marketer, product person, designer, or an analyst. It’s also a great framework for managers, as Elliot Shmukler has noted. Once you learn about A/B testing, you look at decision making, probability, innovation, and risk differently. It’s an attitude shift as much as ... Read more

The post The 10 Best A/B Testing Books for Practitioners of All Levels appeared first on Alex Birkett.

]]>
A/B testing is an important skill for anyone to learn, whether you’re a marketer, product person, designer, or an analyst.

It’s also a great framework for managers, as Elliot Shmukler has noted. Once you learn about A/B testing, you look at decision making, probability, innovation, and risk differently. It’s an attitude shift as much as a tactical application in marketing or UX.

But as simple as the concept is (pit one online experience – the control – against a new one – the variant), the execution is filled with frustration and nuance.

I’ve written extensively about A/B testing on this blog, so you can definitely check that out. Additionally, if you prefer courses, CXL Institute has the best ones.

However, as a heavy reader, I’ve gone through dozens and dozens of books directly and indirectly related to A/B testing. Some are really good; most aren’t. This list contains the really good ones.

The Best 10 A/B Testing Books for Practitioners of All Levels

  1. The Innovator’s Hypothesis
  2. Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing
  3. Statistical Methods in Online A/B Testing
  4. Statistics for Experimenters
  5. Bandit Algorithms
  6. The Drunkard’s Walk
  7. The Black Swan
  8. Antifragile
  9. Don’t Make Me Think
  10. Lean Analytics

Note: I’m using affiliate links here, so I’ll make money if you buy these books. A win/win really as you’ll get some great knowledge and I’ll get some tiny percentage from Amazon!

1. The Innovator’s Hypothesis

By Michael Schrage

The Innovator’s Hypothesis is one of the better A/B testing books that focuses on the actual strategy behind experimentation.

First off, it puts forth an excellent business case for experimentation. If you need executive buy-in, this is the book to read. It’s short, and it gets into the meat of the matter pretty quickly.

Ideas are overrated, and good ideas are unpredictable. You’re best off making it cheap and easy for employees to run and analyze experiments. It mitigates the risk of bad ideas and decisions, and perhaps more importantly, allows everyone to capture the upside of great ideas that otherwise wouldn’t have been tried.

This is one of my overall favorite books on experimentation, as it doesn’t rely on overly industry-specific terminology like the CRO specific books. It’s also not incredibly technical, so everyone should be able to grok this one.

2. Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing

By Ron Kohavi, Diane Tang, Ya Xu

This is my favorite A/B testing book that specifically covers digital experimentation – controlled experiments on the web or on your product/application. It’s not actually released yet at the time of writing, but should be soon in 2020.

It’s excellent for many reasons:

  • Ronny Kohavi is a trusted authority, having built up Microsoft’s experimentation platform and program to a few hundred team members.
  • The book reaches technical depths when it comes to statistics and also on the sections describing proper testing platforms.
  • However, it also covers high level case studies on why to invest in proper experimentation in the first place.

The scale at which companies like Microsoft, booking.com, and Google are running experiments is incredibly impressive, and the cultures they have built around testing are equally inspiring. This book cracks open the secrets of these big companies, and it doesn’t shy away from advanced and technical topics. True experts and practitioners will love this one.

3. Statistical Methods in Online A/B Testing

by Georgi Georgiev

Statistical Methods in Online A/B Testing is my favorite book specifically centered on the statistical aspects of experimentation. It’s super comprehensive, though it’s not impossible to read as a layperson (you will, however, have to read some sections quite slowly or re-read them to truly grok).

This book is hyper-focused on digital experiments, particularly so in the field of ecommerce conversion rate optimization. As such, it won’t be as broadly applicable as others on this list, but if you’re reading this article, it’s very likely you’re interested in the specific applications for conversion rate optimization.

If you want to dive deeply into the statistics of A/B testing, you really just need this book. It’s got the depth of a textbook, but it’s actually readable. And Georgi Georgiev really knows his shit.
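
As a small taste of the kind of calculation the book covers in far more depth, here’s a rough pre-test sample size estimate using the standard normal-approximation formula for two proportions (a sketch only; the 3% baseline and the target lift are hypothetical, and Georgiev’s treatment is much more rigorous):

```python
from scipy.stats import norm

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors needed per arm to detect a change from p1 to p2 (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical: 3% baseline conversion rate, hoping to detect a relative 20% lift (to 3.6%)
print(round(sample_size_per_arm(0.03, 0.036)))  # roughly 14,000 visitors per arm
```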

4. Statistics for Experimenters

By George E.P Box

Now, if Georgi’s book were like the accessible version of a statistics textbook, this is the actual textbook – it’s super dense but filled with technical information useful to serious analysts. I’m going to wager that few people will want to read and finish this one, but the most serious experimenters, analysts, and data scientists really should pick up a copy of this one.

You don’t have to read it through like you would a novel, but you should have a copy for reference. It’s wildly comprehensive and practical, albeit very dense for the layperson.

5. Bandit Algorithms for Website Optimization

By John Myles White.

I like to think of bandit algorithms both as a good framework for decision making and optimization and as a literal technical solution for certain optimization problems.

The multi-armed bandit problem is generally described like this: you have a set of slot machines, and they have varying reward systems. Some give out more money over time than the others, but you don’t know which ones they are. What’s your ideal strategy for figuring out which machine gives the highest rewards, preferably while limiting the amount of time spent pulling suboptimal machines?

Bandit algorithms (of which there are many flavors/types) are used to balance exploration (pulling many different levers) with exploitation (pulling the optimal arm more frequently). In online optimization, this means adapting the traffic allocation between the control and the variant(s) in real time as the algorithm learns more about which variant is optimal.

This book is the best explanation as well as manual on technical implementation of bandits.
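
For a rough feel of the simplest strategy in this family, here’s a minimal epsilon-greedy sketch in Python (the two “slot machine” reward rates are invented for illustration; this is not code from the book):

```python
import random

def epsilon_greedy(true_rates, pulls=10_000, epsilon=0.1, seed=42):
    """Allocate pulls across arms, exploring at random with probability epsilon."""
    random.seed(seed)
    n_arms = len(true_rates)
    counts = [0] * n_arms      # times each arm was pulled
    successes = [0] * n_arms   # rewards observed per arm

    for _ in range(pulls):
        if random.random() < epsilon or sum(counts) == 0:
            arm = random.randrange(n_arms)  # explore: pick a random arm
        else:
            estimates = [successes[i] / counts[i] if counts[i] else 0.0 for i in range(n_arms)]
            arm = estimates.index(max(estimates))  # exploit: pick the current best estimate
        counts[arm] += 1
        if random.random() < true_rates[arm]:
            successes[arm] += 1

    return counts, successes

# Two hidden "slot machines" converting at 3% and 4%
counts, _ = epsilon_greedy([0.03, 0.04])
print(counts)  # most pulls should end up on the better-performing second arm
```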

6. The Drunkard’s Walk

By Leonard Mlodinow.

The Drunkard’s Walk is an entertaining book that covers a ton of different topics on probability and randomness.

If you don’t have the patience (or desire) to sit through a textbook, but you still want to intuitively grasp difficult topic matter in statistics and probability, this is an awesome book. The writing and storytelling is phenomenal.

I read this over the course of a few flights and have recommended it to several people in conversion optimization. I haven’t heard any negative reviews back yet.

7. The Black Swan

By Nassim Nicholas Taleb.

I’m currently re-reading all of Nassim Taleb’s books, and I can say with full confidence that this one is one of the most influential books I’ve ever read. It has had a deep and lasting impact on how I look at experimentation and conversion rate optimization, but it has also impacted how I look at life and decision making more generally.

You won’t regret picking this one up. It will introduce you to all sorts of useful ideas and heuristics (one of my favorites being the narrative fallacy).

8. Antifragile

By Nassim Nicholas Taleb.

I could have put all of Nassim Taleb’s books on this list, but I’m just listing The Black Swan and Antifragile because they have direct applications to A/B testing.

Antifragile in particular has had an influence on how I build CRO systems and how I build strategy around optimization and decision making. We operate in informationally-opaque arenas for the most part, and experimentation can help us capture the upside to randomness (or the optionality involved in experimentation). With this in mind, it becomes apparent that you’re often best off a) lowering the cost of running experiments (which by nature increases the ROI), b) staying open to (and welcoming of) surprising results, even if (and especially if) they don’t conform to your previous world-views, and c) testing unintuitive and wide-ranging ideas, without cornering yourself into predisposed patterns.

Doing this can have big wins over the long course of an experimentation program.

9. Don’t Make Me Think

By Steve Krug

This is the best book I’ve read on usability and user experience. UX and experimentation go hand-in-hand, of course. Some of your best insights for conversion rate optimization opportunities will come from running user tests. This book is an awesome introduction and manual for how to do it right.

10. Lean Analytics

By Alistair Croll and Benjamin Yoskovitz

Lean Analytics covers tons of analytics and data topics, mostly aimed at startups. While analytics (like user testing) isn’t necessarily the same thing as A/B testing, it’s something absolutely necessary to know about. How are events logged? How can you discover optimization opportunities? What does cohort analysis have to do with it? This book helps you answer some of these questions and many more.

Analytics is a big subject, but this book will give you a good overview and practical guide to getting started.

Conclusion

Books are great, and you should buy everything on this list and read each book twice. But at a certain point, you just have to start diving in and learning by doing.

I’d particularly recommend reading the statistics-heavy books here; that’s the type of knowledge where you really do benefit from an academic and theoretical underpinning. Other than that, N.N. Taleb’s books will help you in areas beyond A/B testing, but they’ll definitely help you with testing and strategy as well.

This is just a short sampling of all the books that have helped me with A/B testing, but I feel if you read all of the above, you’ll be far beyond most people who are running A/B tests today.

The post The 10 Best A/B Testing Books for Practitioners of All Levels appeared first on Alex Birkett.

]]>