The 11 Best Personalization Platforms in 2025 (https://www.alexbirkett.com/personalization-software/)

Statistics suggest that 72% of customers are likely to engage with brands and messages customized to their specific concerns.

Automatically adapting your customer experience based on past behavior is the way to get new customers and get them to come back time and time again.

It’s the golden rule of marketing.

Personalized emails, personalized advertisements, landing pages, email sequences…the list goes on. It’s all about targeting your prospects/customers with the right message via the medium they prefer at that time.

And personalization apps let you do just that.

The 11 Best Personalization Software Tools

Here are my top picks for the best personalization software:

1. VWO

Best For: Identifying user behavior using A/B testing, heat maps, on-page surveys, and session recordings.

G2 Score: 4.2

A/B testing and then personalizing your landing page to improve its conversion rate and generate leads can be highly time-consuming and expensive.

And that’s where VWO, one of the most popular conversion rate optimization (CRO) and A/B testing tools, comes into the picture.

Marketers use the tool to carry out A/B split tests on landing pages, blogs, email campaigns, or even complete websites.

VWO helps you conduct all the following tests and experiments:

  • A/B testing
  • Multivariate testing
  • Split URL testing
  • Server-side testing
  • Mobile app testing

Aside from the ability to run different types of tests, VWO helps you gauge specific user behavior using heat maps, scroll maps, click maps, and even session recordings.

While click maps and scroll maps help you understand visitors’ clicking and scrolling patterns, session recordings allow you to track their precise movements on your website. You’ll also be able to identify friction points, mouse trails, and the entire buyer journey.

It’s almost like you’re sitting right beside your audience while they’re browsing your website.

Nosey Parker, eh?

Well, all’s fair in love and marketing!

And it’s not like you’re privy to your customers’ most private thoughts. You just want to determine their areas of interest so you can provide them with the most personalized (and therefore optimal) customer experience.

So, it’s all in good faith and legal!

Then you have on-page surveys and NPS scores that will help you ask direct questions and see what needs to be edited on your site.

VWO also provides detailed analytics and reporting of all the tests conducted. You can even filter results based on different segments and channels.

Cons:

  • It’s essentially an A/B testing and heat map platform, not solely a personalization app. That said, you can use it to identify user behavior and make changes based on visitors’ interactions.

Pricing:

Quote-based. They also offer a free plan.

2. RightMessage

Best For: All types of businesses and marketers.

G2 Score: N/A

RightMessage is a website personalization platform specializing in website design, digital marketing, and social media. It also improves email conversions by automatically creating the right email at the right time based on recipient behavior.

They help you monitor your audience by giving insights about your website visitors, what they are looking for, where they come from, and what they are doing on your website.

You’ll also be able to uncover the conversion rates (based on different segments), what type of audience has the lowest conversion rates, and more.

I love how visual and easy to comprehend their statistics are.


Using the information unearthed, RightMessage helps you create personalized website elements like surveys, opt-in forms, quizzes, and even non-invasive CTAs to generate more leads.

RightMessage: Opt-in form

They use their behavioral segmentation engine that tracks your visitors’ activities and creates a unique visitor profile to create these website elements.

And if you’re big on case studies, this tool will prove to be especially convenient.

They have a “Dynamic Case Studies” feature that personalizes the case studies based on your site’s audience. For instance, it will show testimonials, case studies, etc., aligning with the audience on your website.

They also enable Account-Based Marketing (ABM), which includes the ability to address a returning lead by their name. Or you can even swap the generic “Buy Now” CTA buttons with “Upgrade” offers for specific visitors.

Not just your website and landing pages, RightMessage helps you personalize your email messaging as well.

Emailing your website visitors is probably the next step in your sales funnel, after all. And RightMessage can do wonders for your email marketing strategy if you use it in tandem with a sales funnel platform.

After collecting behavioral and survey segmentation data on your users, RightMessage saves it all to your email marketing database. You can then use this data to craft relevant onboarding and welcome emails for those visitors.

And that’s not it.

RightMessage enables 2-way synchronizations with your email marketing software to gather information on visitors’ past purchases. They again use this data to provide a hyper-personalized experience.

Other features include:

  • Creates dynamic sales pages for each visitor.
  • Personalized testimonials and case studies.
  • Detailed statistics.
  • Unlimited sub-accounts and websites.
  • Auto-segment affiliates by behavior.
  • Craft product descriptions based on user behavior.
  • Creates landing page variations based on targeted data and ads.

Cons:

  • Limited integrations. It doesn’t work with Zapier either.
  • Personalization is only available with the most expensive plan.

Pricing:

Pricing starts at $79/month for up to 10,000 visitors per month on the CTA plan and goes up to $179/month for the Personalized plan. There’s also a 14-day free trial.

3. Mutiny

Best For: Mid-sized B2B companies.

G2 Score: N/A

Have you ever wanted to figure out what’s going to get people engaged with your content? Mutiny is personalization software designed to identify your visitors, then use that data to create real-time personalized experiences based on their interests.

It has a streamlined, step-by-step process.

To start with, Mutiny integrates with multiple marketing and data analytics tools (including Salesforce, Marketo, Google Analytics, Clearbit, and more) to identify your website visitors.

They use natural language processing to identify and tag your audience based on their website activity, industry, size, ad campaign, and more.

And that’s just the first step.

Next, Mutiny leverages AI to recommend the best audience segments for personalization. The recommendations depend on on-site behavior and potential conversion rates.

Next, they’ll suggest proven strategies that have worked for other B2B companies and will even write personalized headlines for you.

The fourth step involves editing, adding, or deleting website elements, including CTAs, modals, surveys, and more.

And it doesn’t require any rigorous work or coding know-how. Mutiny offers a visual editor and claims to support every CMS (Content Management System) and frameworks like React, Angular, and Vue.js.

Finally, you can analyze how your changes are performing using automatic hold-out testing. You can either let them optimize everything for you or test multiple variations manually.

You can also use Mutiny to create and customize personalized pages and ad campaigns for outbound campaigns.

Their integration with Slack is another bonus. For example, you and your team will directly get notified in Slack every time a target contact views your ad campaign or landing page.

Cons:

  • Limited customization capabilities.

Pricing:

Not available on the official site.

4. Intellimize

Best For: Mid-sized and large enterprises.

G2 Score: 4.9

Intellimize helps you create a dynamic, personalized website using machine learning. It simultaneously tests various marketing ideas on your website to see what content and messages work best.

What I liked best about Intellimize is its use of artificial intelligence and machine learning. It runs all combinations of experiences and data to determine what converts the most leads, without any human intervention.

It eliminates the need for A/B testing and rule-based personalization. Both are great ways to identify your audience and provide them with a personalized experience.

That said, marketers tend to juggle dozens of rules in the process, and it can all become a mess.

And apparently, that’s what made Intellimize look towards machine learning.

Intellimize doesn’t need any preliminary data – their machine learning automatically finds the best marketing strategy and then adapts to each visitor’s experience. They use different data points, such as location, device type, day, time, the previous behavior of the visitor on the website, etc.

To make their job easier, you can even share first or third-party data with Intellimize to personalize your site even better for unique visitors.

All of this results in personalized headlines, messages, images, pages, layouts, and forms relevant to each visitor.

However, note that Intellimize focuses solely on website optimization. You won’t find any options to supercharge your email marketing content.

Finally, they don’t cut any corners when it comes to reporting. You’ll get access to campaign reports to identify the performance of your website before and after the optimization. You’ll also be able to monitor parameters like traffic source, date and time, device, URL parameter, location, and more – from one dashboard.

Other key features include:

  • Features case studies relevant to the customer.
  • Shows relevant customer quotes, case studies, and reviews.
  • The ability to set optimization goals for your objectives.
  • Segments and filters your website visitors.
  • You can preview or pause your website optimization campaigns whenever you want.

Cons:

  • Steep learning curve.
  • Integration with third-party sites can be tricky.

Pricing:

Pricing is not available on the website. You can request a quote and a free demo.

5. Optimizely

Best For: A/B testing and multivariate testing.

G2 Score: 4.3

Optimizely is an all-in-one marketing platform for experimentation, recommendation, digital experience, digital marketing, and more.

It can be both a good thing and a bad thing.

Good, because you get so many functions under the ambit of a single platform.

Bad, because personalization is not their sole focus. However, they do offer everything you need to personalize your audience’s experience.

For starters, Optimizely takes not only your customer’s referral source into consideration but also what they are likely to do next.

How do they do it?

They set goals and use machine learning to predict customer behavior.

What’s more, they provide one-click integration, allowing you to connect your Optimizely dashboard with your data channels. The Optimizely platform will extract data from your current platforms and test multiple ideas and combinations.

Finally, they will turn these data models into comprehensible customer profiles. You can then engage with your customers on a one-to-one basis and personalize their experience.

It primarily uses A/B testing, multivariate testing, and AI-based technology to help you personalize the customer experience.

The entire process doesn’t seem as automated as Mutiny’s and requires a fair share of human involvement. However, Optimizely is a good option if you want to take advantage of its extensive suite of solutions.

All in all, you can use Optimizely to define your goals and set up awesome experiments that get more engagement, leads, or revenues.

Cons:

  • Various G2 reviews hinted at intermittent outages.
  • The UX could be more intuitive.

Pricing:

Quote-based.

6. OmniConvert

Best For: Large enterprises looking to enhance their conversion rate optimization.

G2 Score: 4.5

OmniConvert is a suite of tools for exploring, improving, and analyzing your marketing campaigns.

It performs A/B tests with multiple variations, segments audiences, and optimizes customer journeys to help you improve your website and increase conversion rates.

It also helps you unearth real-time data of your customers, including weather, geolocation, OS type, browser type, language, and more.

Another great part about OmniConvert is that it offers a built-in JS and CSS editor. The editor lets you create and modify website elements and even reuse previous codes between variations.

Other key features include:

  • CDN cache bypass.
  • Experiment debugger.
  • Advanced segmentation based on 40+ parameters.
  • Personalization of cart total value, product name, among other on-page variables.
  • 100 overlay and pop-up templates ready to use and customize.

You can even opt for their hands-on help, where they’ll assign a data analyst to your analytics account. The analyst will study how your visitors interact with your email, search, and social channels, then perform an audit based on the extracted data and results.

Additionally, they also have a suite of tools that makes complex ecommerce data easy to comprehend and visualize. You can also use it to generate insights and subsequently use the data to treat consumers differently on every channel.

Cons:

  • Some may find the tool a bit complex without inside help.
  • It requires extensive CSS knowledge at times.

Pricing:

Plans start from $167 per month, paid annually (or $320/month if you choose to pay monthly). The plan allows 50k views, A/B testing, web personalization, advanced segmentation, on-page surveys, and triggered overlays.

7. Proof

Best For: Adding social proof to your landing pages.

G2 Score: 4.4

Social proof is the best way to convince people to buy.

If someone told me that 5,000 industry experts had downloaded the eBook I was about to download, it would strengthen my resolve to download and read it myself.

However, you need REAL social proof. Not the kind where you pay some stranger on Fiverr to place some Tweets and Facebook posts on your behalf; I’m talking about some REAL numbers.

And true to its name, Proof helps you do just that!


Adding proof to your landing pages helps you build visitors’ trust and create urgency – leading to increased conversion rates.

You can use Proof to add the following elements to your site and landing pages:

  • The total audience that recently took action on your site.
  • Live visitor count.
  • Recent activity (live feed of visitors on your site).

Finally, you can run A/B tests to determine the impact of these “proof elements” on conversion. You’ll be able to see your conversion analytics on their intuitive dashboard.

Proof also provides live visitor count notifications, hot streaks notifications, recent activity notifications, A/B testing, live chat support, and more.

Proof also allows you to personalize website text, images, and CTAs using ready-to-use templates, A/B testing, and data-driven reports.

You can further personalize customer experience based on visitors’ traits and behavior data.

 

Other key features include:

  • No-code visual editor.
  • Personalize web applications.
  • Drag and drop elements like top bars and CTAs to your site.
  • Flexible API.
  • Works with every website builder and single-page apps.
  • Personalized content appears in under 60 ms.

There’s also a 14-day free trial, allowing you to see how the software works before making the payment.

Cons:

  • Limited personalization features.

Pricing:

Starts from $66 per month, when billed annually for 10,000 unique visitors, unlimited domains, and unlimited notifications.

8. HubSpot

Best For: Medium and large-sized enterprises.

G2 Score: 4.4

HubSpot’s Marketing Hub has a large set of features for marketers to personalize their website, web elements, and email campaigns.

You can run email campaigns that are specifically personalized to each visitor, use segment targeting to get a more diverse audience, personalize your website elements, and more.

HubSpot’s core personalization features include:

  • The ability to send personalized, time-optimized email campaigns.
  • Triggering lead capture pop-up (including exit-intent) forms based on customer behavior.
  • Customize CTAs and other website elements based on each customer’s journey.

There’s a “Smart Content” feature that experiments with different versions of your content based on specific consumers’ devices, referral sources, and more. For example, you could create variations for customers coming from different referral sources or devices.

In addition, HubSpot also provides marketing automation features and ready-to-use workflows to nurture and score leads, personalize email campaigns, automate cross-functional operations, and more.

Other key features include:

  • Account-based marketing.
  • The ability to run A/B tests.
  • SEO-optimized web pages and blog posts.
  • Campaign management tools.
  • Event-based segmentation.
  • Landing page builder and mobile-optimized templates.
  • The ability to track your performance after personalization with built-in analytics and custom reporting.

Cons:

  • The knowledge base should be more extensive.

Pricing:

Pricing plans start from $45 per month for up to 1,000 marketing contacts.

9. Salesforce Interaction Studio (formerly Evergage)

Best For: Mid-sized enterprises.

G2 Score: 4.3

Interaction Studio (formerly Evergage) is a Salesforce product that provides real-time personalization and interaction management.

The tool helps you extract pertinent data on your customers and then use AI to deliver a personalized customer experience. It enables AI-driven optimization, cross-channel engagement, A/B testing, and analysis.

Once you have customer data, the tool automatically categorizes all products and content based on machine-learning recommendations. It segments data based on referring source, geo-location, weather, company, industry, and more.

Once you understand the business context, it recommends the most relevant products and content based on your customers’ characteristics and preferences.

It’s also an omnichannel personalization platform and helps you guide customers along the optimum journey. Evergage guides each customer along the most appropriate path, triggering interactions where they are or in the channel they prefer, including owned, social, and paid media.

And not just that: it also helps you connect your customers’ digital and offline behavior. Salesforce’s Interaction Studio assists with interactions through call center agents, in-store associates, or at kiosks and ATMs.

Other key features include:

  • Real-time customer segmentation.
  • Gauge customer behavior and trigger personalized messages via mobile app.
  • A/B test algorithms and optimize experiences.
  • Track metrics like sign-ups, purchases, downloads, and more
  • Predict future customer behavior using data collected in a rich data warehouse environment.

Cons:

  • The platform is robust. However, it can be challenging to grasp all the information at once.
  • The user interface should also be more modern and easier to use.

Pricing:

Quote-based.

10. Unbounce

Best For: Individual users, small businesses, and mid-sized businesses.

G2 Score: 4.4

Unbounce is a landing page builder that helps you create personalized, high-converting marketing campaigns without the need for a developer.

It offers various features to help you optimize and personalize your content and website.

For one, Smart Builder extracts customer data from over 1.5 billion conversions, allowing you to identify what layout, content, and headlines will help you convert your target audience.

The Smart Copy feature is an AI writing tool that can create content within minutes customized with your brand and target audience in mind.

Then there’s the Smart Traffic feature that identifies customer behavior and directs each visitor to the landing pages most likely to convert them.

Additionally, it lets you run A/B tests, integrate with your favorite CRM, and automate your follow-up emails using Unbounce’s easy drag-and-drop interface.

Cons:

  • You might need a little bit of HTML and CSS knowledge.

Pricing:

Starts from $90 per month for up to 20,000 visitors and 500 conversions. There’s also a 14-day free trial.

11. Instapage

Best For: Freelancers, marketers, and small to mid-sized businesses.

G2 Score: 4.4

Just like Unbounce, Instapage is a drag and drop website and landing page builder that lets you create personalized website pages.

The platform is ideal for people who don’t have time to build landing pages from scratch, because it allows you to design professional pages (and squeeze pages) in minutes.

When it comes to personalizing landing pages to cater to your audience’s requirements, Instapage enables A/B testing and dynamic content. It dynamically directs potential customers to a relevant landing page for each ad. The tool aligns the landing page elements based on visitor-level data like keywords, firmographics, and demographics.

Other prominent features include ad mapping, detailed analytics, experimentation, and more.

Cons:

  • Limited personalization features.
  • Not sufficient for creating a website with multiple pages.

Pricing:

Starts at $199 per year with no conversion limits.

What Features to Look For in Good Personalization Software?

Every marketer knows the value of good personalization software. Personalized, relevant content engages visitors and gives you a leg up on the competition. But what do you look for in good personalization software?

A few features stand out and make a good tool easier to identify.

  • Utilization of AI and Machine Learning – How do you ensure your platform can personalize at scale? That’s where AI and machine learning come in. They’re fancy terms that essentially mean “it helps you figure out your target market better,” which is really what it all boils down to.
  • A/B Testing – A/B testing is something that will help you further personalize your site and increase your conversion rate.
  • The Ability to Collect User Data – Your app should have the ability to collect customer data so that you can understand what your customers are interested in, whether they are prone to purchasing, their future plans, etc. It will help you personalize customer service and enhance your operations.
  • Customer Segmentation – Your personalization app should be able to segment and target your audience based on their preferences, demographics, location, behavior, and more.

For example, if you sell mp3 players and accessories, the software should segment your market into teenagers, young adults, and older listeners; buyers of new-generation versus old-generation players; or those who buy accessories and those who don’t (a toy segmentation sketch follows this list). Understanding each of your most important customer types is valuable to you as a retailer, because once you know who they are, you can tailor your business to suit them.

  • Built-in Editor – The editor will help you easily make changes to personalize your site, landing pages, ad campaigns, and more.
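
To make the segmentation idea above concrete, here’s a minimal, hypothetical sketch of rule-based segment assignment. The segment names and rules are invented for illustration and aren’t taken from any specific tool; real platforms combine far more signals (and often machine learning).

```python
def assign_segment(visitor: dict) -> str:
    """Toy rule-based segmentation -- real personalization tools use many more signals."""
    if visitor.get("is_returning") and visitor.get("past_purchases", 0) > 0:
        return "repeat buyer"
    if visitor.get("referrer") == "paid_ad":
        return "paid prospect"
    if visitor.get("age", 99) < 20:
        return "teen browser"
    return "new visitor"

# Example: a returning customer with two past purchases.
print(assign_segment({"is_returning": True, "past_purchases": 2}))  # -> repeat buyer
```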

That’s a Wrap!

And that was my list of the 11 best personalization software that can help you boost your sales and conversion rates.

Personalization is crucial because today’s customers are used to having what they want. They are even more selective about the brands they buy from…they want something that has meaning for them.

And that’s where personalization apps enter the picture.

However, the personalization app you’ll pick should depend on your requirements.

For example, if you want to run A/B tests and personalize your web pages yourself, you might prefer Optimizely or VWO. To create personalized landing pages with dynamic content, pick either Unbounce or Instapage.

Review the aforementioned personalization solutions carefully and pick one that aligns with your requirements.

What’s the Ideal A/B Testing Strategy? (https://www.alexbirkett.com/ab-testing-strategy/)

A/B testing is, at this point, widespread and common practice.

Whether you’re a product manager hoping to quantify the impact of new features (and avoid the risk of negatively impacting growth metrics) or a marketer hoping to optimize a landing page or newsletter subject line, experimentation is the tried-and-true gold standard.

It’s not only incredibly fun, but it’s useful and efficient.

In the span of 2-4 weeks, you can try out an entirely new experience and approximate its impact. This, in and of itself, should allow creativity and innovation to flourish, while simultaneously capping the downside of shipping suboptimal experiences.

But even if we all agree on the value of experimentation, there’s a ton of debate and open questions as to how to run A/B tests.

A/B Testing is Not One Size Fits All

One set of open questions about A/B testing strategy is decidedly technical:

  • Which metric matters? Do you track multiple metrics, one metric, or build a composite metric?
  • How do you properly log and access data to analyze experiments?
  • Should you build your own custom experimentation platform or buy from a software vendor?
  • Do you run one-tailed or two-tailed t-tests, Bayesian A/B testing, or something else entirely (sequential testing, bandit testing, etc.)? [1]

The other set of questions, however, is more strategic:

  • What kind of things should I test?
  • What order should I prioritize my test ideas?
  • What goes into a proper experiment hypothesis?
  • How frequently should I test, or how many tests should I run?
  • Where do we get ideas for A/B tests?
  • How many variants should you run in a single experiment?

These are difficult questions.

It could be the case that there’s a single, universal answer to each of these questions, but I personally doubt it. Rather, I think the answers differ based on several factors: the culture of the company you work at, the size and scale of your digital properties, your traffic and testing capabilities, your tolerance for risk and reward, and your philosophy on testing and ideation.

So this article, instead, will cover the various answers for how you could construct an A/B testing strategy — an approach at the program level — to drive consistent results for your organization.

I’m going to break this into two macro-sections:

  1. Core A/B testing strategy assumptions
  2. The three levers that impact A/B testing strategy success on a program level.

Here are the sections I’ll cover with regard to assumptions and a priori beliefs:

  1. A/B testing is inherently strategic (or, what’s the purpose of A/B testing anyway?)
  2. A/B testing always has costs
  3. The value and predictability of A/B testing ideas

Then I’ll cover the three factors that you can impact to drive better or worse results programmatically:

  1. Number of tests run
  2. Win rate
  3. Average win size per winning test

At the end of this article, you should have a good idea — based on your core beliefs and assumptions as well as the reality of your context — as to which strategic approach you should take with experimentation.

A/B Testing is Inherently Strategic

A/B testing is strategic in and of itself; by running A/B tests, you’re implicitly deciding that an aspect of your strategy is to spend the additional time and resources to reduce uncertainty in your decision making. A significance test is itself an exercise in quantifying uncertainty.


This is a choice.

One does not need to validate features as they’re shipped or copy as it’s written. Neither do you need to validate changes as you optimize a landing page; you can simply change the button color and move on, if you’d like.

So, A/B testing isn’t a ‘tactic,’ as many people would suggest. A/B testing is a research methodology at heart – a tool in the toolkit – but by utilizing that tool, you’re making a strategic decision that data will decide, to a large extent, what actions you’ll take on your product, website, or messaging (as opposed to opinion or other methodologies like time series comparison).

How you choose to employ this tool, however, is another strategic matter.

For instance, you don’t have to test everything (but you can test everything, as well).

Typically, there are some decision criteria as to what we test, how often we test, and how we run tests.

This can be illustrated by a risk quadrant I made, where low risk and low certainty decisions can be decided with a coin flip, but higher risk decisions that require higher certainty are great candidates for A/B tests.

Even with A/B testing, though, you’ll never achieve 100% certainty on a given decision.

This is due to many factors, including experiment design (there’s functionally no such thing as 100% statistical confidence) but also things like perishability and how representative your test population is.

For example, macro-economic changes could alter your audience behavior, rendering a “winning” A/B test now a loser in the near future.

A/B testing Always Has Associated Costs

There ain’t no such thing as free lunch.

On the surface, you have to invest in the A/B testing technology or at least the human resources to set up an experiment. So you have fixed and visible costs already with technology and talent. An A/B test isn’t going to run itself.

You’ve also got time costs.

An A/B test typically takes 2-4 weeks to run. The period that you’re running that test is a time period in which you’re not ‘exploiting’ the optimal experience. Therefore, you incur ‘regret,’ or the “difference between your actual payoff and the payoff you would have collected had you played the optimal (best) options at every opportunity.”
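
As a rough, hypothetical illustration of that regret cost: if the variant really is better, every visitor who sees the inferior experience during the test represents value partially left on the table.

```python
# Hypothetical figures for a 50/50 split test (not benchmarks).
visitors_during_test = 20_000
control_rate = 0.030        # conversion rate of the existing experience
variant_rate = 0.036        # true (unknown) rate of the better variant
value_per_conversion = 50   # dollars per conversion

# Half the traffic sees the inferior option while the test runs.
lost_conversions = (visitors_during_test / 2) * (variant_rate - control_rate)
regret = lost_conversions * value_per_conversion
print(f"Approximate regret incurred during the test: ${regret:,.0f}")  # ~$3,000
```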


This is related to but still distinct from another cost: opportunity costs.


The time you spent setting up, running, and analyzing an experiment could be spent doing something else. This is especially important and impactful at the startup stage, when ruthless prioritization is the difference between a sinking ship and another year above water.

An A/B test also usually has a run up period of user research that leads to a test hypothesis. This could include digital analytics analysis, on-site polls using Qualaroo, heatmap analysis, session replay video, or user tests (including Copytesting). This research takes time, too.

The expected value of an A/B test is the expected value of its profit minus the expected value of its cost (and remember, expected value is calculated by multiplying each of the possible outcomes by the likelihood each outcome will occur and then summing all of those values).


If the expected value of an A/B test isn’t positive, it’s not worth running it.

For example, if the average A/B test costs $1,000 and the average expected value of an A/B test is $500, it’s not economically feasible to run the test. Therefore, you can reduce the costs of the experiment, or you can hope to increase the win rate or the average uplift per win to tip the scales in your favor.
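
Here’s a minimal sketch of that expected-value arithmetic; the win rate, payoff, and cost figures below are hypothetical placeholders you’d replace with your own program data.

```python
# Hypothetical program-level averages.
win_rate = 0.20              # share of tests that produce a winner
avg_payoff_per_win = 8_000   # incremental value of an average winning test ($)
cost_per_test = 1_000        # research, setup, run time, and analysis ($)

# Expected value = sum of (probability * payoff) across outcomes, minus cost.
expected_value = win_rate * avg_payoff_per_win - cost_per_test
print(f"Expected value per test: ${expected_value:,.0f}")  # $600 at these assumptions
# If this goes negative, reduce the cost per test or improve the win rate
# and average uplift, as described above.
```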

A/B testing is a tool used to reduce uncertainty in decision making. User research is a tool used to reduce uncertainty in what you test with the hope that what you test has a higher likelihood of winning and winning big. Therefore, you want to know the marginal value of additional information collected (which is a cost) and know when to stop collecting additional information as you hit the point of diminishing returns. Too much cost outweighs the value of A/B testing as a decision making tool.

This leads to the last open question: can we predict which ideas are more likely to win?

What Leads to Better A/B Testing Ideas

It’s common practice to prioritize A/B tests. After all, you can’t run them all at once.

Prioritization usually falls on a few dimensions: impact, ease, confidence, or some variation of these factors.

  • Impact is quantitative. You can figure out based on the traffic to a given page, or the number of users that will be affected by a test, what the impact may be.
  • Ease is also fairly objective. There’s some estimation involved, but with some experience you can estimate the cost of setting up a test in terms of complexity, design and development resources, and the time it will take to run.
  • Confidence (or “potential” in the PIE model) is subjective. It takes into account the predictive capabilities of the individual proposing the test. “How likely is it that this test will win in comparison to other ideas,” you’re asking. (A simple scoring sketch follows this list.)
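
Here’s the simple scoring sketch mentioned above; the ideas and 1–10 scores are made up for illustration, and teams weight (or multiply) the dimensions differently in practice.

```python
# Hypothetical backlog, each idea scored 1-10 on the three dimensions.
ideas = [
    {"name": "Rewrite pricing-page headline", "impact": 8, "confidence": 5, "ease": 9},
    {"name": "Add social proof to signup",    "impact": 6, "confidence": 7, "ease": 8},
    {"name": "Redesign onboarding flow",      "impact": 9, "confidence": 4, "ease": 3},
]

def ice_score(idea: dict) -> float:
    # Simple average; the PIE model and other variants weight the factors differently.
    return (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{idea['name']}: {ice_score(idea):.1f}")
```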

How does one develop the fingerspitzengefühl to reliably predict winners? Depends on your belief system, but some common methods include:

  • Bespoke research and rational evidence
  • Patterns, competitor examples, historical data (also rational evidence)
  • Gut feel and experience

In the first method, you conduct research and analyze data to come up with hypotheses based on evidence you’ve collected. Forms of data collection tend to be from user testing, digital analytics, session replays, polls, surveys, or customer interviews.


Patterns, historical data, and inspiration from competitors are also forms of evidence collection, but they don’t presuppose original research is superior to meta-data collected from other websites or from historical data.

Here, you can group tests of similar theme or with similar hypotheses, aggregate and analyze their likelihood of success, and prioritize tests based on confidence using meta-analyses.


For example, you could group a dozen tests you’ve run on your own site in the past year having to do with “social proof” (for example, adding micro-copy that says “trusted by 10,000 happy customers”).

You could include data from competitors or from an experiment pattern aggregator like GoodUI. Strong positive patterns could suggest that, despite differences in context, the underlying idea or theme is strong enough to warrant prioritizing this test above others with weaker pattern-based evidence.
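
A minimal sketch of that kind of meta-analysis, grouping past experiments by theme and comparing historical win rates, might look like the following; the archive of past results here is invented for illustration.

```python
from collections import defaultdict

# Hypothetical archive of past tests: (theme, did it win?)
past_tests = [
    ("social proof", True), ("social proof", False), ("social proof", True),
    ("urgency", False), ("urgency", False),
    ("form length", True), ("form length", False),
]

wins, totals = defaultdict(int), defaultdict(int)
for theme, won in past_tests:
    totals[theme] += 1
    wins[theme] += int(won)

# Themes with stronger historical win rates earn higher confidence scores.
for theme in totals:
    print(f"{theme}: {wins[theme]}/{totals[theme]} wins ({wins[theme] / totals[theme]:.0%})")
```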

Patterns can also include what we call “best practices.” While we may not always quantify these practices through meta-analyses like GoodUI does, there are indeed many common practices that have been developed by UX experts and optimizers over time. [2]

Finally, some believe that you simply develop an eye for what works and what doesn’t through experience. After years of running tests, you can spot a good idea from a bad.

As much as I’m trying to objectively lay out the various belief systems and strategies, I have to tell you, I think the last method is silly.

As Matt Gershoff put it, predicting outcomes is basically a random process, so those who end up being ‘very good’ at forecasting are probably outliers or examples of survivorship bias (the same phenomenon Nassim Taleb covers in Fooled by Randomness with regard to stock pickers).

Mats Einarsen adds that this will reward cynicism: most tests don’t win, so one can always improve prediction accuracy by being a curmudgeon.

It’s also possible to believe that additional information or research does not improve your chance of setting up a winning A/B test, or at least not enough to warrant the additional cost in collecting it.

In this world of epistemic humility, prioritizing your tests based on the confidence you have in them doesn’t make any sense. Ideas are fungible, and anyway, you’d rather be surprised by a test you didn’t think would win than to validate your pre-conceived notions.

In this world, we can imagine ideas being somewhat random and evenly distributed, some winning big and some losing big, but most doing nothing at all.

This view has backing in various fields. Take, for instance, this example from The Mating Mind by Geoffrey Miller (bolding mine):

“Psychologist Dean Keith Simonton found a strong relationship between creative achievement and productive energy. Among competent professionals in any field, there appears to be a fairly constant probability of success in any given endeavor. Simonton’s data show that excellent composers do not produce a higher proportion of excellent music than good composers — they simply produce a higher total number of works. People who achieve extreme success in any creative field are almost always extremely prolific. Hans Eysenck became a famous psychologist not because all of his papers were excellent, but because he wrote over a hundred books and a thousand papers, and some of them happened to be excellent. Those who write only ten papers are much less likely to strike gold with any of them. Likewise with Picasso: if you paint 14,000 paintings in your lifetime, some of them are likely to be pretty good, even if most are mediocre. Simonton’s results are surprising. The constant probability-of-success idea sounds very counterintuitive, and of course there are exceptions to this generalization. Yet Simonton’s data on creative achievement are the most comprehensive ever collected, and in every domain that he studied, creative achievement was a good indicator of the energy, time, and motivation invested in creative activity.”

So instead of trying to predict the winners before you run the test, you throw out the notion that that’s even possible, and you just try to run more options and get creative in the options you’ll run.

As I’ll discuss in the “A/B testing frequency” section, this accords with something like Andrew Anderson’s “Discipline Based Testing Methodology,” but also with what I call the “Evolutionary Tinkering” strategy. [3]

Either you can try to eliminate or crowd out lower probability ideas, which implies you believe you can predict with a high degree of accuracy the outcome of a test.

Or you can iterate more frequently or run more options, essentially increasing the probability that you will find the winning variants.

Summary on A/B testing Strategy Assumptions

How you deal with uncertainty is one factor that could alter your A/B testing strategy. Another one is how you think about costs vs rewards. Finally, how you determine the quality and predictability of ideas is another factor that could alter your approach to A/B testing.

As we walk through various A/B testing strategies, keep these things in mind:

  • Attitudes and beliefs about information and certainty
  • Attitudes and beliefs about predictive validity and quality of ideas
  • Attitudes about costs vs rewards and expected value, as well as quantitative limitations on how many tests you can run and detectable effect sizes.

These factors will change one or both of the following:

  • What you choose to A/B test
  • How you run your A/B tests, singularly and at a program level

What Are the Goals of A/B Testing?

One’s goals in running A/B tests can differ slightly, but they all tend to fall under one or multiple of these buckets:

  1. Increase/improve a business metric
  2. Risk management/cap downside of implementations
  3. Learn things about your audience/research

Of course, running an A/B test will naturally accomplish all of these goals. Typically, though, you’ll be more interested in one than the others.

For example, you hear a lot of talk around this idea that “learning is the real goal of A/B testing.” This is probably true in academia, but in business that’s basically total bullshit.

You may, periodically, run an A/B test solely to learn something about your audience, though this is typically done with the assumption that the learning will help you either grow a business metric or cap risk later on.

Most A/B tests in a business context wouldn’t be run if there weren’t the underlying goal of improving some aspect of your business. No ROI expectation, no buy-in and resources.

Therefore, there’s not really an “earn vs learn” dichotomy (with the potential exclusion of algorithmic approaches like bandits or evolutionary algorithms); every test you run, you’ll learn something, but more importantly, the primary goal is to add business value.

So if we assume that our goals are either improvement or capping the downside, then we can use these goals to map onto different strategic approaches to experimentation.

The Three Levers of A/B Testing Strategy Success

Most companies want to improve business metrics.

Now, the question becomes, “what aspects of A/B testing can we control to maximize the business outcome we hope to improve?” Three things (a quick sketch of how they combine follows the list):

  1. The number of tests (or variants) you run (aka frequency)
  2. The % of winning tests (aka win rate)
  3. The effect size of winning tests (aka effect size)
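
Here’s that quick sketch of how the three levers combine; the figures are hypothetical, and the point is simply that program value is roughly multiplicative in all three.

```python
# Hypothetical program-level inputs -- adjust to your own context.
tests_per_year = 40
win_rate = 0.20              # share of tests that produce a significant winner
avg_value_per_win = 25_000   # incremental annual value of a winning test ($)

program_value = tests_per_year * win_rate * avg_value_per_win
print(f"Rough expected annual value of the program: ${program_value:,.0f}")  # $200,000
# Doubling any single lever -- more tests, a higher win rate, or bigger wins --
# roughly doubles the expected value, which is why each gets its own section below.
```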

1. A/B testing frequency – Number of Variants

The number of variants you test could be the number of A/B tests you run or the number of variants within an A/B/n test – and there’s debate between the two approaches here – but the goal of either is to maximize the number of “at bats,” or attempts at success.

This can be for two reasons.

First, to cap the downside and manage risk at scale, you should test everything you possibly can. No feature or experience should hit production without first making sure it doesn’t worsen your business metrics. This is common in large companies with mature experimentation programs, such as booking.com, Airbnb, Facebook, or Microsoft.

Second, tinkering and innovation requires a lot of attempts. The more attempts you make, the greater the chance for success. This is particularly true if you believe ideas are fungible — i.e. any given idea is not special or more likely than any other to move the needle. My above quote from Geoffrey Miller’s “The Mating Mind” illustrated why this is the case.


Another reason for this approach: a shitload of studies (the appropriate scientific word for “a large quantity”) have shown that most A/B tests are inconclusive, and the few wins tend to pay for the program as a whole, not unlike venture capital portfolios.

Take, for example, this histogram that Experiment Engine (since acquired by Optimizely) put out several years ago.


Most tests hover right around that 0% mark.

Now, it may be the case that all of these tests were run by idiots and you, as an expert optimizer, could do much better.

Perhaps.

But this sentiment is replicated by both data and experience.

Take, for example, VWO’s research that found 1 out of 7 tests are winners. A 2009 paper pegged Microsoft’s win rate at about 1 out of 3. And in 2017, Ronny Kohavi wrote:

“At Google and Bing, only about 10% to 20% of experiments generate positive results. At Microsoft as a whole, one-third prove effective, one-third have neutral results, and one-third have negative results.”

I’ve also seen a good amount of research suggesting that the wins we do see are often illusory: false positives due to improper experiment design or simply lacking external validity. That’s another issue entirely, though.

Perhaps your win rate will be different. For example, if your website has been neglected for years, you can likely get many quick wins using patterns, common sense, heuristics, and some conversion research. Things get harder when your digital experience is already good, though.

If we’re to believe that most ideas are essentially ineffective, then it’s natural to want to run more experiments. This increases your chance of big wins simply due to more exposure. This is a quote from Nassim Taleb’s Antifragile (bolding mine):

“Payoffs from research are from Extremistan; they follow a power-law type of statistical distribution, with big, near-unlimited upside but, because of optionality, limited downside. Consequently, payoff from research should necessarily be linear to number of trials, not total funds involved in the trials. Since the winner will have an explosive payoff, uncapped, the right approach requires a certain style of blind funding. It means the right policy would be what is called ‘one divided by n’ or ‘1/N’ style, spreading attempts in as large a number of trials as possible: if you face n options, invest in all of them in equal amounts. Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: “The payoff can be so large that you can’t afford not to be in everything.”

Maximizing the number of experiments run also deemphasizes ruthless prioritization based on subjective ‘confidence’ in hypotheses (though not entirely) and instead seeks to cheapen the cost of experimentation and enable a broader swath of employees to run experiments.

The number of variants you test is capped by the amount of traffic you have, your resources, and your willingness to try out and source ideas. These limitations can be represented by testing capacity, velocity, and coverage.
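
One way to sanity-check testing capacity is a back-of-the-envelope sample-size estimate per variant. The sketch below uses the common rule-of-thumb approximation of roughly 16·p·(1−p)/δ² visitors per variant (for about 80% power at a 5% significance level); the traffic and baseline numbers are hypothetical.

```python
# Rough capacity check (rule-of-thumb approximation, hypothetical inputs).
baseline_rate = 0.04           # current conversion rate on the tested page
min_relative_lift = 0.10       # smallest lift worth detecting (10% relative)
monthly_visitors = 200_000     # traffic available for experiments

delta = baseline_rate * min_relative_lift                    # absolute effect size
n_per_variant = 16 * baseline_rate * (1 - baseline_rate) / delta ** 2

print(f"Visitors needed per variant: {n_per_variant:,.0f}")                 # ~38,400
print(f"Variant slots per month: {monthly_visitors / n_per_variant:.1f}")   # ~5
```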


Claire Vo, one of the sharpest minds in experimentation and optimization, gave a brilliant talk on this at CXL Live a few years ago.

2. A/B testing win rate

The quality of your tests matters, too. Doesn’t matter if you run 10,000 tests in a year if none of them move the needle.

While many people may think running a high tempo testing program is diametrically opposed to test quality, I don’t think that’s necessarily the case. All you need is to make sure your testing is efficient, your data is trustworthy, and you’re focusing on the impactful areas of your product, marketing, or website.

Still, if you’re focused on improving your win rate (and you believe you can predict the quality of ideas or improve the likelihood of success), it’s likely you’ll run fewer tests and place a higher emphasis on research and crafting “better” tests.

As I mentioned above, there are two general ways that optimizers try to increase their win rate: research and meta-analysis patterns.

Conversion research

Research includes both quantitative and qualitative research – surveys, heat maps, user tests and Google Analytics. One gathers enough data to diagnose what is wrong and potentially some data to build hypotheses as to why it is wrong.

See the “ResearchXL model” as well as most CRO agencies’ and in-house programs’ approach. This approach is what I’ll call the “Doctor’s Office Strategy.” Before you begin operating on a patient at random, you first want to take the time to diagnose what’s wrong with them.

Patterns, best practices, and observations

Patterns are another source of data.

You can find experiences that have been shown to work in other contexts and infer transferability onto your situation. Jakub Linowski, who runs GoodUI, is an advocate of this approach:

“There are thousands and thousands of experiments being run and if we just pay attention to all that kind of information and all those experiments, there’s most likely some things that repeat over and over that reproduced are largely generalizable. And those patterns I think are very interesting for reuse and exploitation across projects.”

Other patterns can be more qualitative. One can read behavioral psychology studies, Cialdini’s Influence, or just look at other companies’ websites, take what they seem to be doing, and try it on your own site.

Both the research and the patterns approach have this in common: they inherently assume that a certain quality and quantity of collected information can lead to better experiment win rates.

Additionally, the underlying ‘why’ of a test (sometimes called the ‘hypothesis’) is very important in these strategies. By contrast, in something like the Discipline-Based Testing Methodology, the narrative or the “why” doesn’t matter, only that the test is efficient and makes money. [4] [4.5]

3. Effect Size of A/B testing Wins

Finally, the last input is the effect size of a winning test. Patterns and research may help predict if a test will win, but not by how much.

This input, then, typically involves the most surprise and serendipity. It still requires that you diagnose the areas of exposure that have the highest potential for impact (e.g. running a test on a page with 1000 visitors is worse than running a test on a page with 1,000,000).

Searching for big wins also requires a bit of “irrational” behavior. As Rory Sutherland says, “Test counterintuitive things because no one else will!” [5]

The mark of a team working to increase the magnitude of a win is a willingness to try out wacky, outside-the-box, creative ideas. Not only do you want more “at bats” (thus exposing yourself to more potential positive black swans), but you want to increase the beta of your options, or the diversity and range of feasible options you test. This is sometimes referred to as “innovative testing” vs. incremental testing. To continue the baseball analogy, you’re seeking home runs, not just grounders to get on base.

All of us want bigger wins as well as a greater win rate. How we go about accomplishing those things, though, differs.

CXL’s ResearchXL model seeks to maximize the likelihood of a winning test through understanding the users. Through research, one can hone in on high impact UX bottlenecks and issues with the website, and use further research to ideate treatments.

Andrew Anderson’s Discipline Based Testing Methodology also diagnoses high-impact areas of the property, likely through quantitative ceilings. This approach, though, ‘deconstructs’ the proposed treatments. Instead of relying on research or singular experiences, it starts from the assumption that we don’t know what will work and that, in fact, being wrong is the best possible thing that can happen. As Andrew wrote:

“The key thing to think about as you build and design tests is that you are maximizing the beta (range of feasible options) and not the delta. It is meaningless what you think will win, it is only important that something wins. The quality of any one experience is meaningless to the system as a whole.

This means that the more things you can feasibly test while maximizing resources, and the larger the range you test, the more likely you are to get a winner and more likely to get a greater outcome. It is never about a specific test idea, it is about constructing every effort (test) to maximize the discovery of information.”

In this approach, then, you don’t just want to run more A/B tests; you want to run the maximum number of variants possible, including some that are potentially “irrational.” One can only hope that Comic Sans wins a font test, because we can earn money from the surprise.

Reducing the Cost of Experimentation Increases Expected Value, Always

To summarize, you can increase the value from your testing program in two ways: lower the cost, or increase the upside.

Many different strategies exist to increase the upside, but all cost reduction strategies look similar:

  • Invest in accessible technology
  • Make sure your data is accessible and trustworthy
  • Train employees on experimentation and democratize the ability to run experiments

The emphasis here isn’t primarily on predicting wins or win rate; rather, it’s on reducing the cost, organizationally and technically, of running experiments.

Sophisticated companies with a data-driven culture usually have internal tools, data pipelines, and center-of-excellence programs that encourage, enable, and educate others to run their own experiments (think Microsoft, Airbnb, or booking.com).

When you seek to lower the cost of experimentation and run many attempts, I call that the “Evolutionary Tinkering Strategy.”

No single A/B test will make or break you, but the process of testing a ton of things will increase the value of the program over time and, more importantly, will let you avoid shipping bad experiences.

This is different than the Doctor’s Office Strategy for two reasons: goals and resources.

Companies employing the Doctor’s Office Strategy are almost always seeking to improve business metrics, and they almost always have a very real upper limit on traffic. Therefore, it’s crucial to avoid wasting time and traffic testing “stupid” ideas (I use quotes because “stupid” ideas may end up paying off big, but it’s usually a surprise if so).  [5]

The “get bigger wins” strategy is often employed due to both technical constraints (limited statistical power to detect smaller wins) and opportunity costs (small wins not worth it from a business perspective).

Thus, I’ll call this the “Growth Home Run Strategy.”

We’re not trying to avoid a strikeout; we’re trying to hit a home run. Startups and growth teams often operate like this because they have limited customer data to do conversion research, patterns and best practices tend to be implemented directly rather than tested, and opportunity costs mean you want to spend your time making bigger changes and seeking bigger results.

This approach is usually decentralized and a bit messier. Ideas can come from anywhere — competitors, psychological studies, research, other teams, strikes of shower inspiration, etc. With greater scale, this strategy usually evolves into the Evolutionary Tinkering Strategy as the company becomes more risk averse as well as capable of experimenting more frequently and broadly.

Conclusion

This was a long article covering all the various approaches I’ve come across from my time working in experimentation. But at the end of the journey, you may be wondering, “Great, but what strategy does Alex believe in?”

It’s a good question.

For one, I believe we should be more pragmatic and less dogmatic. Good strategists know the rules but are also fluid. I’m willing to apply the right strategy for the right situation.

In an ideal world, I’m inclined towards Andrew Anderson’s Discipline-Based Testing Methodology. This would assume I have the traffic and political buy-in to run a program like that.

I’m also partial to strategies that democratize experimentation, especially at large companies with large testing capacity. I see no value in gatekeeping experimentation to a single team or to a set of approved ideas that “make sense.” You’re leaving a lot of money on the table if you always want to be right.

If I’m working with a new client or an average eCommerce website, I’m almost always going to employ the ResearchXL model. Why? I want to learn about the client’s business, the users, and I want to find the best possible areas to test and optimize.

However, I would also never throw away best practices, patterns, or even ideas from competitors. I’ve frustratingly sat through hours of session replays, qualitative polls, and heat maps, only to have “dumb” ideas I stole from other websites win big.

My ethos: experimentation is the lifeblood of a data-driven organization, being wrong should be celebrated, and I don’t care why something won or where the idea came from. I’m a pragmatist and just generally an experimentation enthusiast.

Notes

[1]

How to run an A/B test is a subject for a different article (or several, which I’ve written about in the past for CXL and will link to in this paragraph). I’ve touched on a few variations here, including the question of whether you should run many subsequent tests or one single A/B/n test with as many variants as possible. Other technical test methodologies alter the accepted levels of risk and uncertainty. Such differences include one-tail vs two-tail testing, multivariate vs A/B tests, bandit algorithms or evolutionary algorithms, or flexible stopping rules like sequential testing. Again, I’m speaking to the strategic aspects of experimentation here, less so to technical differences. Though, they do relate.

[2]

Best practices are either championed or derided, but something being considered a “best practice” is just one more data input you can use to choose whether or not to test something and how to prioritize it. As Justin Rondeau put it, a “best practice” is usually just a “common practice,” and there’s nothing wrong with trying to match customers’ expectations. In the early stages of an optimization program, you can likely build a whole backlog off of best practices, which some call low hanging fruit. However, if something is so obviously broken that fixing it introduces almost zero risk, then many would opt to skip the test and just implement the change. This is especially true of companies with limited traffic, and thus, higher opportunity costs.

[3]

This isn’t precisely true. Andrew’s framework explicitly derides “number of tests” as an important input. He, instead, optimizes for efficiency and wraps up as many variants in a single experiment as is feasible. The reason I wrap these two approaches together is that, ideologically at least, they’re both trying to increase the “spread” of testable options. This is opposed to an approach that seeks to find the “correct” answer before running the test, and then only uses the test to “validate” that assumption.

[4]

Do you care why something won? I’d like to argue that you shouldn’t. In any given experiment, there’s a lot more noise than there is signal with regard to the underlying reasons for behavior change. A blue button could win against a red one because blue is a calming hue and reduces cortisol. It could also win because the context of the website is professional, and blue is prototypically associated with professional aesthetic. Or perhaps it’s because blue contrasts better with the background, and thus, is more salient. It could be because your audiences like the color blue better. More likely, no one knows or can ever know why blue beat red. Using a narrative to spell out the underlying reason is more likely to lead you astray, not to mention waste precious time storytelling. Tell yourself too many stories, and you’re liable to limit the extent of your creativity and the options you’re willing to test in the future. See: narrative fallacy.

[4.5]

Do we need to have an “evidence-based hypothesis”? I don’t think so. After reading Against Method, I’m quite convinced that the scientific method is much messier than we were all taught. We often stumble into discoveries by accident. Rory Sutherland, for instance, wrote about the discovery of aspirin:

“Scientific progress is not a one-way street. Aspirin, for instance, was known to work as an analgesic for decades before anyone knew how it worked. It was a discovery made by experience and only much later was it explained. If science didn’t allow for such lucky accidents, its record would be much poorer – imagine if we forbade the use of penicillin, because its discovery was not predicted in advance? Yet policy and business decisions are overwhelmingly based on a ‘reason first, discovery later’ methodology, which seems wasteful in the extreme.”

More germane to A/B testing, he summarized this as follows:

“Perhaps a plausible ‘why’ should not be a pre-requisite in deciding a ‘what,’ and the things we try should not be confined to those things whose future success we can most easily explain in retrospect.”

[5]

An Ode to “Dumb Ideas”

“To reach intelligent answers, you often need to ask really dumb questions.” – Rory Sutherland

Everyone should read Alchemy by Rory Sutherland. It will shake up your idea of where good ideas (and good science) come from.

Early in the book, Sutherland tells of a test he ran with four different envelopes used by a charity to solicit donations. Delivery was randomized across four sample groups of 100,000 each: one announced that the envelopes had been delivered by volunteers, one encouraged people to complete a form that would boost their donation with a 25% tax rebate, one used better-quality envelopes, and one used portrait-format envelopes. The only “rational” option of these was the 25% tax rebate, yet it reduced contributions by 30% compared to the plain control. The other three variants increased donations by over 10%.

As Sutherland summarized:

“To a logical person, there would have been no point in testing three of these variables, but they are the three that actually work. This is an important metaphor for the contents of this book: if we allow the world to be run by logical people, we will only discover logical things. But in real life, most things aren’t logical – they are psycho-logical.”

The post What’s the Ideal A/B Testing Strategy? appeared first on Alex Birkett.

Brand Awareness is Basically a Meaningless Metric. Here’s Why.


“Kmart has plenty of awareness, so what?”

-Purple Cow by Seth Godin

I hear the term “brand awareness” all the time, but to be honest, I don’t really know what it means.

On its surface, it’s somewhat obvious: it’s the number of people who know about your brand.

But that simple, stupid Google search definition doesn’t do it for me.

I have more questions:

  • Which people?
  • Is “brand awareness” a relative metric that you need to compare to others to make sense of, like IQ, or is it an absolute metric like average order value?
  • How do you determine their awareness? Is it based on recall, recognition, something else? What does “awareness” actually mean?

From a common sense perspective, of course brand awareness as a concept matters, if we’re defining it as “knowing about your brand” (though that seems properly circular to me – of course you need to “know” a brand before you buy it).

What perplexes me is the metric known as brand awareness. When it comes to measurement, I know what conversion rate means. I can explain how it is logged and what its significance is. Heck, I can even talk about bounce rates with enough clarity to know they don’t really matter.


If I’m putting in the effort to measure and analyze a metric (even taking action on it), then I want to get crystal clear on what the metric actually means, both as a construct and as a practical consideration for business decisions. “Brand awareness” suffers from a lack of clarity for me.

So is brand awareness one of those important things that isn’t measurable (these things do exist!)? Or is there an actual way to measure it, and it’s just very ambiguous how to do it?

And if we measure it, can we do anything with it, or is just a “nice to know” kind of metric?

I went down a rabbit hole, read a lot of academic papers (as well as shitty blog posts that rank well on Google [1]), and asked a lot of respectable marketers to try to figure it out.

What is Brand Awareness?

Brand awareness is a measure of a brand’s cognitive representation in a given category relative to its competitors.

That’s it. It’s measured by how many people, when asked, know your brand.

Now, you can measure this a few different ways (which we’ll get into).

You can measure it with aided recognition, asking people to pick which brands they’ve heard of from a list. You can measure it with unaided recall, asking them to name the brands they’ve heard of. Or you can pull brand awareness from passively available data on search, analytics, and social.

Brand awareness is not:

  • Brand equity
  • Brand preference
  • Brand loyalty
  • Corporate identity
  • Brand engagement

All of those things are distinct measures, though all normally fall roughly under the brand marketing department.

Of all the above, the two that are most commonly confused with brand awareness are brand equity and brand loyalty.

Brand equity is basically the value of your brand in relation to other brands in your space. It’s why people will pay a bunch of money for Apple products, and it’s why a posh wine will actually subjectively taste better to an unsuspecting wine drinker.

Brand loyalty is the tendency for customers to keep buying from a brand instead of switching to a competitor. I like to frame that one in the inverse, actually, and instead look at it as the unwillingness of a customer to switch to another brand despite attractive feature parity or pricing.

Mind share (or share of voice or consumer awareness) is a similar term to brand awareness. There are legitimately too many jargon-y terms in this space, so it’s no wonder so many people are confused about what it all means.

In any case, someone needs to be aware of your brand before it becomes valuable, and it needs to become valuable before they’re loyal to it.

Let’s talk about measuring that brand awareness.

How to Measure Brand Awareness

The way you measure this depends on your industry as well as your market.

If you’re P&G advertising general consumer packaged goods, brand recall surveys aren’t the worst thing in the world. [2]

If you’re selling live chat software, it’s probably a horrible way to measure brand awareness, unless your surveys are incredibly well-targeted as well as conducted longitudinally and in relation to a similar cohort of competitors (after all, the context is what matters with these surveys, not a generic metric).

In my opinion, how you measure brand awareness should have two main criteria:

  1. It should be sampled from those who are actually in your target market
  2. It should be actionable. Knowing your number should help you make better decisions.

I’ll outline all the ways people measure brand awareness below, and I’ll explain why most of them aren’t actually measuring what we think of as “brand awareness” at all.


The metrics below aren’t inherently good or bad, but the important thing is to have the discussion internally of what the metric actually means to your business before you conduct the campaign.

If everyone thinks it’s something different, it’s a bad metric, and if you’re post-hoc storytelling, then you’re causing collateral damage to your company’s culture.

Direct Traffic

First and foremost, if you primarily operate digitally (i.e. people come to your website to complete most important actions, rather than in-person), direct website traffic is a common indicator of brand awareness.

The logic behind this is pretty tight: the number of people who type in [yourwebsite.com] reflects the number of people who are “aware” of your brand (and they show that by remembering to type in your brand specifically).

It’s common for performance campaigns (e.g. paid ads) to have an effect on this direct traffic as well, which supports the idea that there are at least some partially untrackable second-order effects to many types of campaigns.

As an aggregate metric, this is a pretty good directional indicator of brand awareness.

(Acquisition > Channels > “Direct”)

The problem, however, lies with how direct traffic is attributed on most analytics platforms.

Essentially, Google Analytics attributes traffic that it can’t otherwise attribute to “direct.” This means your numbers are likely inflated, even more so if you aren’t properly tagging campaign links, especially via channels like email and potentially social as well.
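If untagged campaign links are inflating your “direct” bucket, the fix is mundane: tag them. Here’s a minimal sketch of appending standard UTM parameters to a campaign URL (the helper name and parameter values are illustrative):

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_campaign_url(url, source, medium, campaign):
    """Append UTM parameters so the visit gets attributed to the campaign
    instead of falling into the "direct" bucket."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({"utm_source": source, "utm_medium": medium, "utm_campaign": campaign})
    return urlunparse(parts._replace(query=urlencode(query)))

# Illustrative values only
print(tag_campaign_url("https://www.alexbirkett.com/brand-awareness/",
                       source="newsletter", medium="email", campaign="august-digest"))
```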

Similarly, you don’t necessarily know that everyone who comes to your site directly is a potential customer in your target market. They could be:

  • Employees (if you haven’t set up IP filters)
  • Competitors doing research
  • Current users coming to log in (though a higher number of these isn’t a bad thing either)
  • Random voyeurs checking out your site after some press

In essence, direct traffic is like a thumb in the wind or a weather vane, but it’s not a very precise metric (and you can’t do much with the number once you know it – it’s not an actionable metric).

Track direct traffic, but don’t worry too much about its brand awareness implication or ever use it as an argument to back up a failed campaign.

Branded Search Terms

Branded search volume is like direct traffic, except instead of tracking people who directly type in your URL, you’re tracking people who search for your branded terms in Google or other search engines.

E.g. it’s the difference between someone typing in alexbirkett.com and someone searching “Alex Birkett.”

In my opinion, branded search is a better way to measure brand awareness over time, specifically because there is less muddiness around attribution. You’re isolating people who are specifically searching for your brand, and you’re not including all kinds of different channels due to failed analytics attribution.

Still, it’s a very directional metric, and search volume data is often only an estimate, never truly granular. A good portion of brand searches can also be attributed to product users looking to sign in (if you’re in SaaS). Of course, seeing that number go up is never a bad thing – it’s just not tracking what you think of as top-of-the-funnel brand awareness.

I do find it to be a particularly good thermometer or gut check when doing competitive analysis though.

For instance, if you’re looking at the top email marketing software vendors, you can very quickly see how much search volume each brand gets.

At the very least, this is actionable in that it helps you determine which competitors are the biggest threat, how to position yourself in the market, and how much ground you have to cover to catch up to the biggest names in the space.

These numbers aren’t going to change in the course of a month or a quarter, and you might see a little wiggle over the course of a year. So as far as actionability goes, it’s best used as a periodic audit or marketing research tool.

If you’re tracking branded search for your own company, you can just use Google Search Console and get much more accurate numbers.

Brand Owned Terms

Often when you launch a campaign, it is with a new “brand owned term.”

This could be an advertising tagline, or it could be a new industry framework, the kind you frequently find in B2B.

For the former, think something like “Got Milk?” and for the latter, something like “Inbound Marketing” or “Skyscraper Technique.”

I like these because, rather than the amorphous concept of your entire brand or company, brand owned terms hone in on a specific idea or campaign.

My rule of thumb is that the narrower the scope of a metric, the more useful it is as a predictive or decision making tool.

You can’t do much with the idea that X number of people search “HubSpot,” but you can infer a lot about the effectiveness of an ad campaign if people start searching for the phrase, “grow better.”

The problem with brand owned terms is that, unlike your actual company name/brand, the volume for these phrases tends to be unnervingly low.

It’s like when social media managers pick a hashtag and start using it all the time, only to analyze several months later and discover that, lo and behold, barely anyone else used the hashtag.

We get trapped in our own bubbles as marketers, so we assume that everyone in the world is talking about “conversational marketing” or the “skyscraper technique.”

However, in the broader world, these campaigns tend not to make a big splash.

So unless your brand and your campaigns are truly mainstream, the amount of search data you’ll have on your brand owned terms will probably be very low, and highly variable, and thus very difficult to extract value from.

That doesn’t mean you shouldn’t track them; they correlate highly with brand search terms as well. And if you’re going to work on making an idea stick, you should track how often people use the term and search it organically.


Brand Recall/Recognition Surveys

Ironically, brand recall surveys are probably one of the only methods on this list that actually measure what we consider “brand awareness,” and they’re probably the least useful for 99%+ of brands.

Brand recall surveys are the tried-and-true method for the large, the consumer-facing, and the Fortune 500.

How do you determine whether you have a greater brand awareness than Pepsi if you’re Coca-Cola? Get a big enough sample of participants, and ask them which brands they know.


The problems with this method are many. For startups, the level of granularity required in your population sampling would be almost impossible to achieve. You can probably find a representative sample if you’re selling deodorant, but not if you’re selling sleep tracking rings or conversion optimization courses.

Alex McEachern put it well in a Smile.io article:

“Many articles out there focus on attempting to measure brand awareness, but I will save you the trouble and tell you that most methods aren’t actually measuring it all. They rely on surveying customers and anonymously asking if they can recall seeing your brand before, which is actually just brand recall.”

Keep in mind, also, that just because I can recall that Spectrum is indeed an internet service provider, doesn’t mean I pay them for their services (regrettably, I’m an AT&T customer). I can name Pepsi in under a second, but I haven’t had a Pepsi in probably a decade.

Social Media Mentions

A super popular method is to look at how many brand mentions you have on social media.

I hate this method.

First off, it’s very difficult to determine the relative importance of a given social media mention using only quantitative metrics. So what does it mean to say 5,000 people have mentioned your brand in the last month? The context could be quite different if you had launched a new product versus your CEO had a scandal.

There is, of course, sentiment analysis, which aims to quantify to some extent the sentiment, or emotional directive, of your social media engagement.

Not only does this have construct validity problems (we’re not quite sure what constitutes a positive or a negative sentiment, realistically), but it might have external validity problems, too (i.e. what people say on Twitter probably doesn’t have a ton of relation to what people say or do in the real world).

Also, who gives a shit if lots of people are talking about you on social media if you’re not selling products? I know that’s not the most academic or polite way to put it, but I’ve not seen concrete evidence that there is even a relationship between the two variables (social engagement and sales).

So the question here is, do you want to be rich or do you want to be famous? [3] If you want to run a business, measure business metrics. If you want to be an influencer, measure your social media mentions.

“Impressions”

This one is the worst. If you think “impressions” equal brand awareness, you’re wrong.

“Impressions” as a standalone metric are used to justify failed marketing campaigns.


As Daniel Hochuli wrote:

“Ask yourself this question – How much ‘brand awareness’ impact did the last post that you didn’t engage with, have on you?

Do you even remember the last paid post that appeared in your feed? My guess is not, but you can bet that the marketer behind it is telling their superior that you do remember and is counting your ‘impression’.”

Real Estate for Categorical Search Terms

As I’ve mentioned, my ideal brand awareness gauge is both narrow and actionable. By that I mean that your sample exclusively includes your target market and you can actually do something about the number you get.

This metric is something we came up with at HubSpot to track our brand awareness for core product search keywords. This method is most useful for companies using search as an acquisition channel, but I think it’s a good gauge of how the market views you anyway.

Take a transactional or comparison product keyword with some significant search volume, like “best form builder,” and see how many sites that rank for the term mention your brand.

Out of the top 20 websites that rank for “best form builder,” for example, HubSpot is mentioned on 5 of them (or 25% of them).

Note: this data is calculated using a homebrew tool I built with R and hosted with Shiny. Email me if you’re interested and I’ll show you how I built it.

You can easily manually get this data if you only have one or two keywords, or you could hack together a crawl of the top 20 pages using Screaming Frog and Excel.
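If you want to script that manual check, here’s a minimal Python sketch (not the R/Shiny tool mentioned above): it assumes you’ve already pulled the list of top-ranking URLs for your keyword from whatever SEO tool you use, and it simply counts how many of those pages mention the brand. The URLs and the `brand_mention_share` helper are illustrative.

```python
import requests

def brand_mention_share(urls, brand, timeout=10):
    """Return the share of ranking pages whose HTML mentions `brand` (case-insensitive)."""
    mentions, checked = 0, 0
    for url in urls:
        try:
            html = requests.get(url, timeout=timeout).text.lower()
        except requests.RequestException:
            continue  # skip pages that error out or time out
        checked += 1
        if brand.lower() in html:
            mentions += 1
    return mentions / checked if checked else 0.0

# Placeholder URLs; in practice, export the top 20 ranking pages for your keyword
top_urls = [
    "https://example.com/best-form-builders",
    "https://example.org/form-builder-roundup",
]
print(f"{brand_mention_share(top_urls, 'HubSpot'):.0%} of ranking pages mention the brand")
```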

Again, this is especially useful for those making their money with search, because you can actually move the number. Not mentioned on many of the sites that rank? Start partnering up, producing content, and finding a damn way to appear on them! It’s where (potential) customers are looking to find solutions just like yours.

Even if you’re not using SEO to acquire customers, though, it’s a good temperature check for how important publishers think your brand is for a product category.

For example, here’s how often “casper” appears in the top 20 for “best mattresses”.

This data (like a lot of data) is even richer used comparatively. If you know how often your competitors are mentioned, it puts a good benchmark on the line for you to aim for.

Review Websites

Review websites like G2 don’t only measure awareness. They also measure sentiment and show feature comparisons.

So they’re a bit more comprehensive, but they can still give a great indicator of where you stand in the market.

What I like about these sites: they’re 3rd party entities and they include qualitative data like the sentiment of your reviews. They also compare you to competitors, so you don’t just get an isolated number that you don’t know what to do with.

The big problem with most of these sites, though, is that most either operate on a CPC or an affiliate model, so they’re uncomfortably similar to what bullshit extortion sites like Yelp are to small businesses.

The big exception is G2, which is an amazing resource for software buyers and businesses alike. If I were you, I’d keep a close eye on my G2 ratings.


You seem skeptical, Alex. Is there anything valuable about brand awareness?

It all depends on our definition of the term. My beef is that we aren’t defining what we mean when we say brand awareness, so we’ve got a veritable tornado of different metrics that all serve only to obfuscate the customer journey, rather than to illuminate it.

I’ll repeat what I said above. Brand awareness metrics should adhere to two principles:

  1. They should sample only those in your target market; the narrower it is defined, the better.
  2. They should be actionable. Trivia is fun, but it has no place in business.

An addendum is that a great metric has context, both within the market and over a time-series. You should be able to stack your brand awareness against other competitors and you should be able to see if you are gaining or losing over time.

Also, maybe we shouldn’t expect metrics to solve every aspect of business decision making for us.

You’ve Just Gotta Believe!

In fact, I think one of the major problems in our data-driven age is when we try to apply science to problems of art.

In other words, if we can’t measure something, trying to do so is mostly wasted effort and storytelling to make ourselves feel good about being “data-driven.” It reminds me of little kids wearing suits and playing house – adorable but a naive facsimile of the real world.

Not all marketing can or should be driven or supported by data.

What can be, should be (particularly experiments and actions that have fast enough feedback cycles and predictive validity). In areas of uncertainty, however, we should be comfortable enough to take some risks and capture some of that elusive optionality.

There’s also always going to be an intangible aspect as to “why” people actually buy from you.

It’s the emotional, the unconscious, the hidden. Whether that’s driven by your company mission, the customer experience and service people receive, or the premium and luxurious look of your brand logo, it falls under the bucket of the emotional for me.

At HubSpot, we often talk about “winning hearts and winning minds,” where winning minds is the logical and the quantitative, the stuff we can attribute and track.

Winning hearts, then, constitutes things like being thought leaders, pushing interesting ideas into the ether, and inspiring people. You could bucket this into “brand awareness” if you’d like, but I think that term is overly myopic and doesn’t describe the depth and talent that goes into winning hearts.

So really, just eat the humble pie and realize you’ll never be able to attribute every marketing touchpoint to an end sale, and be okay with that.

Clearly, brand messaging and awareness-level campaigns have an impact; let’s just not play house and pretend that a failed webinar was actually not a failure because it had a lot of impressions or whatever post-hoc justification we use.

Structurally, I like to frame it as an 80/20 rule, which I’ve borrowed from Mayur Gupta:

“Do your growth efforts and performance spend benefit from a strong brand (efficiency and/or effectiveness or organic growth)? Are you able to measure and correlate?

Think about the 80–20 rule when it comes to budget distribution — if you can spend 80% of your marketing dollars on everything that is measurable and can be optimized to get to the “OUTCOMEs”, you can spend 20% however you want. Because 100% of marketing will NEVER be measurable (there is no need).”

Also, don’t do brand marketing if you’re a startup.

Final thoughts on brand awareness

“Brand awareness” is a term used loosely and blithely to describe top of the funnel marketing activities, but in reality, many of the methods we use for tracking it actually measure distinct things entirely.

Campaigns without directly attributable conversions can be impactful. That’s why smart companies and growth teams bucket their actions into a portfolio like Mayur Gupta suggests – 80% trackable spend, 20% however you want. 100% of marketing will never be measurable.

Yes, you have to be aware of a product in order to try it (which is somewhat of a tautology), but brand awareness may also be an emergent property of doing a bunch of other things really well.

In the words of Bob Hoffman, “Well, I’m afraid I have a very old guy opinion. You want customers raving about your brand? Sell them a good fucking product.”

[1] The “SEO-ification” Test

First off, when I started researching the topic of brand awareness to see what others think of it, it immediately struck me that every search result was carefully formulated to rank for the term “brand awareness.” In search results like this, you’ll notice a few common themes:

  1. First, almost all of the content is relatively similar. Sure, titles are a bit different, and maybe one is longer than the other. But for the most part, reading 6 articles in one of these search results doesn’t give you 6x the value of reading one; it doesn’t even give you 20% more value. It’s basically like reading the same one over and over again.
  2. Second, all the content is kind of…vague. I can’t find the word for how I want to describe this content; the closest thing I can do is borrow Benji Hyam’s “mirage content” concept. It looks like a duck, quacks like a duck, but for some reason, it’s just not a duck. There’s no substance, and you can tell the author is just rehashing others’ information.

Heavily SEO-ified search results don’t mean that the term itself isn’t to be trusted. For example, “conversion optimization” has been super SEO-ified, but obviously, conversion optimization (the real definition) is a legitimate practice. It’s just that the stew has been poisoned by know-nothing opportunists (i.e., lazy marketers).

Similarly, brand awareness could be a tangible and important concept. But with search results like these, it’s wildly difficult to figure out what exactly that concept is.

[2] Who Do We Deify?

Another heuristic I’ve come up with when researching marketing topics is what the example landscape looks like. In other words, in a random blog post, which companies and case studies are chosen, how diverse are the examples, and what does that mean for the generalizability of a topic?

In “brand awareness,” everyone seems to talk about Coca-Cola, Apple, and P&G products.

This would suggest, then, that “brand awareness” as a concept either mostly relates to or mostly benefits large consumer brands. There are few, if any, case studies on “brand awareness” with regard to quiet but successful B2B software brands.

[3] Fame vs Fortune

Matthew Fenton wrote a great essay on brand awareness (I’m mostly saying that because I agree with all of it, and it is very cynical about the objective). I love this quote:

“You know who has great awareness? Martin Shkreli. So too does Travis Kalanick. They’ve both found themselves in the headlines throughout the year — but does that mean you’re going to be doing business with them?

As a consumer, you’re aware of hundreds of brands that you have no opinion about. Or just don’t like. Or bought once and would never buy again.

Brand awareness isn’t that hard to achieve. You can get it with a big budget, shock value or simple longevity. But if you believe the adage that people buy from those they know, like and trust, then awareness only gets you the “know.” “Like” and “trust” are other things entirely.”

In this sense, “brand awareness” is noise that actually clouds the signal of what actually matters – customers and how much they like you and your business.

If you want to be well-known, maybe your brand should start a TikTok account or something.

The post Brand Awareness is Basically a Meaningless Metric. Here’s Why. appeared first on Alex Birkett.

The Economics of Content Creation (or Why Most Roundup Posts Are Awful)

If you’re in the content marketing space, you’ve probably noticed that a large amount of content is now expert roundup posts, listicles, and shallow case studies. Why is that?

The short answer: this type of content is pretty darn cheap to produce.

Don’t get me wrong, I’ve produced and directed a fair amount of these types of posts.

But this article will explain the cost of content production, and why that matters if you’re a marketing manager (or just a simple content reader).

The Cost of Content: A Sliding Scale

There’s a cost of creating content, and whether that cost is low or high has direct implications on how well you can trust the content, how competitive it is to create that type of content, and how easily you can produce that content (particularly at scale) if you’re a business.

What is the “cost” of content?

When I say content has a “cost,” I don’t necessarily mean that in a direct sense.

Of course, a blog post does cost something, and that cost is particularly apparent when you either a) hire a content marketing manager or b) contract with freelance writers.

In the former, you understand content “costs” based on the resource and time allocation of your employee. A content marketing manager, even a great one, only has so much time in a day to spend writing and editing content.

An in-depth research report “costs” more than a listicle in the sense that it takes more time to create, which leaves an opportunity cost wherein you could be publishing more or different content.

If you’ve ever hired freelancers, you know the qualitative difference between hiring a content farm and hiring a top-notch writer. The secret is that, from an ROI perspective, both ends of the spectrum can and do work.

I also want to point out that there is another, indirect cost of content: a piece of content could be “costly” in the sense that it involved years of experience to come up with an idea or piece of knowledge (look at the Animalz blog or Paul Graham’s essays – they don’t happen overnight).

As Whitney Wolfe Herd said on Tim Ferriss’ podcast, “The most expensive currency in the world is experience.”

It could also incur a cost if there is a substantial risk to writing it.

For instance, if a well-respected conversion rate optimization expert publishes an article on their CRO strategy, the costs, in the event that it fails to land or if potential clients poke holes in the essay, are much larger than if a random writer publishes a CRO strategy article (they have no tree to fall from).

Cost can be in the form of time to production (including years of experience), reputational risk, or in actual monetary value of materials and resources to create something like video or a research study.

Another very important point: the quality or value of the content is exogenous to the cost of creating it. That is, while cost is correlated with quality (pricy content tends to be better), there’s nothing inherently better about content because it is costly.

I want to drive that point home here: “cheap” sounds like it means “bad,” but it really just means there’s a low cost of production and a low barrier to entry. It’s not necessarily bad; it’s just more likely to be bad because of that low cost and lower barrier to entry (which I’ll go over in a bit). In aggregate and categorically, quality and cost do correlate.

The Business Case for Cheap Content is Strong

In business, you should try to maximize the delta between the cost of an action and the reward that springs from it.

If you can produce a cheap piece of content that gets the same or better results than an expensive one, why would you waste the resources on an expensive one? That’d be bad business.

Here’s an example: product listicles do super well for us at HubSpot. Not only do they bring in a ton of traffic, but they are conversion-generating machines as well. Compared to other content, they’re easier to produce as well.


I’d be remiss if I didn’t mention that the “cost” of this content is partially hidden. HubSpot has been investing in content, and thus their domain authority, for years, so what we see now is really a cumulative return based on all of the years of previous effort.

…which is actually partially the point I want to make here. Cheap content won’t work if you’re new. Cheap content has a pivotal business purpose if and when you have the ability to rank it.

Huh?

When you’re first launching a website and up until you have the ranking power of the biggest competitors, you have to compete on quality or differentiation. There’s no other way to break through the noise.

Essentially, in the beginning, you need to “do things that don’t scale,” which is the sweat equity you put into the later ability to rank cheap content (templatized, UGC, etc.). A blog post on Atrium.co outlined this perfectly.

While there’s no set point or clear milestone for when you can start to rank cheaper content, there does seem to be a “takeoff point,” an inflection where it gets easier.

From working at a few content-heavy companies and with many clients, I’ve learned that detecting that point is a mixture of two things:

  • Analyzing the competition and making assumptions on feasibility (and building a growth model based on those assumptions)
  • Publishing some content to get a gut feel for where you typically land on Google.

The former helps you architect the strategy before you start, and the latter is crucial for updating your priors.

I’ve found that for analyzing competitors and assessing feasibility, this advice from Ian Howells in a GrowthHackers AMA is pretty great:

“I use a blend of search volume, intent, and “attainable position”. With any given project, I look to find the leader in the space. So if I was going to work on a site about outdoor/camping products, I’d likely toss REI into aHrefs and spit out all their keywords over 500 searches per month.

I’d then make an assumption about how close I could get to REI’s rankings – say (for sake of argument) I was assuming I can get to 4 spots lower than REI. I’ll just run a new column in excel/google sheets adding 4 to all of their rankings.

That new “attainable position” plus a CTR curve gets me a ballpark on the amount of traffic I could realistically hope to get for the site.

A pivot table to roll these up by page then gives me a map of what pages I need and what the potential traffic is per page. I’ll start with the biggest opp pages and just work my way down.”

You can find average CTR data on SERP positions here and make your own assumptions to build a model. It should be a pretty quick exercise, because when rubber meets the road, it’s rare that you’re very accurate.
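As a rough sketch of the arithmetic Howells describes (normally you’d do this in a spreadsheet with a pivot table), here it is in a few lines of Python. The CTR curve, the rank offset, and the sample keywords are all placeholder assumptions:

```python
# A rough CTR-by-position curve; substitute whichever published CTR study you trust
CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

def estimated_monthly_traffic(keywords, rank_offset=4):
    """keywords: list of (keyword, monthly_volume, leader_rank) tuples.
    Assumes you can rank `rank_offset` spots below the category leader."""
    total = 0.0
    for _keyword, volume, leader_rank in keywords:
        attainable_position = leader_rank + rank_offset
        total += volume * CTR_BY_POSITION.get(attainable_position, 0.01)  # ~1% beyond the top 10
    return total

# Made-up keywords and volumes for the camping example
sample = [("camping tents", 22_000, 1), ("best sleeping bags", 9_900, 2)]
print(f"~{estimated_monthly_traffic(sample):,.0f} monthly visits at the assumed attainable positions")
```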

However, when you do start to produce content, you’ll develop a fingerspitzengefühl for how much effort and resources you need to invest to outrank your competitors. It’s likely more than you had planned on upfront.

That’s why the key to content strategy, at least in the early stages, is investing part of your effort in building out link assets, top-of-funnel content, and highly socially shareable content. That’s the stuff that will build up your overall website presence and authority so that later on down the line you can write and rank cheap content (as well as bottom-of-funnel content and product pages).

The Role of TOFU

To recap, cheap content is economically smart for businesses to create, but it only works after investing time and effort into decidedly expensive content. Start with noteworthy, remarkable content in order to break through the noise, and then you can experiment with templatized content, UGC, etc.

This is why content marketing is said to be a “flywheel.” You put some work in, that force remains constant, and the more effort you put in, the more returns you generate consistently with time.


Beware the Barrier to Entry (Why Cheap Content May Not Be a Long Play)

The cheaper the content, the more competitive it will be to break through the noise. The barrier to entry is wildly low to write a roundup post (you don’t even need to write anything, really). Therefore, more people will enter that space, and you’ll have a harder time standing out.

Not only that, but you can be easily knocked off your pedestal by an upstart who’s willing to invest more in creating better content.

Easy example: everyone in the world can put together a loosely curated roundup post, but very few people can conduct original user experience research studies.

To take it to an extreme: if you and a team worked together on a project for two years, no one else but you and the team could tell the same story. Your cost of producing a piece of content in that regard is high, because you put in two years of work to get to the point of writing it.

That’s not to say that cheaper content formats are inherently low quality. A well-curated roundup post with true experts can be massively valuable (though I’ve rarely seen them, I have to say).

Similarly, expensive content can be poorly produced as well. Just because you worked on a project for two years doesn’t mean you’ll have anything valuable to say about the process (or especially that you’ll give an objective write-up of what you did).

That’s a sunk cost, and nobody else cares how much time you spent writing a blog post.

However, if done right, an expensive content strategy can be a powerful moat. The more expensive the content is to create, the harder it is for people to replicate what you’ve done. No one in the world can write the way Tim Urban does. You can’t compete.

This is often the case with powerful thought leadership and original research. Many will try to replicate it, but it’s very near impossible to beat the initiator.

Don’t hold onto expensive content as a silver-bullet method to get early results, though. In the early days, we produced lots of UX research at CXL Institute. The result? Those studies performed largely the same as, if not marginally better than, a normal blog post.

I’ve touched very little upon business results and outputs of content, because that’s not the point I wanted to make here. But to point out the obvious: if you can get outsized rewards for producing the cheapest content possible, that’s obviously the best business decision you can make. It’s likely you won’t be able to do that, at least sustainably and at the beginning of your journey, but this is a business decision after all.

Now that we’ve got the business-side of this whole “content economics” thing out of the way, let’s dive into the fun and somewhat rant-y stuff from the reader side of the equation.

Cheap Talk and Teardowns: The Shortcomings of Cheap Content

A few months ago, I randomly stumbled upon a blog post that was critiquing a campaign I had launched. It was all (mostly) praise. Still, it felt weird.

Some of the takeaways were questionable, but overall I felt flattered. It was nice being recognized.

I mentioned this critique to a friend, also in marketing, and he said, “dude, that type of content is super cheap to create. It’s easy to write those and get some quick social shares.”

That thought led me to think about the value, accuracy, truthfulness, and benefits of writing (and more so, reading) critiques written by people looking in from the outside.

This seems especially pertinent now, as the new trend in content seems to be writing case studies on successful companies and how they got there (or what Ryan Farley calls “fake case studies.”)

This thought is really what led me to the idea of “the cost of content production.” It started out as a cynical brush-off toward bad content creators and led to a somewhat pragmatic angle on content marketing strategy.

However, I’d feel bad if I left out my thoughts on why cheap content may actually externalize its cost to the reader. In other words, cheap content is probably contributing to a worse world (or at least a noisier world) and many of us are complicit.

All Talk, No Walk (and Why You Should Value $ Over Opinions)

I’ve done a few landing page teardowns in my life.

There’s a nervousness to doing these, for me. I always wonder, if when presented the landing page or website, will I have anything useful to say? Without seeing any of the site’s data (or even if I could see their data), what gives me the right to critique their CTA color or copy?

Still, I’ve seen lots of landing pages, run a ton of A/B tests, read through hundreds of UX research papers and articles, and have a solid understanding (for a layperson at least) of behavioral science. This, at least, gives me some sort of justification for the remarks I make.

Consider this, though, before you trust my experience-based wisdom:

I may spend a few seconds talking shit about a website for not having a phone number on their homepage. Or having a CTA below the fold. Or having a vague headline. Or whatever best practice you want to talk about.

But in this context, I am not the customer, and I do not have my credit card out. My opinion is almost (almost!) worthless.

The Halo Effect and “Why X Company Grew”

Another thought experiment.

This one explains why even an expert, someone with years of startup experience, for example, could mess up a case study analysis. It’s called the halo effect.

It’s much easier to talk about “How Netflix Grew,” or “Why Casper’s Marketing Works” when you’re analyzing a winner. Everything looks great under that light! But what happened to all the losers who did the same things people attribute as success factors to the winners?

If Casper were a failed startup, would one conclude it’s because “they wasted time on TOFU content”?

Let’s pretend we had a CRO or UX expert who had the exact same knowledge and skill level as any other top CRO or UX expert, but for some reason they had never heard of Amazon.

If you asked them to tear down Amazon’s website experience, given that absence of knowledge, how different would their critique look from that of a CRO or UX expert who had heard of the gigantically successful company?

If our opinions and teardowns are valuable, then one should expect zero difference between the critiques. Thank god we have experimentation.

The view of the outside critic is limited

That’s why I have trouble respecting case studies written on “How [X Super Successful Company] Grew.” The halo effect is almost always going to ruin your hindsight view of what a company did right or wrong (how many companies did the same things as Airbnb but failed? We’ll probably never know).

That’s not to say you can’t learn things from breakdowns, case studies, etc. You definitely can!

In the context of landing page teardowns, people develop a type of fingerspitzengefühl for these things, and you can also learn a lot of underlying psychology and UX principles from them. The author of a case study can interview the company in question. The author can simply pour tons of hours into truly understanding a given aspect and pulling insights from it. Running 1,000 experiments gives you the credentials to tell someone how to run their experiment.

But as the reader, you need to know whether that person has run 1000 (or 10, or 1) experiments, and you need to know just what level of knowledge and research went into that case study or breakdown.

Again, I’m guilty of a lot of this stuff.

I’ve given quotes on things I barely know anything about.

I’ve written listicles, roundups, and other forms of cheap content. I like backlinks, what can I say?

But it’s an externalized cost, because the reader has to spend time and effort wondering, “can I trust the advice of this commentator?” while I get the backlink whether or not I know what I’m talking about. It’s the world we live in. (further reading: There Ain’t No Such Thing As Free Lunch in Content Marketing)

I’ve spent maybe $20,000 in my life on Facebook ads, hardly a huge expert

Here’s the rub: if you’re not skeptical (maybe even cynical) it’s hard to know who walks the walk and who talks the talk. The benefit to the content creator is the same whether they know what they’re talking about or not (more on that in a bit). It’s up to the reader to discern “fake news” from real value, which is a heavy burden.

This is actually a massive benefit to the content creator. In reality, it’s why the roundup post is so popular. There’s no risk; it’s only upside.

How to Write and Judge a Case Study

Bad case studies, in particular, can be dangerous to readers in ways that listicles aren’t. They’re often viewed as authoritative and sources of truth, when in actuality they can be surprisingly speculative.

I highly recommend reading Ryan Farley’s post on this. He nails it. Here’s a quote:

“So these case studies are cheap and intellectually dishonest.  But what makes them harmful?

They are harmful because they can mislead people, no matter how good their intentions.

I’ve been around the block long enough to recognize cheap content when I see it.  But four years ago, I didn’t recognize this.

I took this crap seriously.

When you produce this stuff, there’s a chance that someone actually tries to apply the ‘lessons’ you are teaching.

When answers are tough to come by, it’s easy to want an easy answer or to be able to simply adapt what another has found success with.”

When I say that the cost of cheap content can be externalized to readers that’s what I mean. The author/website ranks, it costs little in terms of time or reputational cost, but the reader may or may not waste days, months, or years implementing completely fallacious advice.


This BS case studies problem was something I was hyper aware of while working at CXL.

Bad case studies in the CRO space were (and still are) a plague, and they contributed to a poor understanding of what CRO was actually about. Therefore, we were combative about bad case studies spotted in the wild, but also meticulous about how we published our own.

So we put forth some maxims: if you’re going to publish a case study, you should publish the losing tests as well as the winning tests, the full data set (obviously keeping in mind client confidentiality), and your justification for doing what you did.

Here are the exact words Peep Laja, founder of CXL, wrote regarding A/B testing case studies, and what could make them valuable:

  • Tell me how you identified the problem you’re addressing
  • What kind of supporting data did you have / collect?
  • How did you pull the insights out of the data you had?
  • Show me how you came up with all the variations to test against Control, and what the thinking was behind each one
  • What went on behind the scenes to get all of them implemented?

We were largely railing against sites like WhichTestWon (RIP), where they provide no context, only gamification and advice as insightful as “blue is better than green buttons.” But it applies more so when you consider the larger space of “case studies,” especially those written by people who don’t even work at the company they’re analyzing (!)

Professional writers have trouble filling books with this topic, so how can a blog post do it justice?

Here’s what I look for when reading these posts:

  • Did the author work on the project?
  • Do they have something to gain by writing the case study?
    • What is it?
  • Do they have something to lose by giving bad advice or being wrong in their critique?
    • What do they have to lose?
  • Is all the information present? Is there anything fishy with the findings?

Essentially, we can look for “skin in the game.”

If the owner of an SEO agency, someone with 10 years experience and many clients, writes a case study on SEO, they can still be wrong. They have a pretty good incentive to make their work look better than it is, but that’s an easy bias to spot.

But they have a) something to gain (recognition) and b) something to lose. Basically, if they give transparently bad advice or information, their reputation is harmed and they can lose clients or industry respect.

Conversely, a record of amazing content positions you in people’s minds as a trustworthy writer, consultant, voice, etc.

Simo’s content is some of the best on the internet and he’s known for it

This is truer in some industries than others, which is why you’ll rarely get away with being a grifter in the analytics space, but you may be able to as a social media influencer (sorry if you’re a social media influencer, but a quick look at the conversations in those two industries makes the difference obvious).

This isn’t the case with a writer who is looking for social shares and backlinks when they write a breakdown on “How Trello Grew.” It’s all upside for them.

Other than Ryan Farley’s great article, if you want some help with identifying good vs bad case studies, specific to the CRO space, I suggest reading Justin Rondeau’s excellent piece on how to read a case study.

Be Wary of Content with Asymmetrical Benefits

People who make predictions for a living don’t suffer the same loss as those who follow the predictions they’ve made.

People who give advice for a living don’t suffer the same loss as those who follow the advice they’ve given.

Caveat emptor, as they say.

Here we have a rule, from Nassim Taleb’s ‘Skin in the Game’:

“Always do more than you talk. And precede talk with action. For it will always remain that action without talk supersedes talk without action.”

When consuming any content, think about the asymmetric risks vs. rewards involved with the person who created the content. If there is little downside for the creator, I’m not saying it’s certainly BS, but be wary.

Imagine a roundup post with three people on it: Peep Laja, me, and a writer who has never run an A/B test.

  • Peep has many years experience in CRO and has run thousands of experiments.
  • I have a few years experience and have run tops 80-100 experiments.
  • Then the writer has never run a test and can’t say they’ve ever done true “CRO.” In fact, they’ve barely heard about CRO, save for a blog post written by Neil Patel a few years ago.

So it’s fair to say that it “cost” Peep more to give the advice in the roundup, simply because he had to invest more time and effort into gaining the knowledge and experience. Not only that, Peep has a reputational cost on the line, as he’s appearing in the same post as a nitwit with no knowledge (not me, the other person!). The nitwit only has something to gain by being featured alongside those around him.

Yet we all get the same benefit: recognition and a backlink.

The audience gets a variable return: Peep’s advice is expensive, mine is less expensive, and the writer should have to pay you to give you advice. In fact, the cost of scrutiny is placed fully on the audience. This (in the broader world, not just in marketing) is part of the reason it sucks so much to read news: it’s so hard to parse out what is bullshit from what is true now.

Things that contribute to asymmetric benefit (and “penalty-free” content creation):

  • HARO
  • Roundup Posts
  • Scaled out keyword-based content (think Livestrong or other 400 word post content farms). These do have the negative that Google’s algorithm seems to weed them out with time.
  • Prediction posts (what will marketing look like in 2019?)
  • Baseless, opinionated critiques and teardowns

What’s there to say? You have to play the game if you want to benefit from content and SEO, so there’s no way to disincentivize bad authors. Like I said, I’ve given quotes for things I don’t know very well.

Want a cynical end to this story? There’s probably no real way to solve the problem of opportunism and asymmetric risk in content marketing. Why would we take advantage of a backlink, exposure, traffic, or whatever, if we’re given the chance?

Instead, the solution to the discerning reader here seems to be the frustrating advice: caveat emptor. Read things with skepticism.

Conclusion

It’s hard to know what to trust online. There’s the new “fake news” thing, but there’s also a phenomenon of content that is “real,” whatever that means, but without value for the reader and without penalty for the writer. Mirage content.

You can never fully remove this asymmetry, as even knowledgeable authors can give bad advice and vice versa.

The solution seems to be a simple but difficult one: read with skepticism, and call out true charlatanism where it is evident. Additionally, read intellectually honest and rigorous authors more regularly and promote them to the world.

Further, I’ve found that opinionated pieces tend to be pretty valueless, in aggregate. How-to pieces, walkthroughs, and data-driven content seem to be pretty important, especially when you’re trying to solve a specific problem.

I’ve found that if you put the blinders on to marketing ideology (other than the fundamentals), and just put your head down and do the work, share knowledge with others on your team and in your industry (informally and privately, even better, as there’s less incentive to posture – the best info I’ve ever learned is at the after party of a conference or meetup), things work out pretty well. You can safely ignore most noise on the internet, anyway.

Content producers: find your edge, weigh the costs of production against the expected return, and keep in mind barriers to entry (the lower they are, the less likely it is you’ll truly lift above the fray; unless you’re already way above the fray, in which case publish away) as well as long-term moats.

Finally, outside of business, all good art is written with blood. You can’t half-ass masterpieces.


The post The Economics of Content Creation (or Why Most Roundup Posts Are Awful) appeared first on Alex Birkett.

]]>
Content Marketing Strategy: Everything You Need to Know to Build a Growth Machine https://www.alexbirkett.com/content-marketing-strategy/ Tue, 05 Feb 2019 22:38:34 +0000 https://www.alexbirkett.com/?p=653 Content marketing strategy is something few companies do well. This is something I’ve focused on for years, mostly because all the companies I’ve worked for, from super early stage startups to HubSpot where I work now, have been largely supported by content marketing (in one way or another). However, each company’s content marketing strategy was ... Read more

The post Content Marketing Strategy: Everything You Need to Know to Build a Growth Machine appeared first on Alex Birkett.

]]>
Content marketing strategy is something few companies do well.

This is something I’ve focused on for years, mostly because all the companies I’ve worked for, from super early stage startups to HubSpot where I work now, have been largely supported by content marketing (in one way or another).

However, each company’s content marketing strategy was different, though all of them were successful.

Most blog posts on content marketing strategy focus on what are really tactical considerations – stuff like how many words your blog posts should be, what content format you should produce, and how to share on social to make shit go viral.

Even the good advice on content marketing strategy is usually too narrow – it comes only from the direction of the company giving it. If something works for Microsoft, that doesn’t mean it works for a startup, and vice versa.

If I were asked to come in and launch or consult on a content marketing strategy at a new company, this is how I’d approach it (this guide is also based on a training I give and is covered extensively in my content marketing strategy course).

Introduction to Content Marketing Strategy: What We’ll Cover

If you’re new to content marketing, read the whole thing. If you care about a particular section, jump around.

Sections:

  • Preparing for Battle (the Building Blocks of Content Strategy)
  • The Economics of Content
  • Content Planning: Creating a Roadmap and Course of Action
  • Content Creation: Production That Gets Results
  • Measuring Results
  • Content Optimization
  • Content Auditing and Maintenance
  • Looking Forward & Balancing Your Portfolio
  • Conclusion

Yep, it’s gonna be a big guide. Let’s do this.

Preparing for Battle (the Building Blocks of Content Strategy)

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.” – Sun Tzu

Knowing yourself consists of knowing your strengths and weaknesses, and it also consists of knowing who your customer is and how you will reach them (and win their love and loyalty).

Knowing your enemy means knowing the competitive landscape as well as the content marketing space as a whole, and understanding how you can operate on an edge that you can win.

We can also say: know your landscape. This consists of knowing not only your customer and your competition, but also the influencers, blogs, publications, and platforms you can use to distribute and amplify your content, and how you can use these tools to reach your potential customers.

You do all this before you ever put pen to paper, by the way; but you also refine this knowledge over time and with new learnings. These are the formative steps to take, before you ever begin your content marketing efforts, to come up with a master content marketing plan.

Buyer Personas (a Quick and Dirty Guide)

The first thing to think about when embarking on a content marketing program is who you’re hoping to reach, your target audience. Who is your buyer, how do they prefer to learn about products, and what type of language do they use to describe their problems and hopes?

For these questions, buyer personas can be invaluable. Creating buyer personas can help you answer questions like whether or not you’ll be able to reach your audience via search engines or social media (and which social media networks), which types of content they like to consume (are case studies even effective? How long should a given blog post be? Should we look into podcasts?), what their pain points tend to be, and in general, how best to formulate your marketing goals.

These aren’t just content marketing problems, of course. Personas are also helpful for product strategy and go-to-market strategy, among other things.

I’ll briefly cover the dos and don’ts of buyer personas here, but if you want a really robust method of doing personas, read this guide.

First, what is a persona? This is my favorite definition:

“Personas are fictional representations and generalizations of a cluster of your target users who exhibit similar attitudes, goals, and behaviors in relation to your product. They’re human-like snapshots of relevant and meaningful commonalities in your customer groups and are based on user research.”

I’ve bolded several parts, because I think ignoring them is largely why most personas suck and why they fail.

Fictional representations: Your persona isn’t a real person, so it shouldn’t be a successful customer or account you choose to profile. It’s a generalization that is used to help craft messaging, campaigns, and business decisions, so it needs to be somewhat loose and archetypical.

Cluster of your target users: Your persona won’t be a one-to-one match for each individual customer. Think about it as a mean – variance means that each individual data point (customer) will still probably vary from the average (your persona), but defining the average (the center of the cluster) will still help you reach each individual data point in the cluster.

In relation to your product: Your persona’s characteristics should map to things related to your product and the buying process that a customer goes through to reach your product and use it. The dimensions you define shouldn’t be stupid and irrelevant things like their eye color, gender, or if they like to ski (unless you sell skis).

Relevant and meaningful: I just wanted to repeat that your persona shouldn’t be a cheesy representation of a cartoon character with a cute name and hobbies and interests that aren’t relevant to what you’re doing with your business. Leave the superfluous character development to your novel writing side hustle.

Based on user research: Very important! Don’t make shit up! Do your research. Apply evidence-based personas, and you’ll make better decisions. Though you’ll never reach the perfect representation of your customer, you do want what you define to be accurate.

So a good persona might look like this:


A bad persona might look like this:

I’m not trilingual (yet), and it probably wouldn’t matter if I were (unless you’re Duolingo or teaching people how to learn Spanish). 3.23 blog posts is too specific to be useful, and it doesn’t matter that I’m male, 27 years old, or that I enjoy extreme sports, if what you’re selling is a SaaS tool to better manage your client proposals. And what an atrociously distracting stock photo 🙂

A good heuristic: if a model helps you make better decisions, it’s a good model. There’s no perfect model, but models should be consistently directionally accurate and useful. If your persona helps you reach your audience with content, it’s worth creating.

Another note: your personas aren’t static. Audiences change, and so do product strategies. Additionally, you’ll learn more about your customers as you progress, so take a look at your personas every three to six months and update them.

Content Audiences (and Why It Isn’t Always Your Core Customer/User)

Defining your customer is one thing, and it’s important. But it’s also important to note that, in content marketing strategy, your audience isn’t always your customers…

Huh?

See, sometimes your primary audience (who you produce content for) is different than the end audience (your customer). If SEO is your main model, then you’ll need backlinks to make it work, and very few of your customers will have the ability to give you authoritative backlinks.

Even if you’re drumming up awareness and educating an industry on a product they’ve never heard of (and didn’t know how to search for), you’re often speaking first to the influencers in the space, who then in turn speak to your customers.

So the question then becomes: who are your industry’s influencers? Sometimes, at least in SEO, we call these people the Linkerati (those in power with the ability to bestow powerful backlinks).

Your best bet is to do three things:

  1. Know the landscape (understand who is powerful).
  2. Make friends with the influencers.
  3. Craft content so it appeals to their tastes.

If everyone did just these three steps, you’d hear far fewer people complain about poor results from their content marketing.

Here are five methods for mapping out your influencer landscape/Linkerati:

  1. Ask your customers who they read/trust when you do persona research
  2. Find top influencers using Buzzsumo (and other methods)
  3. Find top blogs and publications using Ahrefs (and other methods)
  4. Find “underground” channels where influencers congregate
  5. Find the top conferences and meetups in your space

In my opinion, you should do all of these things. If you want content marketing to really work well, networking and relationships shouldn’t be an afterthought, but a core part of how you operate. As Robert Greene suggests, “Do Not Build Fortresses To Protect Yourself (Isolation Is Dangerous).”

1. Ask your customers who they read/trust when you do persona research

If you talk to your customers, ask them what they read.

I build this into my persona research. When I send out customer surveys, I include a few questions like:

  • How do you learn new skills/info? (scale, followed by a bunch of factors like “blogs” and “conferences”)
  • What publications do you read? (open ended)

You’ll sometimes find that different personas have different tastes in blogs and content. This is an important thing to note, as it can define how you form co-marketing partnerships and PR launches.

2. Find top influencers

Apart from publications, you want to identify individuals who command a ton of influence and reach. Sometimes you’ll know these people just by knowing your space. For example, if you’re in the CRO or Analytics spaces, you know that Peep Laja and Avinash Kaushik are influential names.

But if you want to formalize the research process, BuzzSumo is a great tool for this:

You can also look for “top influencers to follow on Twitter” lists, but I wouldn’t stop at these, since they tend to be circle jerks that list the same people on every list. That means the people on the list are constantly getting hounded with requests, so they’ll be less likely to work with you (and probably not as effective anyway).

3. Find top blogs and publications

Content marketing almost always relies on SEO as a distribution avenue, so you’ll want to find the top blogs in your space. First and foremost, look at Ahrefs to find similar domains to your own and your top competitors:

I like to find which sites link to competitors’ content as well:

Second, you can use a tool like Growth Bot to find organic competitors to different blogs:

Third, you can scrape those awful software comparison aggregator sites like Capterra to find others in your immediate and secondary product categories.

Finally, you probably know a lot of the top blogs or you can search for them with queries like:

“Top [keyword] blogs in [year]”

Put all of these on a spreadsheet with their corresponding domain authority. A fast way to find domain authority is with a bulk domain analyzer such as the one Ahrefs offers.


4. Find “underground” channels where influencers congregate

The people who matter usually talk to each other, and not always on public forums. Sometimes there are Slack groups, and sometimes there are Facebook or LinkedIn groups. Sometimes it happens in person. I can’t speak to the industry you operate in, but you need to get to know it well enough to know where these secret circles are and how you can get invited in. This is probably the most important piece.

5. Find the top conferences and meetups in your space

Gotta get outside and actually meet people face to face sometimes. I find this is my greatest leverage point, as most people just cold email behind a computer, but if you can share a beer with people you can form a true bond. Plus, the “off the record” conversations you have at conferences will far outweigh the value of any content that is written for a public audience.

SWOT and Strategy Audits

What works for one company doesn’t work for another. Backlinko, HubSpot, CXL, WaitButWhy, BuzzFeed, my personal blog – all different content strategies, all successful in their own right.

That means you should never copy someone’s strategy just because it works for them (or God forbid because you saw a representative give a talk at a conference).

In almost all cases, the best-case scenario from copying someone’s strategic playbook is that you’ll hit some local maximum that lies somewhere near mediocrity.

HubSpot can publish a handful of articles on long tail search keywords every day and they’ll beat you all day at consistent content execution.

Brian Dean is uniquely suited to publishing infrequent, super comprehensive pieces specifically on the topic of SEO.

CXL didn’t invent research-based long form content, but we executed it to near perfection on a consistent basis. It’d be hard to outperform CXL by mimicking that form of content production (at least if you’re also competing for conversion optimization search terms).

So you need to find your edge and exploit it.

How do you do that?

SWOT Analysis

If you’ve ever taken a business class, your eyes may be glossing over at my mention of SWOT Analysis. Or maybe you’re a nerd like me, and you actually enjoy this process.

Whatever the case, the SWOT Analysis is a helpful thought exercise.

How do you complete one? You create a 2x2 matrix, with the quadrants representing the following:

  • Strengths
  • Weaknesses
  • Opportunities
  • Threats

Strengths and Weaknesses are polar opposites, but they’re both internal facing (what are your specific company’s strengths or weaknesses?). Similarly, Opportunities and Threats are yin and yang, but they are outward facing (what market conditions represent opportunities or threats?).


If we imagine a random SaaS company coming out of Y Combinator, one with a serial founder, we can fill out their hypothetical SWOT:

Estimating Impact and Feasibility

You also need to weigh your ability to rank for given keywords or compete in your niche. I’m diving into this deeper in the “content economics” section, but briefly, in your SWOT audit, you should think about the feasibility of your strategy.

HubSpot’s esteemed Director of Acquisition, Matthew Barby, said the following on GrowthHackers:

“The biggest hurdle of SEO is knowing what is realistic and what is not, and then being able to decide which is the right lever to pull at that moment. For a site like HubSpot.com, we have a TON of backlinks (which gives the site a load of authority). When we create new content around marketing/sales/service it will tend to rank a lot better than smaller sites because we’ve built up this authority over a number of years. That means we get WAY more leverage from ramping up content creation than a brand new (or even smaller) site.

For a new site, I generally try to shift the focus to building authority vs just publishing a bucket load of content. The questions I try to ask is, “how can I build a steady flow of backlinks into the website?” and “how can I grow the number of people searching for my brand name?” instead of “how can I create as much content as possible.” This sounds simple, but there’s a lot that goes into figuring all this out, and it’s where 9/10 misspent cash comes from.”

While most of this article will focus on SEO, and thus keywords, it’s important to note that’s not the only route to content marketing success and it’s not the only way to bring in customers. In some industries, say the SaaS management niche, buyers don’t know what they’re looking for. Or there’s no real word for the term yet, at least not one that is searched frequently.

In spaces like these, there are strategies as well, normally in the form of “thought leadership” style content (also called “movement first” content). Think about HubSpot’s early evangelization of “inbound marketing,” WaitButWhy, or how Paul Graham or Sam Altman write.

Anyway, if you can pull off that style of writing, the one where people simply search your name in Google so they can read your brilliant insights – well, you don’t really need my advice, just keep writing.

The Economics of Content

All content has a cost and an associated return.

Some content, such as roundup posts or listicles, is easy and cheap to produce (and therefore easy to replicate and scale).

Some content, such as original research or thought leadership based on years of experience, is hard and expensive to produce (and thus more difficult to scale, but also more difficult for your competitors to replicate).

In the early stages, it’s likely you’re going to need to work hard and spend more to outcompete the bigger players in your space. Because you have a low domain authority, you’ll need to make up for it in content quality.

Additionally, since you have no audience or built-in brand recognition, you need to break through the ‘noise’ in the blogging space, of which there is a ton. Normally this means overindexing on “awareness” level content in the beginning, with the goal that you can eventually rank your “consideration” and “decision” stage content (the stuff that actually brings in the bacon) more easily.

However, as you begin to produce content regularly and to learn more about SEO, you’ll likely learn what your “hotspot” is – the minimum effective dose required to rank blog posts and convert visitors. Any additional cost above this reduces your ROI and, especially at scale, those marginal costs add up.

Normally, it takes an ungodly amount of effort to rank in the beginning stages (up to a DA of about 50 or 60), and then the curve begins to flatten (though never completely) in the upper echelons of website authority. When your DA is that high, as long as your site architecture and technical SEO best practices are followed, you can rank posts with a slightly lower quality (though never too low).

Your long-term goal in a content marketing program is to drive the cost of content production as low as you can without compromising on brand promises or the return on your content efforts.


At HubSpot, we’re roughly between the “templated content” and “user-generated” content stages. Now, we can produce comparison pages like this one:

We can also very quickly rank “consideration” level listicles, which are comparatively easy to produce and bring in a very high conversion rate for blog posts:

The effectiveness of templatized content only comes with scale and maturity though. For one, you need the domain authority to rank tons of posts without crafting individual promotion or link building campaigns for each (which would rapidly augment the costs of your content marketing program).

Second, you need a lot of production resources and infrastructure to make templatized content work at scale, especially if you’ll be running SEO experiments on them or hoping to do any conversion optimization on them.

Content marketing, like many facets of marketing, is a flywheel. The more you add to it and the more energy you put into it, the faster it spins (and the rewards compound with time). You put in the effort early in order to reap the rewards later.


So in the beginning, it may help to plan out your transactional pages and bottom funnel content. But don’t expect to rank that stuff incredibly quickly without having some solid strategy to build up your authority. Generally, you can do this through a) lots of high traffic/high interest awareness level content on the same topic, b) a shit load of link building, or c) being in an uncompetitive space.

…or some combination of the three. However, if you want to make content marketing work, in the beginning you should expect to invest a lot more time than you’d like to in awareness level content (it’s the leverage that gets you the returns down the line).

How Much Does it Cost to Produce Content That “Wins”?

  • How much effort do you have to put into a piece of content to get it to rank?
  • How many words is it?
  • How long does it take your writer to write?
  • How many hours of promotion, link building, and distribution do you need?

The potential “cost” here is infinite (just look at a WaitButWhy article). There’s usually a point of diminishing returns you want to aim for with production quality and cost. The tough part: it’s really hard to prescriptively say what that is for any given industry.

In B2B SaaS, it used to be that you could win by writing a 2,500-word article with an added infographic, because all the others were only 2,000 words. It’s becoming harder to do that because of increased competition. Additionally, it’s not only about word count, but about content density.

Now, in B2B SaaS, you may actually have to invest in a “topic cluster” that consists of many closely related blog posts if you want to have a chance of ranking any of them, let alone all of them. It might require significant link building as well.

(More on constructing Pillars and Clusters below)

In general, the best way to gauge the content quality necessary to rank is to look at what’s currently ranking for keywords you want to go after. This can be as simple as a Google search (even better if you’ve installed Mozbar to see DA and backlinks):

You can also use Ahrefs’ keyword explorer to view the landscape of a search term:

Finally, after writing a few posts and seeing where they land after publication, you can get an intuitive sense of how strong your site authority is. This gives you a fingerspitzengefühl (a fingertip feel) when it comes to content production.

Maximizing Resources & ROI

When you understand how much effort it takes to rank, you can model out what it would cost to achieve the goals your organization hopes to achieve.

Let’s say it takes, on average, a 5,000-word pillar page to rank for super competitive terms (“customer satisfaction”) and 2,000 words to rank for long tail and less competitive terms (“how to measure customer satisfaction”). Let’s say a 5,000-word pillar page costs you $1,000 and a 2,000-word post costs you $300. In addition, you need to do some manual content promotion and link building, and let’s say that costs about $250 per blog post and $750 per pillar page.

Now, you can tie that in with any SLA or quarterly traffic and conversion goals you have. It makes it vastly easier to calculate a content budget, and it also helps you realize whether your plan, given the costs, can even potentially hit the goals you’ve set.
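To make that concrete, here’s a rough back-of-the-napkin sketch of that model in Python. The production and promotion costs are the hypothetical numbers from above; the number of pieces per quarter and the per-page traffic estimates are assumptions you’d swap for your own.

```python
# Back-of-the-napkin content budget model (hypothetical numbers from the text above).

PILLAR_COST = 1000 + 750   # 5,000-word pillar page + its promotion/link building
POST_COST = 300 + 250      # 2,000-word long-tail post + its promotion/link building

def quarterly_budget(pillars: int, posts: int) -> int:
    """Total production + promotion cost for a quarter's content roadmap."""
    return pillars * PILLAR_COST + posts * POST_COST

# Assumed plan: 2 pillar pages and 18 cluster posts this quarter.
plan_cost = quarterly_budget(pillars=2, posts=18)
print(f"Quarterly content budget: ${plan_cost:,}")   # -> $13,400

# Sanity-check the plan against a traffic goal. The per-page visit estimates
# below are made-up placeholders; use your own keyword research instead.
expected_monthly_visits = 2 * 2000 + 18 * 500
print(f"Expected monthly organic visits at maturity: {expected_monthly_visits:,}")
```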

Basically, just calculate your minimum viable production capacity as well as the maximum resources you could potentially allot to your content marketing program.

  • What’s the average time to produce a piece of content?
  • How many producers (writers, designers, etc.) can you put on task?
  • What’s the average time of promotion and distribution?
  • Given those numbers, how many pieces of content can you produce in 1 year (and 1 month)? Does that match up with your expectations in terms of traffic or conversions?
    • If not, which levers can you tweak to meet those goals?

Here is where knowing how to build a growth model helps a ton.

Ian Howells had an even more robust way to model out search traffic potential. Here’s a quote from his GrowthHackers AMA:

“I use a blend of search volume, intent, and “attainable position”. With any given project, I look to find the leader in the space. So if I was going to work on a site about outdoor/camping products, I’d likely toss REI into aHrefs and spit out all their keywords over 500 searches per month.

I’d then make an assumption about how close I could get to REI’s rankings – say (for sake of argument) I was assuming I can get to 4 spots lower than REI. I’ll just run a new column in excel/google sheets adding 4 to all of their rankings.

That new “attainable position” plus a CTR curve gets me a ballpark on the amount of traffic I could realistically hope to get for the site.

A pivot table to roll these up by page then gives me a map of what pages I need and what the potential traffic is per page. I’ll start with the biggest opp pages and just work my way down.”

Beautiful.
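If you’d rather script that than build it in a spreadsheet, here’s a minimal sketch of the same idea in Python with pandas. The keyword rows, the “+4 positions” assumption, and the CTR curve are all illustrative placeholders, not real benchmarks.

```python
import pandas as pd

# Keywords exported for the market leader (e.g. REI), with their current ranking
# position, monthly search volume, and ranking URL. Rows are made-up examples.
keywords = pd.DataFrame([
    {"keyword": "camping tents",     "leader_rank": 1, "volume": 30000, "page": "/tents"},
    {"keyword": "best camping tent", "leader_rank": 3, "volume": 8000,  "page": "/tents"},
    {"keyword": "sleeping bags",     "leader_rank": 2, "volume": 20000, "page": "/sleeping-bags"},
])

# Assumption: we can get within 4 spots of the leader's position.
keywords["attainable_position"] = keywords["leader_rank"] + 4

# Illustrative CTR curve by SERP position (assumed values, not a real benchmark).
ctr_curve = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05, 6: 0.04,
             7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}
keywords["est_traffic"] = keywords.apply(
    lambda row: row["volume"] * ctr_curve.get(row["attainable_position"], 0.01),
    axis=1,
)

# The pivot-table step: roll estimated traffic up by page to prioritize pages.
page_map = keywords.groupby("page")["est_traffic"].sum().sort_values(ascending=False)
print(page_map)
```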

Content Planning: Creating a Roadmap and Course of Action

At this point, we have a rock solid strategic underpinning as well as a content growth model and expectations of how to hit our goals. Now how the hell do we plan and produce the actual content?

This section will cover keyword research (topic ideation), content production, and promotion.

How Buyers Search, and How Searchers Buy

We’ve touched on the idea of the buyer’s journey already, but generally speaking, it looks like this:


We start at a high level, attracting general readers who probably don’t know about your business yet. Through compelling content and search strategy, we hope to bring them down to the consideration and decision stages, where hopefully they’ll choose to buy from us.

In content marketing strategy, each stage has specific goals.

Awareness stage goals:

  • Build links and domain authority + page rank.
  • Cast a wide net and bring in relevant traffic (though it may not convert right away, except maybe to an email list or lead magnet).
  • Build up your “topical authority” and expertise to help you rank commercial terms.
  • Build relationships with influencers and movers and shakers in your field
  • Build demand and interest.

Consideration/Decision stage goals:

  • Convert traffic into users or customers.
  • Rank for core business terms.
  • Educate buyers and differentiate your business.
  • Sell.
  • Capture demand.

A proper content strategy includes all parts of the buyer’s journey. An unbalanced strategy will never produce the results that a holistic one will.

Business KPIs and What to Track

I’ll breeze through this part because your specific goals will depend on your business. Generally speaking though, you’ll want a way to track the following:

  1. Website traffic data (traffic source, pageviews, etc)
  2. SERP tracking (position, CTR, visibility)
  3. Conversions and business metrics (average order value, email signups, etc)

Depending on your business, those business metrics could vary. Sometimes, it’s as simple as an email subscription. Sometimes, it’s a marketing qualified lead. Sometimes it’s a customer.

Who am I to tell you your business’s goals though? Jot these down before you begin creating content and make sure you have a way to track them.

Keyword/Topic research

To start with keyword research, I like to get a bird’s eye view of the space I’m trying to conquer. For this, you can use Ahrefs to analyze competitors’ sites and see what they’re ranking for. Start by plugging your own in (or if you haven’t started writing, your closest competitor):

You can then use their “competing domains” report to find others in the space.

Without fancy tools, you can also just do a quick Google search for blogs in your niche.

Compile all of these and list their corresponding domain authority in a spreadsheet.

Now, Ahrefs has a really cool tool called “content gap” analysis that lets you plug in a bunch of competitors and see what they rank for but you don’t:

The report looks like this, which you can simply export to CSV.

Or you can individually analyze each website.

If you choose that route, I like the “top pages” report. So much value here!

What you’re trying to do at this step is get a big ass list of keywords that a whole bunch of sites in your niche rank for, but you don’t. Don’t worry about curating the list at this point, just get a big list and put it in a spreadsheet.
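If you’d rather script that step than wrangle it by hand, here’s a minimal sketch in Python that merges several exported CSVs into one master list. The folder name and the “Keyword”/“Volume” column names are assumptions; adjust them to whatever your Ahrefs exports actually contain.

```python
import glob
import pandas as pd

# Load every content gap export in a folder (file path and columns are assumptions).
frames = [pd.read_csv(path) for path in glob.glob("content_gap_exports/*.csv")]
keywords = pd.concat(frames, ignore_index=True)

# Normalize the keyword column, drop duplicates, and sort by search volume
# so the biggest opportunities float to the top of the spreadsheet.
keywords["Keyword"] = keywords["Keyword"].str.strip().str.lower()
keywords = (
    keywords.drop_duplicates(subset="Keyword")
            .sort_values("Volume", ascending=False)
)

keywords.to_csv("keyword_master_list.csv", index=False)
print(f"{len(keywords)} unique keywords written to keyword_master_list.csv")
```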

Content Strategy Models (Pillar & Cluster Model)

Now that we have a big list of keywords, we want to organize this in some meaningful content model.

Even if we had unlimited time and resources, we’d still want to prioritize the list so we could rank for important keywords faster (thus reaping more of the rewards over time), and so we can strictly curate our site to build topical authority (which Google seems to like – depth over breadth).

Now, there are tons of content models. I like to combine two of them to make a workable content roadmap (or a search insights report, as we refer to it at HubSpot). To start, I like to map out my keywords based on user intent. This is a reflection of the buyer’s journey:

The exact discrete stages depend on the business I’m working with, but I work from high traffic/low intent (awareness) keywords down to low traffic/high intent keywords. At HubSpot, I normally break terms into “What,” “How,” “Considerations & Tool Discovery,” and core decision keywords.

Now we’ve at least broken things down into discrete customer journey stages, but we still need to group things thematically. The goal of this is to internally link all related posts to show Google they are similar, which helps us build “topical authority.” The best framework I know for this is the Pillar and Cluster model.

Basically, you plan out a big pillar page topic (“Digital Marketing”) and then write several shorter posts that target longer tail keywords related to the core term (“How to Become a Digital Marketer”).

Adding a step, I actually like to start from a core product term and work my way outwards. So in this case, my product page would be a “customer feedback software” tool, then I could build a pillar page on “the ultimate guide to customer satisfaction” and then create tons of related blog posts to help it rank.

Also note that you can have many clusters that tie in together. For example, “digital marketing” is related to “lead generation” and “email marketing,” as they’re all sort of under the umbrella of marketing. Here’s an example from our work on HubSpot’s Service Hub:

And here’s a URL map we planned for a “forms” cluster (though in reality it ended up being slightly different):

Eventually, I like to build up clusters that contain a product page, one pillar page, and several cluster blog posts.

Sometimes, if the website is mature enough and there are enough “decision” level terms, I like to add in “sub-product” or “sub-service” pages. These are children of a parent service. So if you have a product, “popup forms,” that is your parent product, then you may have sub-features that still get search volume like “exit intent popup” or “scroll trigger popup.” Thematically, they belong in the same cluster.

A good tool to use to group together similar keywords is Latent Semantic Analysis. You can do this in R (which is my preferred method), or you can use a tool like LSIgraph:
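For the curious, here’s a minimal Python sketch of the same idea using scikit-learn: TF-IDF the keywords, reduce them with truncated SVD (which is what LSA is under the hood), and cluster the result into candidate groups. The keyword list is made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# A handful of made-up keywords to group into candidate clusters.
keywords = [
    "customer feedback software", "how to collect customer feedback",
    "customer satisfaction survey", "measure customer satisfaction",
    "digital marketing strategy", "how to become a digital marketer",
    "email marketing tips", "email marketing software",
]

# TF-IDF turns each keyword into a vector; character n-grams help with short strings.
tfidf = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = tfidf.fit_transform(keywords)

# Truncated SVD on a TF-IDF matrix is latent semantic analysis.
lsa = TruncatedSVD(n_components=5, random_state=42)
X_lsa = lsa.fit_transform(X)

# Cluster the reduced vectors into candidate pillar/cluster groups.
km = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = km.fit_predict(X_lsa)

for label, kw in sorted(zip(labels, keywords)):
    print(label, kw)
```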

Another use case for a tool like this is to find related keywords that you can use in a big pillar page. In other words, if you’re writing a big blog post on “content marketing strategy” (*ahem*), you may want to include sections or phrases like “content marketing strategy checklist” and “what is content strategy.”

Or you may want to build them into separate posts if they have enough search volume.

Another tool to find related keywords is answerthepublic.com. This is one of my all time favorite content planning tools when it comes to actually writing the post:

Both of these tools are great at helping you fill out an outline for your pillar pages and ultimate guides:

Eventually, you’ll have a solid content calendar, at least in spreadsheet form, full of tons of different content types. In some cases, you may just want to work out of your spreadsheet (totally viable, we do that with our internal properties at my content marketing agency). If you have a bigger team or work with clients, however, you’ll probably want to use some sort of editorial calendar (I like Trello). This helps you assign different content pieces to different content marketers:

Content Creation: Production That Gets Results

Finally! Time to actually produce content.

The important point here is that we want to bake promotion elements into the content itself so the piece does well upon publication. Publish and pray is not a strategy. “Quality content” or “great content” is a meaningless term outside of blasé publications like Content Marketing Institute.

With that in mind, especially for my “awareness” level content, I love to build in “link hooks” that help differentiate the piece and make it easier to promote and pick up steam naturally. Most content needs a little extra effort on the margins to really pick up results.

Link + Share hooks

There are a million ways to differentiate content, but here are 7 I like:

  1. Original images
  2. Data & Research
  3. Original Charts
  4. New frameworks with made up names
  5. Quotes from experts
  6. Pros & Cons tables
  7. Controversial Hot Takes

Aaaaand examples of each…

Original images

Data and Research

Original Charts

Frameworks with made up names

Quotes from experts

Pros and Cons tables

Controversial Hot Takes

Promotion tactics

Before I ever publish a post, I make sure I have a very clear idea of where I’m going to promote it. Most of the time, I like to build out a sort of “PR launch list” that includes blogs, influencers, and communities that fit into these three categories:

  • Tier 1 is high authority and high relevance. A lot of the time, these sites are competing directly for the terms I’m going for, so it’s very hard to snag a link from them.
  • Tier 2 is really where I spend most of my time. Complementary (noncompetitive) sites that aren’t insanely high authority. This is where you can get the most bang for your buck with link building especially.
  • Then Tier 3 is mid to high authority but not as relevant. Here we can consider general marketing or business blogs like business2community.

I think it’s a great exercise for anyone to map these people and places out, even if you think you know your space super well. I guarantee you’ll find some good link/partner/promotion opportunities you hadn’t even considered. Other tools to find these targets:

  • Onalytica
  • BuzzSumo
  • Scraping software comparison sites for key categories
  • Find link roundups in your niche
  • Work with agency partners or integration partners
  • Find Slack groups in your niche

Don’t forget communities and social spread:

  • Twitter
  • Facebook
  • Designer News
  • Hacker News
  • Reddit
  • Growth Hackers
  • Slack Groups
  • Quuu

Then make sure you store all this in a spreadsheet (or even better, a CRM).

Now, I like to bucket promotion into two categories: Short Term + Long Term.

First, you want a short term spike to drive attention and traffic to your post in the first place.

This helps get the early adopters on board and it spurs a bit of organic social traffic. These things are secondary signals that may actually help your organic rankings later on (secondary because presumably they aren’t direct ranking factors, though they can bring influential people to your site who may share and link to it later).

For that, at least for my industry and content, I usually throw the posts on a few communities I’m active in.

I email it out to my list.

And I share on social.

That’s basically it. I really just want a short traffic spike, and I know that I mostly hate the promotion aspect of this sport, so I do the minimum effective dose. Then I focus all my core efforts on the long term promotion, which is really just link building and content optimization (which we’ll talk about in a few sections). For now, link building.

Sometimes, I’ll do cold email outreach, especially if I’m really trying to promote a piece (for example, my recent guide on A/B testing is something I’ve put more than normal time into). If you want a big guide on that, check this out.

Honestly, though, most of the time I just ask people I know well to put a mention in a post they have or I do guest posting. Relationships trump cheap tactics, especially when it comes to outreach.

Back to the point I made earlier about not isolating yourself. Get to know people in your industry and broader space as well. It’s all about digging the well before you’re thirsty.

Measuring Results

As I mentioned in a previous section, you want to measure the three big things:

  1. Website analytics
  2. Search metrics and rank tracking
  3. Conversion & business data

While there are a million other things you can track nowadays, those are the core of any content marketing analytics approach.

As for website analytics, Google Analytics is really the gold standard. There are other tools, tons of them actually, but Google Analytics is the one I’m most used to and most comfortable with.

Bonus if you set up goals for things like email signups, and even better if you set up interesting event tracking, like scroll depth and behavioral stuff like banner interactions. I wrote a massive guide on content analytics, so check that out if you want to nerd out.

Next, if your content marketing strategy is search focused (which it probably should be), you’ll want to measure your rankings. The analog to this if your strategy is social focused would be some sort of social media monitoring tool like HootSuite.

For SEO, I like to use a combination of Ahrefs and Search Console, and also Screaming Frog for the occasional crawl.


Finally, make sure you can attribute business results to your content marketing. Often this can be done quite simply in Google Analytics using goals.

Sometimes, however, you’ll want to build a more robust data pipeline and data warehouse to create custom attribution models to weigh out the efficacy of your content.

That’s for fellow nerds though, most can get by with the data dashboards your marketing tool gives you…

All of these analytics solutions give you access to one of the most powerful (and underrated) levers in content marketing…content optimization.

Content Optimization

I’ve written a massive guide on this one already, but the gist of it is that you can and should look back at old content to improve it. The cost of doing so is much lower than creating net new content (and remember our content economics model…the lower the cost to achieve the same return, the better).

Two ways to improve content:

  1. Boost rankings on content that almost ranks on page 1
  2. Boost conversions on underperforming (but high ranking) content.

I won’t go deeply into the tactical ways of doing those two things here, but if you’re interested in content optimization, check out my guide. Also check out this list of content optimization tools. I believe every serious content program should do this at least once a quarter, and if you run a program that is truly mature, you may want to have a person or team dedicated to it.

Content Auditing and Maintenance

“Whosoever desires constant success must change his conduct with the times.” – Niccolo Machiavelli

As suggested in the “content optimization” section, you need to stop and do a content audit every once in a while – but at a level even higher than simply asking what content pieces could be doing better.

Every six months to a year, I think it’s important to take a step back, look at your KPIs and your associated results, and ask, “how are we doing?” Are our content marketing goals contributing to our business goals? Can we experiment with the type of content we’re producing (webinars? New forms of social media marketing?)

Not only that, but you should redo your SWOT every once in a while, because strengths and weaknesses change rapidly in this space (and so do opportunities and threats).

For instance, as you build domain authority, you add the strength of being able to rank more and more templatized/cheap content. However, as you build traffic via search, a threat may be your overreliance on Google as a channel. As such, a strategic response may be to look into diversification.

Looking Forward & Balancing Your Portfolio

Normally, when starting out, you need to be narrowly focused on an acquisition channel, and even more so a tactic within that channel, and exploit the hell out of it while it still gives you returns. Then, over time, either the returns slow down (the law of shitty click throughs), or your reliance on the tactic or channel leads you to a fragile position.

In either case, a mature program diversifies and rebalances the portfolio every once in a while.

There are many portfolio models you could use in the game of content marketing strategy, but the one I like the most, as you scale, is a variation of the Barbell Strategy of investing. The definition from Wikipedia:

“One variation of the barbell strategy involves investing 90% of one’s assets in extremely safe instruments, such as treasury bills, with the remaining 10% being used to make diversified, speculative bets that have massive payoff potential. In other words, the strategy caps the maximum loss at 10%, while still providing exposure to huge upside”

In other words, continue to pour the vast majority of your resources and efforts into your safe, stable channel (probably a strong SEO-driven content program) and then pull a smaller percentage of your resources to work only on high volatility, experimental programs.

This way, you can cap your downside, continue to reap the rewards of your cash cow, but encourage innovation before other competitors move in on your slow moving machine.

Conclusion

Content marketing strategy is a tough thing to learn, but with time and pressure, you can master it.

Reading a guide like this will help (hopefully), but as with anything meaningful, you’ve got to plunge into the deep end and just figure things out for yourself.

There’s so much domain specificity here, that even though I’m aware of and disappointed in biased advice, there’s almost no way I’m not writing with some sort of bias (however hidden from myself).

However, this guide should give you a good starting framework to operate from, even if the specifics are slightly different in your case.

The post Content Marketing Strategy: Everything You Need to Know to Build a Growth Machine appeared first on Alex Birkett.

]]>
What is A/B Testing? An Advanced Guide + 29 Guidelines https://www.alexbirkett.com/ab-testing/ Mon, 12 Nov 2018 15:33:19 +0000 https://www.alexbirkett.com/?p=609 A/B testing (aka split testing or online controlled experiments) is hard. It’s sometimes billed as a magic tool that spits out a decisive answer. It’s not. It’s a randomized controlled trial, albeit online and with website visitors or users, and it’s reliant upon proper statistical practices. At the same time, I don’t think we should ... Read more

The post What is A/B Testing? An Advanced Guide + 29 Guidelines appeared first on Alex Birkett.

]]>
A/B testing (aka split testing or online controlled experiments) is hard. It’s sometimes billed as a magic tool that spits out a decisive answer. It’s not. It’s a randomized controlled trial, albeit online and with website visitors or users, and it’s reliant upon proper statistical practices.

At the same time, I don’t think we should hold the standards so high that you need a data scientist to design and analyze every single experiment. We should democratize the practice to the most sensible extent, but we should create logical guardrails so the experiments that are run are run well.

The best way to do that I can think of is with education and a checklist. If it works for doctors, I think we can put it to use, too.

So this article is two things: a high level checklist you can use on a per test basis (you can get a Google Docs checklist here), and a comprehensive guide that explains each checklist item in detail. It’s a choose your own adventure. You can read it all (including outbound links), or just the highlights.

Also, don’t expect it to be completely extensive or cover every fringe case. I want this checklist to be usable by people at all levels of experimentation, and at any type of company (ecommerce, SaaS, lead generation, whatever). As such, I’ll break it into three parts:

  • The Basics – don’t run experiments if you don’t follow these guidelines. If you follow these, ~80% of your experiments should be properly run.
  • Intermediate Topics – slightly more esoteric concepts, but still largely useful for anyone running tests consistently. This should help reduce errors in ~90% of experiments you run.
  • Advanced Topics – won’t matter for most people, but will help you decide on fringe cases and more advanced testing use cases. This should bring you up to ~95-98% error reduction rate in running your tests.

I’ll also break this up into simple heuristics and longer descriptions. Depending on your level of nerdiness or laziness, you can choose your own adventure:

The frustrating part about making a guide or a checklist like this is there is so much nuance. I’m hyper aware that this will never be complete, so I’m setting the goal to be useful. To be useful means it can’t run on for the length of a textbook, though it almost does at ~6000 words.

(In the case that you want to read a textbook, read this one).

I’m not reinventing the wheel here. I’m basically compiling this from my own experiences, my mentors, papers from Microsoft, Netflix, Amazon, Booking.com and Airbnb, and other assorted sources (all listed at the end).

What is A/B Testing?

A/B testing is a controlled experiment (typically online) where two or more different versions of a page or experience are delivered randomly to different segments of visitors. Imagine a homepage where you’ve got an image slider above the fold, and then you want to try a new version instead showing a product image and product description next to a web form. You could run a split test, measure user behavior, and get the answer as to which is optimal:


Statistical analysis is then performed to infer the performance of the new variants (the new experience or experiences, version B/C/D, etc.) in relation to the control (the original experience, or version A).
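To make that concrete, the most common case compares two conversion rates with a simple test of proportions. Here’s a minimal sketch in Python with made-up visitor and conversion numbers, using a chi-square test on the 2x2 table of conversions vs. non-conversions.

```python
from scipy.stats import chi2_contingency

# Made-up results: control (A) and variant (B) visitors and conversions.
visitors_a, conversions_a = 10000, 500   # 5.00% conversion rate
visitors_b, conversions_b = 10000, 565   # 5.65% conversion rate

# 2x2 contingency table: [conversions, non-conversions] for each variant.
table = [
    [conversions_a, visitors_a - conversions_a],
    [conversions_b, visitors_b - conversions_b],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"A: {conversions_a / visitors_a:.2%}  B: {conversions_b / visitors_b:.2%}")
print(f"p-value = {p_value:.4f}")  # compare to your pre-registered alpha, e.g. 0.05
```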

A/B tests are performed commonly in many industries including ecommerce, publications, and SaaS. In addition to running experiments on a web page, you can set up A/B tests on a variety of channels and mediums, including Facebook ads, Google ads, email newsletter workflows, email subject line copy, marketing campaigns, product features, sales scripts, etc. – the limit is really your imagination.

Experimentation typically falls under one of several roles or titles, which vary by industry and company. For example, A/B testing is strongly associated with CRO (conversion optimization or conversion rate optimization) as well as product management, though marketing managers, email marketers, user experience specialists, performance marketers, and data scientists or analysts may also run A/B tests.

The Basics: 10 Rules of A/B Testing

  1. Decide, up front, what the goal of your test is and what metric matters to you (the Overall Evaluation Criterion).
  2. Plan upfront what action you plan on taking in the event of a winning, losing, or inconclusive result.
  3. Base your test on a reasonable hypothesis.
  4. Determine specifically which audience you’ll be targeting with this test.
  5. Estimate your minimum detectable effect, required sample size, statistical power, and how long your test will be required to run before you start running the test.
  6. Run the test for full business cycles, accounting for naturally occurring data cycles.
  7. Run the test for the full time period you had planned, and only then determine the statistical significance of the test (normally, as a rule of thumb, accepting a p value of <.05 as “statistically significant”).
  8. Unless you’re correcting for multiple comparisons, stick to running one variant against the control (in general, keep it simple), and using a simple test of proportions, such as Chi Square or Z Test, to determine the statistical significance of your test.
  9. Be skeptical about numbers that look too good to be true (see: Twyman’s Law)
  10. Don’t shut off a variant mid test or shift traffic allocation mid test

The Basics of A/B Testing: Explained

1. Decide Your Overall Evaluation Criterion Up Front

Where you set your sights is generally where you end up. We all know the value of goal setting. Turns out, it’s even more important in experimentation.

Even if you think you’re a rational, objective person, we all want to win and to bring results. Whether intentional or not, sometimes we bring results by cherry picking the data.

Here’s an example (a real one, from the wild). Buffer wants to A/B test their Tweets. They launch two of ‘em out:

Can you tell which one the winner was?

Without reading their blog post, I genuinely could not tell you which one performed better. Why? I have no idea what metric they’re looking to move. On Tweet two, clicks went down but everything else went up. If clicks to the website is the goal, Tweet one is the winner. If retweets, tweet number two wins.

So, before you ever set a test live, choose your overall evaluation criterion (or North Star metric, whatever you want to call it), or I swear to you, you’ll start hedging and justifying that “hey, but click through rate/engagement/time on site/whatever increased on the variation. I think that’s a sign we should set it live.” It will happen. Be objective in your criterion.

(Side note, I’ve smack talked this A/B test case study many times, and there are many more problems with it than just the lack of a single metric that matters, including not controlling for several confounding variables – like time – or using proper statistics to analyze it.)

Make sure, then, that you’re properly logging your experiment data, including number of visitors and their bucketing, your conversion goals, and any behavior necessary to track in the conversion funnel.

2. Plan Your Proposed Action Per Test Result

What do you hope to do if your test wins? Usually this is a pretty easy answer (roll it out live, of course).

But what do you plan to do if your test loses? Or even murkier, what if it’s inconclusive?

I realize this sounds simple on paper. You might be thinking, “move onto the next test.” Or “try out a different variation of the same hypothesis.” Or “test on a larger segment of our audience to get the necessary data.”

That’s the point, there are many decisions you could make that affect your testing process as a whole. It’s not as simple as “roll it out live” or “don’t roll it out live.”

Say your test is trending positive but not quite significant at a p value of < .05. You actually do see a significant lift, though, in a micro-conversion, like click through rate. What do you do?

It’s not my place to tell you what to do. But you should state your planned actions up front so you don’t run into the myriad of cognitive biases that we humans have to deal with.

Related reading here.

3. Base your test on a reasonable hypothesis

What is a hypothesis, anyway?

It’s not a guess as to what will happen in your A/B test. It’s not a prediction. It’s one big component of ye old Scientific Method.

A good hypothesis is “a statement about what you believe to be true today.” It should be falsifiable, and it should have a reason behind it.

This is the best article I’ve read on experiment hypotheses: https://medium.com/@talraviv/thats-not-a-hypothesis-25666b01d5b4

I look at developing a hypothesis as a process of being clear in my thinking and approach to the science of A/B testing. It slows me down, and it makes me think “what are we doing here?” As the article above states, not every hypothesis needs to be based on mounds of data. It quotes Feynman: “It is not unscientific to take a guess, although many people who are not in science believe that it is.”

I do believe any mature testing program will require the proper use of hypotheses. Andrew Anderson has a different take, and a super valid one, about the misuse of hypotheses in the testing industry. I largely agree with his take, and I think it’s mostly based on the fact that most people are using the term “hypothesis” incorrectly.

4. Determine specifically which audience you’ll be targeting with this test

This is relatively quick and easy to understand. Which population would you like to test on – desktop, mobile, PPC audience #12, users vs. non-users, customers who read our FAQ page, a specific sequence of web pages, etc. – and how can you take measures to exclude the data of those who don’t apply to that category?

It’s relatively easy to do this, at least for broad technological categorizations like device category, using common A/B testing platforms.

Point is this: you want to learn about a specific audience, and the less you pollute that sample, the cleaner your answers will be.

5. Estimate your MDE, sample size, statistical power, and how long your test will run before you run it

Most of the work in A/B testing comes before you ever set the test live. Once it’s live, it’s easy! Analyzing the test after the fact is especially easier if you’ve done the hard and prudent work up front.

What do you need to plan? The feasibility of your test in terms of traffic and time length, what minimum detectable effect you’d need to see to discern an uplift, and the sample size you’ll need to reach to consider analyzing your test.

It sounds like a lot, but you can do all of this with the help of an online calculator.

I actually like to use a spreadsheet that I found on the Optimizely knowledge base (here’s a link to the spreadsheet as well). It visually shows you how long you’d have to run a test to see a specific effect size, depending on the amount of traffic you have to the page and the baseline conversion rate.

You can also use Evan Miller’s Awesome A/B testing tools. Or, CXL has a bunch of them as well. Search Discovery also has a calculator with great visualizations.
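If you’d rather script it than use a calculator or spreadsheet, the same estimate is a few lines of Python with statsmodels. The baseline rate, minimum detectable effect, and daily traffic below are made-up inputs.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05          # current conversion rate (assumed)
mde = 0.10               # minimum detectable effect: a 10% relative lift (assumed)
target = baseline * (1 + mde)

# Convert the two proportions into a standardized effect size (Cohen's h).
effect_size = proportion_effectsize(target, baseline)

# Visitors needed *per variant* at 5% significance and 80% power.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_variant:,.0f} visitors per variant")

# Divide by the page's daily traffic per variant to estimate test duration.
daily_visitors_per_variant = 1500   # assumed
print(f"~{n_per_variant / daily_visitors_per_variant:.0f} days at current traffic")
```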

6. Run the test for full business cycles, accounting for naturally occurring data cycles

One of the first and most common mistakes everyone makes when they start A/B testing is calling a test when it “reaches significance.” This, in part, must be because in our daily lives, the term “significance” means “of importance,” so it sounds final and deterministic.

Statistical significance (or the confidence level) is just an output of some simple math that tells you how unlikely a result is given the assumption that both variants are the same.

Huh?

We’ll talk about p-values later, but for now, let’s talk about business cycles and how days of the week can differ.


The days of the week tend to differ quite a bit. Our goal in A/B testing is to get a representative sample of our population, which generally involves collecting enough data that we smooth out any jagged edges, like a super Saturday where conversion rates tank and maybe the website behavior is different.

Website data tends to be non-stationary (as in, it changes over time) or sinusoidal – or rather, it looks like this:


While we can’t reduce the noise to zero, we can run our tests for full weeks and business cycles to try to smooth things out as much as possible.

7. Run the test for the full time period you had planned

Back to those pesky p-values. As it turns out, an A/B test can dip below a .05 p-value (the commonly used rule to determine statistical significance) at many points during the test, and at the end of it all, sometimes it can turn out inconclusive. That’s just the nature of the game.


Anyone in the CRO space will tell you that the single most common mistake people make when running A/B tests is ending the test too early. It’s the ‘peeking’ problem. You see that the test has “hit significance,” so you stop the test, celebrate, and launch the next one. Problem? It may not have been a valid test.

The best post written about this topic, aptly titled, is Evan Miller’s “How Not To Run An A/B Test.” He walks through some excellent examples to illustrate the danger of this type of peeking.

Essentially, if you’re running a controlled experiment, you’re generally setting a fixed time horizon at which you view the data and make your decision. When you peek before that time horizon, you’re introducing more points at which you can make an erroneous decision, and the risk of a false positive goes wayyy up.
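If you want to convince yourself (or a stakeholder) of this, here’s a small simulation sketch in Python: it runs A/A tests (where there is no real difference) and compares how often you’d “call a winner” when checking the p-value at several interim peeks versus only once at the planned end. The traffic numbers and checkpoints are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - norm.cdf(abs(z)))

n_sims, n_per_variant, true_rate = 2000, 10000, 0.05
checkpoints = [2000, 4000, 6000, 8000, 10000]   # interim "peeks" at the data

peeking_fp, fixed_fp = 0, 0
for _ in range(n_sims):
    a = rng.random(n_per_variant) < true_rate   # A and B are identical (A/A test)
    b = rng.random(n_per_variant) < true_rate
    p_values = [p_value(a[:n].sum(), n, b[:n].sum(), n) for n in checkpoints]
    peeking_fp += any(p < 0.05 for p in p_values)   # stop at the first "significant" peek
    fixed_fp += p_values[-1] < 0.05                 # only look at the planned horizon

print(f"False positive rate with peeking: {peeking_fp / n_sims:.1%}")
print(f"False positive rate at fixed horizon: {fixed_fp / n_sims:.1%}")
```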


8. Stick to testing only one variant (unless you’re correcting for it…)

Here we’ll introduce an advanced topic: the multiple comparisons problem.

When you test several variants, you run into a problem known as “cumulative alpha error.” Basically, with each additional variant, sans statistical corrections, you run a higher and higher probability of seeing a false positive. KonversionsKraft made a sweet visualization to illustrate this.

This looks scary, but here’s the thing: almost every major A/B testing tool has some built in mechanism to correct for multiple comparisons. Even if your testing tool doesn’t, or if you use a home-brew testing solution, you can correct for it yourself very simply using one of many methods:

However, if you’re not a nerd and you just want to test some shit and maybe see some wins, start small. Just one v one.

When you do feel more comfortable with experimentation, you can and should look into expanding into A/B/n tests with multiple variants.

This is a core component of Andrew Anderson’s Discipline Based Testing Methodology, and if I can, I’ll wager to say it’s because it increases the beta of the options, or the differences between each one of the experiences you test. This, at heart, decreases your reliance on hard opinions or preconceived ideas about “what works” and opens you up to trying things you may not have in a simple A/B test.

But start slowly, keep things simple.

9. Be skeptical about numbers that look too good to be true

If there’s one thing CRO has done to my personality, it’s heightened my level of skepticism. If anything looks too good to be true, I assume something went wrong. Actually, most of the time, I’m poking at prodding at things, seeing where they may have been broken or setup incorrectly. It’s an exhausting mentality, but one that is necessary when dealing with so many decisions.

Ever see those case studies that proclaim a call to action button color change on a web page led to a 100%+ increase in conversion rate? Almost certainly bullshit. If you see something like this, even if you just get a small itch where you think, “hmm, that seems…interesting,” go after it. Also second-guess the data, and triple-guess yourself.

As the analytics legend Chris Mercer says, “trust but verify.”

And read about Twyman’s Law here.

10. Don’t shut off a variant mid test or shift traffic allocation mid test

I guess this is sort of related to two previous rules here: run your test for the full length and start by only testing one variant against the control.

If you’re testing multiple variants, don’t shut off a variant because it looks like it’s losing and don’t shift traffic allocation. Otherwise, you may risk Simpson’s Paradox.

Intermediate A/B Testing Issues: A Whole Lot More You Should Maybe Worry About

  1. Control for external validity factors and confounding variables
  2. Pay attention to confidence intervals as well as p-values
  3. Determine whether your test is a Do No Harm or a Go For It test, and set it up appropriately.
  4. Consider which type of test you should run for which problem you’re trying to solve or answer you’re trying to find (sequential, one tail vs two tail, bandit, MVT, etc)
  5. QA and control for “flicker effect”
  6. Realize that the underlying statistics are different for non-binomial metrics (revenue per visitor, average order value, etc.) – use something like the Mann-Whitney U-Test or robust statistics instead.
  7. Trigger the test only for those users affected by the proposed change (lower base rates lead to greater noise and underpowered tests)
  8. Perform an A/A test to gauge variance and the precision of your testing tool
  9. Correct for multiple comparisons
  10. Avoid multiple concurrent experiments and make use of experiment “swim lanes”
  11. Don’t project precise uplifts onto your future expectations from those you see during an experiment.
  12. If you plan on implementing the new variation in the case of an inconclusive test, make sure you’re running a two-tailed hypothesis test to account for the possibility that the variant is actually worse than the original.
  13. When attempting to improve a “micro-conversion” such as click through rate, make sure it has a downstream effect and acts as a causal component to the business metric you care about. Otherwise, you’re just shuffling papers.
  14. Use a hold-back set to calculate the estimated ROI and performance of your testing program

Intermediate A/B Testing Issues: Explained

1. Control for external validity factors and confounding variables

Well, you know how to calculate statistical significance, and you know exactly why you should run your test for full business cycles in order to capture a representative sample.

This, in most cases, will reduce the chance that your test will be messed up. However, there are plenty more validity factors to worry about, particularly those outside of your control.

Anything that reduces the representativeness or randomness of your experiment sample can be considered a validity factor. In that regard, some common ones are:

  • Bot traffic/bugs
  • Flicker effect
  • PR spikes
  • Holidays and external events
  • Competitor promotions
  • Buggy measurement setup
  • Cross device tracking
  • The weather

I realize this tip is frustrating, because the list of potential validity threats is expansive, and possibly endless.

However, understand: A/B testing always involves risks. All you need to do is understand that and try to document as many potential threats as possible.

You know how in an academic paper, they have a section on limitations and discussion? Basically, you should do that with your tests as well. It’s impossible to isolate every single external factor that could affect behavior, but you can and should identify clearly impactful things.

For instance, if you raised a round of capital and you’re on the front page of TechCrunch and Hacker News, maybe that traffic isn’t exactly representative? Might be a good time to pause your experiments (or exclude that traffic from your analysis).

2. Pay Attention to Confidence Intervals as Well as P-Values

While it’s common knowledge among experimenters that one should strive to call a test “significant” if the p-value is below .05. This, while technically arbitrary, ensures we have a certain level of risk in our decision making and it never rises above an uncomfortable point. We’re sort of saying, 5% of experiments may show results purely due to chance, but we’re okay with that, in the long run.

Many people, however, fail to understand or use confidence intervals in decision making.

What’s a confidence interval in relation to A/B testing?

Confidence intervals are the amount of error allowed in A/B testing – a measure of the reliability of an estimate. PRWD outlines a good example of this.


Basically, if your results, including confidence intervals, overlap at all, then you may be less confident that you have a true winner.

John Quarto-vonTivadar has a great visual explaining this.


Of course, the greater your sample size, the lower the margin of error becomes in an A/B test. As is usually the case with experimentation, high traffic is a luxury and really helps us make clearer decisions.
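
Here’s a minimal sketch of that idea – a confidence interval around the difference in conversion rates between control and variant (the visitor and conversion counts are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def diff_confidence_interval(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """CI for the absolute difference in conversion rate (variant minus control)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = norm.ppf(1 - (1 - confidence) / 2)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical test: 20,000 visitors per variant
low, high = diff_confidence_interval(conv_a=600, n_a=20_000, conv_b=660, n_b=20_000)
print(f"Difference in conversion rate: {low:+.2%} to {high:+.2%}")
# If that interval straddles zero, you're much less confident you have a true winner.
```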

3. Determine whether your test is a Do No Harm or a Go For It test, and set it up appropriately.

As you run more and more experiments, you’ll find yourself less focused on an individual test and more on the system as a whole. When this shift happens, you begin to think more in terms of risk, resources, and upside, and less in terms of how much you want your new call to action button color to win.

A fantastic framework to consider comes from Matt Gershoff. Basically, you can bucket your test into two categories:

  1. Do No Harm
  2. Go For It

In a Do No Harm test, you care about the potential downside and you need to mitigate it or avoid it. In a Go For It test, we have no additional cost to making a Type 1 error (false positive), so there is no direct cost invoked when making a given decision.

In the article, Gershoff gives headline optimization as an example:

“Each news article is, by definition, novel, as are the associated headlines.

Assuming that one has already decided to run headline optimization (which is itself a ‘Do No Harm’ question), there is no added cost, or risk to selecting one or the other headlines when there is no real difference in the conversion metric between them. The objective of this type of problem is to maximize the chance of finding the best option, if there is one. If there isn’t one, then there is no cost or risk to just randomly select between them (since they perform equally as well and have the same cost to deploy). As it turns out, Go For It problems are also good candidates for Bandit methods.”

Highly suggested that you read his full article here.

4. Consider which type of test you should run for which problem you’re trying to solve or answer you’re trying to find (sequential, one tail vs two tail, bandit, MVT, etc)

The A/B test is sort of the gold standard when it comes to online optimization. It’s the clearest way to infer a difference between variations of a given element or experience. There are, though, other methods for learning about your users.

Two in particular that are worth talking about:

  1. Multivariate testing
  2. Bandit tests (or other algorithmic optimization)

Multivariate experiments are wonderful for testing multiple micro-components (e.g. a headline change, CTA change, and background color change) and determining their interaction effects. You find which elements work optimally with each other, instead of a grand and macro-level lift without context as to which micro-elements are impactful.


In my anecdotal experience, I’d say good testing programs usually run one or two multivariate tests for every 10 experiments run (the rest being A/B/n).

Bandit tests are a different story, as they are algorithmic. The hope is that they minimize “regret,” or the amount of time you’re exposing your audience to a suboptimal experience. The algorithm updates in real time to show the winning variant to more and more people over time.


In this way, it sort of “automates” the A/B testing process. But bandits aren’t always the best option. They sway with new data, so there are contextual problems associated with, say, running a bandit test on an email campaign.

However, bandit tests tend to be very useful in a few key circumstances:

  • Headlines and Short-Term Campaigns (e.g. during holidays or short term, perishable campaigns)
  • Automation for Scale (e.g. when you have tons and tons of tests you’d like to run on thousands of templatized landing pages)
  • Targeting (we’ll talk about predictive targeting in “advanced” stuff)
  • Blending Optimization with Attribution (i.e. testing, while at the same time, determining which rules and touch points contribute to the overall experience and goals).
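
To make the mechanics a little more concrete, here’s a toy Thompson sampling sketch for a Bernoulli bandit – think three headlines with hidden conversion rates. All of the numbers are made up, and this isn’t how any particular tool implements it:

```python
import numpy as np

rng = np.random.default_rng(7)
true_rates = [0.030, 0.032, 0.045]        # hidden conversion rates of three headlines
successes = np.ones(3)                    # Beta(1, 1) priors for each arm
failures = np.ones(3)

for _ in range(50_000):                   # each loop = one visitor
    # Thompson sampling: draw a plausible rate per arm, show the arm with the highest draw
    sampled_rates = rng.beta(successes, failures)
    arm = int(np.argmax(sampled_rates))
    converted = rng.random() < true_rates[arm]
    successes[arm] += converted
    failures[arm] += 1 - converted

traffic_share = (successes + failures - 2) / 50_000
print("Share of traffic each headline ended up receiving:", traffic_share.round(3))
```

Over time, the best-performing headline soaks up most of the traffic, which is exactly the “regret minimization” behavior described above.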

5. QA and control for “flicker effect”

Flicker effect is a very special type of A/B test validity threat. It’s basically when your testing tool causes a slight delay on the experiment variation, briefly flashing the original content before serving the variation.

There are tons of ways to reduce flicker effect that I won’t go into here (read this article instead). A broader point is simply that you should “measure twice, cut once,” and QA your test on all major devices and categories before serving it live. Better to be prudent and get it right than to fuck up your test data and waste all the effort.

6. Realize that the underlying statistics are different for non-binomial metrics (revenue per visitor, average order value, etc.) – use something like the Mann-Whitney U-Test instead of a Z test.

When you run an A/B test with the intent to increase revenue per visitor or average order value, you can’t just plug your numbers into the same statistical significance calculator as you would with conversion rate tests.

Essentially, you’re looking at a different underlying distribution of your data. Instead of a binomial distribution (did convert vs. didn’t convert), you’re looking at a variety of order sizes, and that introduces the concept of outliers and variance into your calculations. It’s often the case that you’ll have a distribution affected by a very small amount of bulk purchasers, who skew a distribution to the right:


In these cases, you’ll want to use statistical test that does not make the assumption of a normal distribution, such as Mann-Whitney U-Test.

7. Trigger the test only for those users affected by the proposed change (lower base rates lead to greater noise and underpowered tests)

Only those affected by the test should be bucketed and included for analysis. For example, if you’re running a test on a landing page, where a modal pops up after scrolling 50%, you’d only want to include those who scroll 50% in the test (those who don’t would never have been the audience intended for the new experience anyway).

The mathematical reasoning for this is that filtering out unaffected users can improve the sensitivity (statistical power) of the test, reducing noise and making it easier for you to find effects/uplifts.

Most of the time, this is a fairly simple solution involving triggering an event at the moment where you’re looking to start analysis (at 50% scroll depth in the above example).

Read more on triggering here.

8. Perform an A/A test to gauge variance and the precision of your testing tool

While there’s a constant debate as to whether A/A tests are important or not, it sort of depends on your scale and what you hope to learn.

The purpose of an A/A test – testing the original vs the original – is mainly to establish trust in your testing platform. Basically, you’d expect to see statistically significant results – despite the variants being the same – about 5% of the time with a p-value of < .05.

In reality, A/A tests often open up and introduce you to implementation errors like software bugs. If you truly operate at high scale and run many experiments, trust in your platform is pivotal. An A/A test can help provide some clarity here.

This is a big topic. Ronny Kohavi wrote a great paper on it, which you can find here.
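
Here’s a small simulation of that idea (the conversion rate and sample size are arbitrary): run a bunch of A/A tests with a fixed horizon and a .05 threshold, and roughly 5% should come out “significant.” Much more than that, and something is off with your setup:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(3)
significant = 0

for _ in range(2_000):
    # Both "variants" are identical: same 3% conversion rate, 10,000 visitors each
    conv_a = rng.binomial(10_000, 0.03)
    conv_b = rng.binomial(10_000, 0.03)
    _, p = proportions_ztest([conv_a, conv_b], [10_000, 10_000])
    significant += p < 0.05

print(f"{significant / 2_000:.1%} of A/A tests came out 'significant'")  # expect roughly 5%
```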

9. Correct for multiple comparisons whenever applicable

We’ve talked a bit of about the multiple comparisons problem, and how, when you’re just starting out, it’s best to just run simple A/B test. But you’re eventually going to get curious, and you’ll eventually want to run a test with multiple variants, say an A/B/C/D/E test. This is good, and you can often get more consistent results from your program when you test a greater variety of options. However, you do want to correct for multiple comparisons when doing this.

It’s fairly simple mathematically. Just use Dunnett’s test or the Sidak correction.

You also need to keep this multiple comparisons problem in mind when you do post-test analysis on segments. Basically, if you look at enough segments, you’ll find a statistically significant result. The same principle applies (you’re increasing the risk of a false positive with every new comparison).

When I do post-test segmentation, I often use it more as a tool to find research questions than to find answers and insights to base decisions on. So if I find a “significant” lift in a given segment, say Internet Explorer visitors in Canada, I note that as an insight that may or may not be worth testing. I don’t just implement a personalization rule, as doing that each time would certainly lead to organizational complexity, and would probably result in many false positives.

10. Avoid multiple concurrent experiments and make use of experiment “swim lanes”

Another problem that comes with scale is running multiple concurrent experiments. Basically, if you run two tests, and they’re being run on the same sample, you may have interaction effects that ruin the validity of the experiment.

Best case scenario: you (or your testing tool) create technical swim lanes where a given group can only be exposed to one experiment at a time. This automatically prevents that sort of cross-pollination and reduces sample pollution.

A scrappier solution, one more fit for those running fewer tests, is to run your proposed experiments through a central team who gives the green-light and can see, at a high level, where there may be interaction effects, and avoid them.

11. Don’t project precise uplifts onto your future expectations from those you see during an experiment.

So, you got a 10% lift at 95% statistical significance. That means you get to celebrate that win in your next meeting. You do want to state the business value of an experiment like this, of course – what’s a 10% relative lift mean in isolation – so you also include a projection of what this 10% lift means for the business. “We can expect this to bring us 1,314 extra subscriptions per month,” you say.

While I love the idea of tying things back to the business, you want to tread lightly in matters of pure certainty, particularly when you’re dealing with projections.

An A/B test, despite misconceptions, can only truly tell you the difference between variants during the time we’re running the experiment. We do hope the difference between variants extends past the duration of the test itself, which is why we go through so much trouble in our experiment design to make sure we’re randomizing properly and testing on a representative sample.

But a 10% lift during the test does not mean you’ll see a 10% lift during the next few months.

If you do absolutely need to project some sort of expected business results, at least do so using confidence intervals or a margin of error.

“We can expect, given the limitations of our test, to see X more subscriptions on the low side, and on the high side, we may see as many as Y more subscriptions, but there’s a level of uncertainty involved in making these projections. Regardless, we’re confidence our result is positive and will result in an uptick in subscriptions.”

Nuance may be boring and disappointing, but expectation setting is cool.

12. If you plan on implementing the new variation in the case of an inconclusive test, make sure you’re running a two-tailed hypothesis test to account for the possibility that the variant is actually worse than the original.

One-tail vs. two-tail A/B testing. This can seem like a somewhat pedantic debate in many cases, but if you’re running an A/B test where you expect to roll out the variant even if the test is inconclusive, you will want to protect your downside with a two-sided hypothesis test.

Read more on the difference between one-tail and two-tail A/B tests here.

13. When attempting to improve a “micro-conversion” such as click through rate, make sure it has a downstream effect and acts as a causal component to the business metric you care about. Otherwise, you’re just shuffling papers.

Normally, you should choose a metric that matters to your business. The conversion rate, revenue per visitors, activation rate, etc.

Sometimes, however, that’s not possible or feasible, so you work on moving a “micro-conversion” like click through rate or improving the number of people who use a search function. Often, these micro-conversions are correlative metrics, meaning they tend to associate with your important business metric, but aren’t necessarily causal.

Increased CTR might not increase your bottom line (Image Source)

A good example is if you find a piece of data that says people who use your search bar purchase more often and at higher volumes than those who don’t. So, you run a test that tries to increase the amount of people using that search feature.

This is fine, but make sure, when you’re analyzing the data, that your important business metric moves. So you increased people who use the search feature – does that also increase purchase conversion rate and revenue? If not, you’re shuffling papers.

14. Use a hold-back set to calculate the estimated ROI and performance of your testing program

Want to know the ROI of your program? Some top programs make use of a “holdback set” – keeping a small subset of your audience on the original version of your experience. This is actually crucial when analyzing the merits of personalization/targeting rules and machine learning-based optimization systems, but it’s also valuable for optimization programs overall.

A universal holdback – keeping say 5% of traffic as a constant control group – is just one way to try to parse out your program’s ROI. You can also do:

  • Victory Lap – Occasionally, run a split test combining all winning variants over the last 3 months against a control experience to confirm the additive uplift of those individual experiments.
  • Re-tests – Re-test individual, winning tests after 6 months to confirm that “control” still underperforms (and the rate at which it does).

If you’re only running a test or two per month, these system-level decisions may be less important. But if you’re running thousands of tests, it’s important to start learning about program effectiveness as well as the potential “perishability” or decay of any given test result.

Here are a bunch of other ways to analyze the ROI of a program (just don’t use a simple time period comparison, please).

Advanced A/B Testing Issues – Mostly Fringe Cases That Some Should Still Consider

  1. Look out for sample ratio mismatch.
  2. Consider the case for a non-inferiority test when you only want to mitigate potential downsides on a proposed change
  3. Use predictive targeting to exploit segments who respond favorably to an experience.
  4. Use a futility boundary to mitigate regret during a test
  5. When a controlled experiment isn’t possible, estimate significance using a bayesian causal model

Advanced A/B Testing Issues: Explained

1. Look out for sample ratio mismatch.

Sample Ratio Mismatch is a special type of validity threat. In an A/B test with two variants, you’d hope that your traffic would be randomly and evenly allocated between both variants. However, in certain cases, we see that the ratio of traffic allocation is off by more than random chance would explain. This is known as “sample ratio mismatch.”

This, however, is another topic I’m going to politely duck out of explaining, and instead, link to the master, Ronny Kohavi, and his work.

He also has a handy calculator so you can see if your test is experiencing a bug like this.
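
One common way to check for this is a chi-square goodness-of-fit test against your intended split. Here’s a minimal sketch with made-up counts:

```python
from scipy.stats import chisquare

# Hypothetical 50/50 test: expected an even split, observed something lopsided
observed = [50_710, 49_290]
expected = [sum(observed) / 2] * 2

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"SRM check p-value: {p_value:.4f}")
# A very small p-value (e.g. < 0.001) suggests the traffic split itself is broken,
# and the test results shouldn't be trusted until the bug is found.
```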

2. Consider the case for a non-inferiority test when you only want to mitigate potential downsides on a proposed change

Want to run a test solely to mitigate risk and avoid implementing a suboptimal experience? You could try out a “non-inferiority” test (as opposed to the normal “superiority” test) in the case of easy decision tests and tests with side benefits outside of measurement capability (e.g. brand cohesiveness).

This is a complicated topic, so I’ll link out to a post here.

3. Use predictive targeting to exploit segments who respond favorably to an experience.

A/B testing is cool, as is personalization. But after a while, your organization may be operating at such a scale that it isn’t feasible to manage, let alone choose, targeting rules for all those segments you’re hoping to reach. This is a great use case for machine learning.

Solutions like Conductrics have powerful predictive targeting engines that can find and target segments who respond better to a given experience than the average user. So Conductrics (or another solution) may find that rural visitors using smartphones convert better with Variant C. You can weigh the ROI of setting up that targeting rule and do so, managing it programmatically.


4. Use a futility boundary to mitigate regret during a test

This is basically a testing methodology to improve efficiency and allow you to stop A/B tests earlier. I’m not going to pretend I fully grok this one or have used it, but here’s a guide if you’d like to give it a try. This is something I’m going to look into trying out in the near future.

5. When a controlled experiment isn’t possible, estimate significance using a bayesian causal model

Often, when you’re running experiments, particularly those that are not simple website changes like landing page CTAs, you may not be able to run a fully controlled experiments. I’m thinking of things like SEO changes, campaigns you’re running, etc.

In these cases, I usually try to estimate how impactful my efforts were using a tool like GA Effect.

It appears my SEO efforts have paid off marginally

Conclusion

As I mentioned up front, by its very nature, A/B testing is a statistical process, and statistics deals with the realm of uncertainty. Therefore, while rules and guidelines can help reduce errors, there is no decision tree that results in a perfect, error-free testing program.

The best weapon you have is your own mind, inquisitive, critical, and curious. If you come across a fringe issue, discuss it with colleagues or Google it. There are tons of resources and smart people out there.

I’m not done learning about experimentation. I’ve barely cracked the surface. So I may reluctantly come to find out in a few years that this list is naive, or ill-suited for actual business needs. Who knows.

But that’s part of the point: A/B testing is difficult, worthwhile, and there’s always more to learn about it.


Also, thanks to Erik Johnson, Ryan Farley, Joao Correia, Shanelle Mullin, and David Khim for reading this and adding suggestions before publication.

The post What is A/B Testing? An Advanced Guide + 29 Guidelines appeared first on Alex Birkett.

]]>
Growth Models https://www.alexbirkett.com/growth-models/ Tue, 17 Jul 2018 12:37:46 +0000 https://www.alexbirkett.com/?p=571 How do you model and predict growth (and growth opportunities)? There’s all this talk about “growth models” and “growth modeling” but not much talk about how to build them and get value from them. As with many things, it’s easy to see why they’re important, but hard to put them into action. This post covers ... Read more

The post Growth Models appeared first on Alex Birkett.

]]>
How do you model and predict growth (and growth opportunities)?

There’s all this talk about “growth models” and “growth modeling” but not much talk about how to build them and get value from them.

As with many things, it’s easy to see why they’re important, but hard to put them into action.

This post covers how I think about growth modeling, how several impressive companies do growth models, and how you can build your own (even if you’re just starting out).

What’s a Growth Model? An Introduction

Growth models are a representation of the underlying mechanisms, levers, and reasons for your company’s growth.

They seem to have become popular recently, notably with the rising trend of ‘growth hacking,’ ‘growth marketing,’ ‘growth,’ or whatever we’re currently calling data-driven startup marketing (“a marketer by any other name…”).

Here are a few definitions of growth models from various sources…

Segment:

“Growth models…are feedback loops that project how one cohort of users leads to the acquisition of the next cohort of users. Viewing growth with this reinforcing model simplifies a complex system with tons of moving pieces to a set of functions and assumptions.”

HackerNoon:

“Every startup needs a framework/model for growth; a focused approach for scaling its organisation and user base.”

GrowthHackers:

“The concept of a growth model is both an old and a new one. It has a lot of similarities and connections to what’s traditionally called a “business model”, but companies and teams now focus much more specifically on growth and take a much more data-driven and experimental approach.

At its core, a growth model boils down to a way to conceptualize and summarize your business in a simple equation, which allows you to think about growth in a holistic and structured way.”

Translation: it’s a new concept for an old idea about creating a simplified high level model in order to make better business decisions. You’re trying to explain “how does this company grow?” What levers exist as inputs that contribute to “growth” as an output (however you define growth)?

In growth, there tends to be two different types of models:

  • Qualitative models
  • Quantitative models

Qualitative models are going to be more descriptive in nature. They’ll be high-level descriptions of how your business is growing and plans to grow. You can draw pictures of these.


For example, it’s easy enough to analyze the growth of an ecommerce business on a qualitative level: look at your Google Analytics source/medium report. How are you currently acquiring visitors and customers?

You can say, at a high level, that you’re acquiring users from some main traffic sources, you’re converting them to subscribers, users, or customers at some rate, and some amount go on to repeat purchase (or tell their friends). You can get a pretty darn good idea of how your business is growing just from these numbers.

Even without considering attribution or looking at data, you probably have a good fingertip feel for how you’re getting customers. Is it virality? Word of mouth? Content marketing? PPC? Write all this down, and you’ve got a good idea of your primary growth levers.

Eventually, when you learn more about how you’re growing, particularly when you justify the use and implementation of a good analytics setup and can get granular data, then you can build out a quantitative model from this qualitative one.

Things can get pretty nitty gritty here, and every model is different (it really depends on your business concept). Here’s an example from Sidekick, a team that existed within HubSpot a few years ago and built features like Email Tracking and Documents that now live in the HubSpot Sales suite.


As a hypothetical example, let’s say your ecommerce business is growing primarily from content marketing. In this case, you’d itemize the different steps of that customer journey and fill in corresponding metrics.

Some important variables in the context of content marketing might be sessions to the blog (overall top of funnel traffic), how many people click over to the store, how many people add something to their cart, how many people start checkout, how many people purchase, what the average purchase size is, how often people make return purchases on average, and possibly email subscriptions as well (which you could also build a micro-model out of).

You should make your spreadsheet prettier than mine, but here’s an example of modeling out that one growth channel over a few months.

Semi-related side note: I covered, in depth, how to model content growth and analyze results in this post on content marketing analytics, as well as in my course on content marketing strategy.

The important part of a growth model is that you project possible results into the future using your current trends. That way, you can tweak different variables to see what the biggest impact levers could be.

You can predict, based on current trends, where you’ll be in 3, 6, and 12 months, and if you’re okay with that projection. If you’re not okay with the projection, you can run sensitivity analysis to see which levers may be the most effective places to put your focus.

Is the add to cart to checkout page step low? Conversion optimization can help solve that. Do you simply need to lift your organic sessions because they’re stagnating? It’s easy enough to see that in the model.

Your model is only worth building if you’re going to use it to help you make decisions.
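
Here’s a bare-bones Python version of that kind of spreadsheet – every number in it is hypothetical, and the point is just that once the levers are written down, sensitivity analysis becomes a one-line tweak:

```python
# Hypothetical monthly funnel for the content-marketing channel described above
model = {
    "blog_sessions": 50_000,
    "click_to_store_rate": 0.04,
    "add_to_cart_rate": 0.20,
    "checkout_rate": 0.60,
    "purchase_rate": 0.70,
    "average_order_value": 80.0,
    "monthly_traffic_growth": 0.05,
}

def project_revenue(m, months=6):
    """Walk the funnel month by month and return projected revenue per month."""
    sessions = m["blog_sessions"]
    revenue = []
    for _ in range(months):
        purchases = (sessions * m["click_to_store_rate"] * m["add_to_cart_rate"]
                     * m["checkout_rate"] * m["purchase_rate"])
        revenue.append(purchases * m["average_order_value"])
        sessions *= 1 + m["monthly_traffic_growth"]
    return revenue

baseline = project_revenue(model)

# Sensitivity analysis: what if CRO work lifts the checkout step by 10%?
tweaked = dict(model, checkout_rate=model["checkout_rate"] * 1.10)
print(f"Baseline 6-month revenue:  ${sum(baseline):,.0f}")
print(f"With +10% checkout rate:   ${sum(project_revenue(tweaked)):,.0f}")
```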

Growth Models Examples: 5 Ways to Model Growth

While most growth models are spoken of in general terms, they usually have similar ingredients. What acquisition channels are bringing in customers? How much is each customer worth? How long does each customer last? How many friends do they invite to your product or service?

In addition, they usually boil down to a few variables, which most of the time are further broken down into sub-variables. The main things are usually:

Acquisition Channels * Value of a User/Customer * Retention

One of the most lucid growth models I’ve seen comes from Drew Sanocki, who explains ecommerce growth in terms of three levers.

1. Drew Sanocki’s Three Levers of Ecommerce Growth

Drew Sanocki, ecommerce growth legend, teaches a concept in CXL Institute’s Ecommerce Growth Masterclass that I really like.

He explains that there are really only three growth levers we can pull to improve ecommerce growth:

  • Number of customers
  • Average order value
  • Customer retention

Here’s his full quote from the course (also, take the course):

“We get caught in tactical maneuver hell, where we look at all these tactical opportunities and get stressed out about optimizing this entire thing when it really only boils down to these three multipliers.

And the power of these three is that improving any one of them is good, but if you can improve all three, the results multiply.

For example, in a year, do you think you can increase your retention by 30%? Can you increase your average order size by 30%? Can you increase your total number of customers by 30%?

Any one of these in isolation, I think, is really doable. The trouble people get in is when they try to find the silver bullet that will double your total number of customers in a year. It’s really hard.

But if you look at only moving each of these only 30%, you’re going to more than double the business.”
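
The arithmetic behind that claim is easy to check:

```python
customers_lift = 1.30      # +30% more customers
aov_lift = 1.30            # +30% larger average order
retention_lift = 1.30      # +30% better retention

total = customers_lift * aov_lift * retention_lift
print(f"Combined growth multiple: {total:.2f}x")   # 1.3 ** 3 ≈ 2.20, i.e. more than double
```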

Now, you can further break down each of these categories, right? Customer acquisition can be broken down into several smaller factors actually:

  • Customer acquisition channel (PPC, SEO, etc.)
  • Conversion rate (of the people that land on your site, how many convert?)
  • Word of mouth/virality (how many customers bring other customers to you?)

Each of those is sort of a mini-lever that you can pull within the number of customers category. That’s the magic of building out a quantitative model, too. Once you see that, while you’re bringing tons of visitors to your site with SEO, none of them are converting, you can begin to work on conversion rate optimization. It’s a great way to prioritize high impact growth opportunities.

Similarly, you can break down average order value into different factors:

  • Discounting
  • Operational costs (reducing the cost of shipping, merchandising, becoming more efficient)
  • CPA and advertising
  • Upsells/cross-sells/recommendations

This, I suppose, could be also put under the category of conversion optimization. But in reality, with this lever you’re optimizing for increased order size, instead of optimizing for increased purchase percentage.

Finally, the last category is retention. How long do customers stick around (and, in ecommerce terms, how frequently do they purchase from you)? To this end, you can break that down into channels and metrics like:

  • Email marketing
  • Purchase frequency
  • Customer satisfaction
  • Customer lifetime value

If you’re a subscription commerce company, this step is even more apparent and even more important.

Note: you can use models like this for things outside of ecommerce as well. For example, Shanelle Mullin used this concept to create a model for content marketing growth.


As Drew mentioned in the quote above, if you can increase one of these levers by a few percentage points, that’s great. But if you can increase every one of these levers by 10%, that’s compound value. That’s where growth happens.

This is a good macro-model for ecommerce. Let’s look at a similar model for SaaS (traditionally B2B, but works for B2C as well).

2. SaaS Growth Modeling

Similar to ecommerce, you’ve really got a few growth levers for SaaS:

  • Number of customers
  • Average order value
  • Customer retention

The only real difference here is how you define customer retention, and the steps that it takes to become a customer.

Often, in B2B, you’re going to break down your “number of customers” lever into distinct pieces:

  • Traffic
  • Subscriptions
  • Marketing Qualified Leads
  • Sales Qualified Leads
  • Customers

In this model, someone may find you via a search query (“customer feedback software”), and perhaps they land on a blog post. They find that you offer an ebook that explains how to accurately measure customer satisfaction, so they download that and they become a Marketing Qualified Lead.

Next time they visit, they sign up for a webinar on customer success, and they give you their company info and phone number. Now we can refer to them as a Sales Qualified Lead.

Because your model will be higher touch, most customers will require a sales touch, so you separate your stages into two distinct lead classes to reflect that.

If your B2B model is a lower touch model, like Dropbox, or if you run a B2C application, your model may look like this:

  • Traffic
  • Freemium or free trial users
  • Upgrade to paid customers

In this scenario, a customer may hear about Headspace on a podcast advertisement, check out the website, and return later to give it a try. They sign up for the free app, use it for a couple of days, then never return. They dropped off before upgrading to a paid plan.

For all intents and purposes, these B2B growth models have pretty much the same levers as ecommerce models; you just define stages differently and break steps down into micro-models differently (though how you break these down also depends on your acquisition strategy).


One of the better worksheets I’ve seen was built out by Candace Ohm, data scientist and improv comedy legend. She offers an Excel spreadsheet (here) where you can learn how funnel metrics, customer churn, user demand, and virality affect your growth curve. It’s worth playing around with.

We’ve got the high level models down now, so let’s dive into a few micro models that we can use to improve given channels or parts of the funnel, like word of mouth/referral and conversion optimization.

3. Optimizing Referral Marketing: A Model

Referral is usually a growth lever, regardless of business type or size. People telling other people about your business is one of the best ways to grow. Most smart businesses try to incentivize this in some way (in addition to building something worth talking about).

Though a lot of word of mouth is frankly un-trackable (if I tell a friend how awesome MeUndies is, they won’t know how to attribute that), you can track and optimize a good portion of referral traffic, especially if it’s incentivized (i.e. you give a discount or tracking code).

Like any other model, you’ll want to break it out step-by-step:

  • How many people are offered a referral link?
  • How many people accept the link?
  • How many people send the link to other people?
  • How many people do they send the referral link to?
  • How many of those people open the message?
  • How many of them click through to the website?
  • How many buy something?

Then it loops back, because you can offer that person a referral code as well. This is essentially known as a viral loop, and you measure its effectiveness using a “viral coefficient.”

We can also vastly simplify this model into a basic referral loop.


Depending on your technology stack, you may have to pull this data and build the model by yourself. But you might also just be able to get the reporting from your tool, especially if you use something like Referralcandy.

4. Conversion Optimization Modeling

Conversion optimization should be an inevitable part of your model, because no matter what business you’re in, you’re going to bring some amount of visitors to the website, and a certain percentage of them are not going to buy or become users.

If you can increase the percentage of visitors that convert, you get a compound effect over time (and increasing conversion rate increases the effectiveness of your other channels, which lets you bid more on ads, spend more on content, etc.).

Image Source

Now again, if you’re in ecommerce, a lot of this is simplified due to simple prototypicality. In other words, almost all ecommerce sites follow a very similar pathway to conversion. Everyone has a cart, a checkout flow, a thank page after conversion, etc.

The simplest growth model you can construct for ecommerce CRO, then, is a sort of funnel. You can start at the broadest level and go all the way to the home run:
Homepage -> product page -> add to cart -> checkout -> purchase conversion.

Every step of this funnel you can improve is a step in the right direction, though the end conversion is of course the most important (as well as the order size). But if you can systematically improve your funnel, all other marketing efforts will be improved by extension.

Everyone has their own approach to auditing and modeling conversion opportunities, as well.

I reached out to Luiz Centenaro, Optimization Manager at Optimizely & an eCommerce consultant, to see how he approaches CRO growth models, and he explains that he sets up a baseline with A/A testing and Google Analytics analysis:

“I typically approach an eCommerce site by running an A/A test on every touchpoint.

A/A test the Homepage, Category Pages, Cart Pages, Product Pages and Checkout and track clicks on everything. If you have a good marketing team this can all be accomplished within Google Analytics or Google Tag Manager but you can also do this with your A/B testing platform such as a Optimizely.

After you run your A/A test you’ll have a baseline with revenue per visitor and conversion rate for every page and you can segment to see the difference between mobile and desktop too.

Simultaneously while the A/A test is running you can research the demographics of the visitors using Google Analytics. One of my favorite reports is demographics by age.

Age and demographics should be taken into account when crafting hypothesis to A/B test. You won’t market to a 65 year old the same way you market to an 18 year old and they show significantly different user behaviors.”

It’s not much more difficult in other types of websites either, so long as you can define the discrete stages of your customer journey. With a B2B SaaS company, that may look something like: Homepage -> Pricing Page -> Get Started Page -> Signup Flow (several steps?) -> New User Created -> Activation Event -> Upgrade to Paid

Conversion optimization can help optimize the steps on a site that lead to a visitor becoming a user, and possibly even a user becoming an activated or engaged user.

5. Virality: A Micro-Model

Viral growth is one of the better developed channels for building out models. While some viral mechanisms may look different (Apple iPods, Hotmail, Dropbox, and Bird scooters are all examples of virality), they do include similar variables that allow us to model the system:

  • The Viral Coefficient (K)
  • Viral Cycle Time

The Viral Coefficient is just a name for the number of new users a current user brings in through virality. The formula is stupid simple: K = i * conv%, where i is the number of invites sent and conv% is the conversion rate of those invites.
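
Here’s a tiny sketch of how that coefficient compounds (or fizzles out) over successive cycles – the invite count and conversion rate are made up:

```python
def viral_growth(initial_users, invites_per_user, invite_conversion_rate, cycles=10):
    """Users added each viral cycle, given K = invites_per_user * invite_conversion_rate."""
    k = invites_per_user * invite_conversion_rate
    total, new_users, history = initial_users, initial_users, []
    for _ in range(cycles):
        new_users = new_users * k      # each new cohort invites the next one
        total += new_users
        history.append(round(total))
    return k, history

k, history = viral_growth(initial_users=1_000, invites_per_user=3, invite_conversion_rate=0.25)
print(f"K = {k:.2f}")          # K < 1: each cohort is smaller than the last, growth fizzles out
print(history)                 # try invite_conversion_rate=0.40 (K = 1.2) and watch it compound
```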

That’s really all you need to model out the first part. ForEntrepreneurs even offers a free spreadsheet to help you build that out (here):


The other part of the equation is how fast you can acquire new users through viral loops. Clearly, the faster you can go through the viral loop, the better.

As with the Viral Coefficient, your Cycle Time includes several sub-variables as well. David Skok draws out an example here.


To the extent you can shorten the time length between any of those steps, you can increase your growth rate.

It’s important to model the time factors as well (which is included in the ForEntrepreneurs spreadsheet. Really, if you’re interested in viral growth, I’d just read that post, as this section is clearly just a summarized version of it):


With this, and every other model, the powerful part is that you can tweak different variables to see what happens to the output. Increase or decrease conversion rate of invites by 5%. What happens? Increase the number of invites sent. Does that move the needle?

This can help you make important decisions on what drivers to focus on when optimizing viral loops.

Everyone wants virality.

As David Skok put it, the perfect business model is “Viral customer acquisition with good monetization. However viral growth turns out to be an elusive goal, and only a very small number of companies actually achieve true viral growth.”

In my experience, this area has been the most over-exposed to bad content publishing and bad thought leadership. Most of the core of viral growth is a noteworthy product. It’s hard to make a bad or a commodity product viral (though not impossible, just probably not worthwhile).

Limitation With Models and What to Expect

As the statistician George E.P. Box famously said, “All models are wrong; some models are useful.”

No matter what model you use to represent and predict growth, it won’t be completely accurate once the rubber meets the road, once your plan meets the messiness of reality. As the great philosopher Mike Tyson once said, “Everyone has a plan until they get punched in the mouth.”

This echoes the common wisdom, probably first said by Helmuth von Moltke the Elder: “No plan survives contact with the enemy.”

That’s all to say: be fluid, and update your model as you get new information and insight.

When you look at, say, an inbound marketing funnel, you don’t expect it to exactly and linearly reflect reality, do you? Funnels are growth models; they’re ways to simplify the concept of how you’re growing, provide directions for measurement, and allude to opportunities for effort and optimization.


These types of things are most useful in planning; the execution of those plans is still to be determined. I’ve built out complex models only to find once I hit the ground that I didn’t actually have the resources to carry out some of my top prioritized plans. Whoops.

So, don’t expect these models to be perfect. Expect them to be useful and actionable.

Conclusion

Growth models give you an imperfect, yet helpful model to show you how you’re growing, what kind of growth numbers you can expect in the future, and some possible opportunities to impact that growth in a positive direction.

In most businesses, there are really three levers you can pull, at least at a high level:

  • Number of total customers
  • Average customer/transaction value
  • Retention/lifetime value

Each of these levers can be broken down into, really, an unlimited array of possible channels and tactics. That’s where things get complicated (and there’s always a tradeoff between cost and complexity of modeling). A good model is both useful and accurate, to some degree of each, but no model can be both perfectly accurate and comprehensible/usable.

Build out models to give you a better understanding of your growth, and also a better way to communicate that with others. Don’t expect a perfect vision of reality, but expect them to help you prioritize and find opportunities you otherwise may not have.

The post Growth Models appeared first on Alex Birkett.

]]>
How to Capture Email Leads (using Journalism’s 5 W’s technique) https://www.alexbirkett.com/capture-email-leads/ Wed, 30 May 2018 00:54:06 +0000 https://www.alexbirkett.com/?p=492 Capturing email leads is one of the primary goals of most content marketing programs. The money’s in the list, you get a million dollars back for every dollar you invest in email marketing, yada yada yada all the cherry picked statistics. Anyway, you know it’s important or you wouldn’t have found your way here. The ... Read more

The post How to Capture Email Leads (using Journalism’s 5 W’s technique) appeared first on Alex Birkett.

]]>
Capturing email leads is one of the primary goals of most content marketing programs. The money’s in the list, you get a million dollars back for every dollar you invest in email marketing, yada yada yada all the cherry picked statistics.

Anyway, you know it’s important or you wouldn’t have found your way here.

The way most people go about email capture is pretty rudimentary: throw up a popup with some seemingly tempting offer, and let the chips fall where they may.

Upon first glance, email capture seems straightforward, with little wiggle room. However, there are really endless possibilities for experimentation and creativity. Actually, I believe there are so many different ways to go about email capture that it’s a bit overwhelming.

So, to help myself set up some parameters when auditing or optimizing an email capture program, I walk through a set of questions: who, what, when, where, why, and how.

This is usually referred to as the Five Ws (or the Five Ws and How, 5W1H, or Six Ws). It’s commonly applied in journalism, though also in other inquisitive endeavors such as research and police investigations. I believe it can help you to set up an email collection program, but it can also help you audit an existing one and improve it. Without a guiding framework, it’s easy to get lost in millions of tactics, or worse, never know where to get started at all.

All that follows can be applied across industries – SaaS, ecommerce, personal blogs, etc. The specifics may be slightly different, but the game is the same. Emails are valuable for everyone, after all.

Who: Defining Your Target Personas and How to Reach Them

Sales people talk to customers. Customer Success talks to customers. But marketers don’t usually talk to customers. No bueno.

This is a shortcoming for many reasons, but for a concise argument, it’s because “who” you want to target helps you answer the rest of these questions. It all starts with the customer.

This “who?” question branches off in two directions:

  • What is our target audience like (how can we model our target customer)?
  • To whom will we actually show an email capture form?

The first question is a broader one; it affects every part of your marketing.

I know there’s a lot of bashing around personas, but it’s mostly because marketers have ruined them. The modern “persona” is almost a parody of itself. They’re created with no data, they include irrelevant or useless information, and they’re given silly names like “Meticulous Melvin.”

They are to marketing what the cheesy stock photo is to web design: a lazy placeholder for something that should actually have value.

This is “aspirational Alex,” and his favorite band is Blink 182.

All that aside, personas hold lots of value if you do them right. They’re a representative model – not 100% accurate, but an actionable approximation – of your target customer that you can use for various product and marketing decisions.

This is no place to dive into what I consider to be a good user persona creation process (summarized: use real data instead of made up stuff). But you should read this CXL article on the topic.

In general, before you create email capture offers, do some research on your audience and find out what they may actually respond to. It will save you tons of time and frustrations.

The second branch here is more local: to whom will you target your email capture form? You can choose to target offers based on:

  • Traffic source
  • Referral source
  • URL/page targeting
  • Number of pages views
  • New vs. return visitors
  • Mobile vs. desktop.

If you’re a Google Analytics user, this information could be found in both the “Audience” and “Acquisition” sections of your dashboard.

Now, the most basic implementation of this specific audience targeting is “everyone.” This is probably unsexy to personalization advocates, but oh well – you save on a lot of complexity by just throwing a static form on your blog asking people to subscribe. Most sites have some sort of static subscription mechanism like that.

When you do start running some A/B tests on your email capture forms – or at least looking into your historical content marketing analytics data – you may find that some audience segments are responding better or worse to different offers. This is when you might want to look into audience targeting.

Most popup tools you’ll use offer this sort of thing out of the box, at least with basic rules like device targeting (mobile vs. desktop) and URL inclusions or exclusions.

You can also build these rules out in a tag management platform like Google Tag Manager, if that’s what you’re using to fire your lead capture tools.

You can get pretty complicated with this stuff, but I always try to keep it simple, as complexity carries a management cost. On my own site, I basically just target all desktop users on their first visit (and their next visit afterwards, after a period of 2 weeks), after scrolling 50%.

I also have a few static forms on my site that everyone sees. There’s no special targeting.

It doesn’t need to be this simple though. For instance, when I was at CXL, we aligned several offers with different content categories, depending on the problem it solved (enterprise CRO program building, A/B testing ebook, CRO mastery guide, etc.).

HubSpot has a million content offers depending on the content topic, the blog where it lives, the desired conversion pathway, etc.

While I said your “who” is your starting point, in reality, it’s something that you’ll definitely learn more about after you create email capture forms and get data rolling in. No matter the pretty picture we paint with our personas, reality usually tells us the situation is different. As Mike Tyson once said, “everyone has a plan until they get punched in the mouth.”

In other words, analyze the data once you have it; it can help you learn which offers convert better or worse on which pages and with which audiences.

This stuff is quite easy to model out at a basic level, and you can get an excellent look at user intent and how well your offer is aligned. Just line up all the associated impressions and conversion rates, and see which ones are lagging.

As I showed in my article on content optimization, I like to model out projects based on the assumption that we could possibly bring every one of these up to the average conversion rate.

To do that, just calculate the average conversion rate (=AVERAGE(E:E)) and put that in a new column.

Multiply the impression count in column C by the average conversion rate, and you have a feasible goal that is hypothetically attainable. You can then put some conditional formatting on them to isolate only the potential uplifts greater than zero (i.e. posts that have a below average conversion rate and could therefore improve if you brought them up to the average).

Then you can project out which email capture locations would bring the most value if you focused on optimizing them.
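
If spreadsheets aren’t your thing, here’s the same heuristic as a small pandas sketch – the post names and numbers are entirely made up:

```python
import pandas as pd

# Hypothetical per-post email capture data
posts = pd.DataFrame({
    "post": ["cbd-safety", "cbd-dosing", "cbd-benefits", "cbd-for-pets"],
    "impressions": [12_000, 8_000, 20_000, 5_000],
    "signups": [240, 80, 220, 110],
})

posts["conversion_rate"] = posts["signups"] / posts["impressions"]
average_cr = posts["conversion_rate"].mean()

# Potential uplift if each below-average post were brought up to the average
posts["potential_uplift"] = (
    (average_cr - posts["conversion_rate"]) * posts["impressions"]
).clip(lower=0).round()

print(posts.sort_values("potential_uplift", ascending=False))
```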

Note: when you do a report like this, it doesn’t necessarily mean that an offer or targeting is under-optimized for a page; it may just mean the page and the traffic it brings has low or no intent to sign up. I find this is a worthy heuristic, however, to explore optimization opportunities.

Sometimes, I lament, you can tweak and optimize a page forever for marginal or zero gains, and you’re better off simply moving on to “warmer” traffic.

You can also spin through a few custom reports in Google Analytics to weigh out which segments seem to be responding better or worse. A super common and easy one to start with is targeting by device. It’s probably the case that you should be targeting mobile and desktop users differently, especially with email opt-ins.

Intuitively, ask yourself, what audience or acquisition segments would make a difference or matter in terms of what offer they see or when they see it? Ask those business questions, and then dig into your digital analytics data to see if there truly are any meaningful discrepancies between segments. Then use that to form a hypothesis driven experiment, where you can maybe eke out some additional email captures.

No matter what route you choose as to whom you target, just make sure it’s something you truly consider before throwing up some whack, generic popups. This often goes overlooked, but user intent and who gets your offer is just as important as what your offer is and when you deliver it. In fact, it’s difficult to craft a good offer if you don’t understand who you’re targeting.

It’s hard to perfect your messaging without knowing your audience (Image Source)

What: Aligning the Offer with your Audience

I’m wading into murky waters with this one, because it’s still a problem I’m working on, struggling with, and chipping away at bit by bit.

You know your ideal audience. You have defined parameters for your audience targeting. You have some idea of what you do with email subscribers and/or leads as soon as they opt-in.

Now what the hell do you offer them in the first place?

Let’s back up a bit first, and define the basic stuff. What I’m really talking about here is sometimes called a “lead magnet.”

A lead magnet is an exchange of value for information. It’s the promise of an ebook, an email course, a quiz result, a discount, or just regular content updates, in exchange for an email address and potentially other personal information.

In this step, then, we’re talking about maybe the meatiest part of lead generation: your side of the bargain. What are you offering this anonymous stranger in exchange for their kind offering of their email address?

As it turns out, this is a really hard problem with multiple dimensions:

  • What format should your offer be (PDF, video, web content, other)?
  • Do some formats convert better than others? Do some contribute to greater down funnel conversion rates (product signup, purchases, upgrades, etc)?
  • How do you determine which audiences get which offers (if you’re doing different offers at all)?
  • What formats produce the best qualified leads?

There are a million questions here, and seemingly no universal answers.

The best thing I can do here is give two insights that I’ve learned along the way:

  • Learn, to the best of your abilities, about the user’s intent and match your offer the best you can.
  • Continually update your beliefs and offer alignment based on what the data tells you.

I recognize that can sound vague taken solely as words, but I’ll try to give you an example or two to illustrate.

Learning about user intent

How are users coming to your page? If you don’t know that, head over to your Google Analytics account and go to Behavior > Site Content > Landing Pages, and then set up a secondary dimension of “Source / Medium.” This, at a high level, will show you how people are arriving at your site.

Let’s pretend you’re setting up a content offer / content upgrade for a blog post of yours that gets a lot of traffic. Most of the traffic comes from organic search. What to do? Look at the keywords that it’s ranking for (that are presumably bringing visitors in).

In our hypothetical case, we’ve got a page addressing the safety concerns of CBD oil. Here are the keywords it’s ranking for:

Now, candidly, we may not have needed to go through this process to get to the user intent, but it’s such a fast process that I figure, “why not?” And you might learn something new, like finding out a ton of traffic may be coming from a term you didn’t even know you ranked for.

In any case, knowing that people come to the site searching “is cbd safe,” and “is cbd oil safe,” we can cater our offer with this knowledge in mind:

  • Maybe we offer a beginner’s guide to CBD oil dosing safely
  • Maybe we offer a 10% discount off their first CBD order (if they order today!)
  • Maybe we offer them a fact sheet on the health benefits of CBD
  • Maybe offer them a quiz or survey that rates their CBD knowledge

There’s no right answer here; it’s something you have to consider in each individual case, stake out a rational strategy to get started, and then test and optimize with time.

Something of interest with all of this talk about “aligning for user intent,” is that there may be a good programmatic way to do this when you’re doing keyword research. IPullRank wrote an excellent post covering it (it’s quite technical). I’ve fought my way through about half of the process, so I can’t speak completely to its efficacy yet, but it seems quite cool.

If you don’t want to learn R and Python, though, you can do what most people do when aligning content and keywords to user intent: just eyeball it and make a gut decision. It’s usually going to be pretty close to accurate.

Or, let your creative juices flow. This is actually the part of the process where you get to have some fun, and most marketers don’t take advantage of it.

The world probably doesn’t crave another ultimate guide ebook; do something weird and attention grabbing. Offer a free vape pen with their first CBD oil if they order in the next 25 minutes (and if they tell you their favorite 90s music video). I don’t know, I’m riffing, but I think it’s fair to say that you’re not going to break down any barriers by swimming in the red ocean of the millions of ebooks and webinars out there.

Look at the data and adapt based on what you learn

Here’s the important part: just because you’ve looked at user intent and decided on an offer doesn’t mean it needs to stay like that forever. Look at the data and adapt.

One thing I’ve worked on at HubSpot is “historical blog optimization.” We take a bunch of high organic traffic blog posts, reconsider their conversion paths, and try out new CTAs to see if we can get any wins. The interesting thing? The conversion rates are usually super variable, even though all the posts, in my mind, seem to have similar intent.

I wish I could show you actual data, but I’ll just make up data that looks really similar to show you what I mean:

This may look trivial – conversion rates vary, obviously – but it’s more surprising when you consider that, at least in my mind, all of these posts had super similar intent. They targeted similar keywords, were written about similar topics, and were published on the same company blog. I put in the same CTA that led to the same landing page, but got such different results.

The average conversion rate that your tool will likely spit out on reports, in this case, doesn’t even matter that much. One post brought in a 0.06% conversion rate, while two were in the 2.8% range. That’s so different!

Your results may look different, but I highly encourage you to pull these numbers, if for no other reason than to learn that the same offer and same conversion path can have much different results depending on the page. From here, you can move on and attempt to optimize individual pages and offers (again, my post on content optimization).

A quick example of an action I would take on this information is the following: let’s say I put a bunch of product CTAs on a bunch of blog posts. They lead you right to a product page, where you can sign up free. It’s higher intent than a related ebook, but lower intent than requesting a sales demo.

If I then go back and see that some posts are converting very well, I leave those alone for the time being.

But if I see that some posts are converting super low, say two standard deviations lower than the mean, I’ll consider swapping the CTA out for something lower effort – for example, an ebook on “email productivity.” Perhaps I’ll try to go for a simple email subscription instead of an actual product sign up or even an ebook signup.
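
For what it’s worth, that “two standard deviations below the mean” check is easy to script. Here’s a rough sketch with made-up post names and conversion rates (pandas assumed):

import pandas as pd

# Hypothetical conversion rates for the same product CTA across posts.
posts = pd.DataFrame({
    "post": ["templates-a", "templates-b", "templates-c", "templates-d",
             "templates-e", "templates-f", "templates-g"],
    "conversion_rate": [0.028, 0.027, 0.025, 0.026, 0.024, 0.029, 0.0006],
})

mean_cr = posts["conversion_rate"].mean()
std_cr = posts["conversion_rate"].std()

# Posts converting more than two standard deviations below the mean are
# candidates for a lower-effort offer (ebook or plain email subscription).
posts["swap_cta_candidate"] = posts["conversion_rate"] < (mean_cr - 2 * std_cr)
print(posts)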

I’ll usually play with offers on only the highest traffic posts, since that’s where you’ll see impact.

Sometimes, you’ll find out, the user intent on the page is just too damn low or irrelevant to craft a compelling offer. For example, if you wrote a post on “59 ways to increase your Twitter followers,” but you sell car insurance, there may not be a ton of ways to capture those emails.

If that’s the case, I find, it’s best to move forward on net new content and offer creation. Also, reconsider your content strategy and stop playing to vanity metrics like page views.

When: Beyond Simple Exit Intent Popups

When does your offer fire? This is where we talk about one of my favorite topics: behavioral targeting.

Behavioral targeting is just what it sounds like: delivering a given experience based on behavioral indicators (as opposed to demographic, psychographic, etc.).

It’s a broad term, but through lots of good work by companies like BounceX, behavioral targeting has mainly become associated with conversion optimization.

Even more specifically, it’s usually known now as mouse and scrolling behavior on a website.

The most common example of behavioral targeting is probably the exit intent popup. I’m going to go to GrowthHackers.com and click on a random post, and it’s almost certainly going to have an exit intent popup. Watch this:

First try! It’s super common, at least in the marketing space.

Another common one: scroll depth triggered popups. That’s how I’ve got mine set up. It triggers when you’ve scrolled 50% of the way down a page:

Pro tip: set up scroll depth tracking with Google Tag Manager. It’s super easy to do and you can then see how far most people are scrolling down on a given blog post. Your report will look something like this:

Only 324 out of 1081 visitors hit 50% on this page, meaning perhaps I should consider a different trigger point in order to reach more of my audience.

Another common one: a popup upon first arrival. This is especially true in ecommerce, where you’re usually trying to quickly capture the attention of casual shoppers. Many stores use a discount offer to do so. This is an extreme example, but not uncommon:

Sometimes, your “when” doesn’t actually need to be explicit, active behavior like scrolling or moving your mouse to exit the page. Sometimes it can just be based on time on page. Often, marketers trigger opt-ins with some combination of all of these:

  • After 10 seconds on site
  • After scrolling 50%
  • Excluding visitors from Hacker News
  • Excluding mobile visitors

…And on and on. In my mind, it’s not so much the spread of your impressions but the accuracy of your targeting. The fewer uninterested or irrelevant people you can target, the better, simply because it’s a better user experience for everyone involved. Also, because you don’t want a bunch of unqualified and uninterested people on your email list. Read this CXL post on unsubscribing 83,000 emails to learn why. More isn’t always better.

The best timing trigger, in my mind, is based on a behavioral signal that implies strong intent. For instance, if a user clicks on a link that says “download our free spreadsheet template,” there’s a lot more intent than simply throwing an exit popup with the same offer for everyone.

Image Source

People usually call that a “content upgrade,” but it’s really just a prequalified click that triggers your form.

Here’s a glimpse of one form of behavioral targeting I’m working on right now…

At HubSpot, we have tons and tons of “templates” blog posts. They cover topics like “sales email templates,” “follow up email templates after a networking event,” and “follow up email after job interview.” They get tons of traffic, and there’s lots of room to test out different conversion pathways and timing triggers.

Anyway, I’ve set up a test where, when a user copies text to their clipboard from a given email template, they get a message that tells them the message was successfully copied (kind of delightful, right?), and that we have a tool for templates, tracking, and automating emails (sell it!). It looks like this in action:

I think there’s a lot of creativity regarding the “when” of a content offer. I gave a few common examples and a somewhat more creative one here, but this is one area where marketers are continuously innovating. I think there’s a lot of room to optimize here, especially beyond the common “exit intent” and hit-you-right-away email popups.

Where: Placement and Real Estate

Where you place your email capture form matters a bunch, too. It’s hard to speak for specific websites, but Andrew Anderson, one of the smartest CRO people I know, had this to say about patterns he’s seen regarding split tests:

“Real Estate usually has a much higher beta and longer half life than copy changes. Spatial changes tend to be better than contextual changes for long term monetary value. Even in those cases you better be designing your efforts to see if that is true in this case and going forward.”

In other words, the space that website elements occupy on a page really affects user behavior and conversion.

Before you run off testing a million and one different locations for your email capture form, though, remember this CRO heuristic: meeting customer expectations is a simple best practice. Do what people expect and you’re most of the way there.

Users expect to navigate a site in a certain way, with the broadest categories branching out into specific filtering (e.g. Home > Category > Subcategory > Product). They expect clear, action driven CTAs, that accurately explain the next step. They expect lead generation forms to be in certain places on your site.

A few common locations include…

On the blog sidebar:

Bottom of the page:

On a dedicated “subscribe page”:

On a dedicated landing page (above the fold):

Lightbox popup:

Scroll box popup (or anything on the bottom right or left side):

My advice: use the expected placements before you start experimenting with creative real estate changes. People generally expect forms to be in those places, so don’t reinvent the wheel, at least right off the bat.

At CXL, we used to have quite a few badass and creative email capture forms, thanks mostly to working with BounceX, who crushes it with creative lead generation methods. They’re not up live on CXL’s site anymore, so I can’t get a screenshot, but we had:

  • A hover over lead capture trigger. When someone hovered their mouse on the header image, it turned into a CTA.
  • A left sidebar pull away that took you to the left side of the screen (hard to describe without an image, but it was cool and dynamic).
  • A callout quote box in the middle of articles with Peep pitching an ebook offer we had.

However you do it, please try to refrain from doing that full screen takeover, welcome mat bullshit. It’s bad UX.

Why: The Most Important and Underlooked Question

In retrospect, I should have started the article with this one, but editing and formatting is for chumps.

Your “who” is definitely important, but you should probably define this question first: Why are you collecting emails?

What do you plan on doing with the information? Have you considered what your “lead nurturing” experience will be? (What a terrible phrase “lead nurturing” is.)

Essentially, drawing out a customer journey map, starting at A and ending with Z, helps you formulate a smart and gentle plan to capture emails and information in a way that balances succeeding at your business goals with providing a good user experience. Considering your end goals helps you craft your content, the content offer, how many email capture forms you really need, and how much information you ask for within your forms.

To that end, the whole “why” thing is a pretty big topic. It usually leads me down a rabbit hole of digging into core marketing systems and email automation flows, instead of simply the email capture mechanisms. I recommend starting out reading some materials on customer journey mapping and following it up with some good content on lead nurturing (there’s gotta be a better word for that?).

How: Form Optimization (and Moving Beyond Basic Forms)

So let’s say you’ve reached a point where you have a pretty pristine idea of whom you’re targeting, with which offer, when, and where on the page. You’ve also got a pretty sweet email automation workflow that’s converting leads into customers like gangbusters.

Another angle you can use to improve your email capture results is looking at “how” you’re asking.

  • How have you designed your lead capture form?
  • How many form fields are you using?
  • Can you optimize the sign up flow?
  • Can you get rid of traditional web forms altogether?

There’s a level of creativity that expands beyond simple lead generation forms (give me your email, I’ll give you an ebook on the thank you page, etc.). Most of it starts with implementing some good form analytics solution to know your current form performance. I like using a dedicated tool, something like Formisimo. You can, however, also use Google Tag Manager to set up some pretty sweet form tracking implementations.

In any case, you’ll want to know when and where people are dropping off of your forms. This allows you to make intelligent decisions in culling form fields or simply reordering them to make the process smoother.

Image Source

Form optimization is a massive subject. Should you use single or multi step forms? Single or multi column? How much information is too much to ask for?

Maybe it’s a cop out, but I’m just going to link to my favorite resources we published at CXL. No need to reinvent the wheel here:

Another topic is the lead form itself. Nowadays, many companies are testing out bots or other types of interactive/conversational forms.

There are a bunch of tools you can use to help you design a chatbot/conversational form yourself (there’s probably a difference between the two, technically, but right now they seem to be used interchangeably). Here’s one such solution:

I haven’t tested conversational forms or chatbots much myself, but I can say that, generally speaking, I mostly hate chatbots. I’ve seen a few good ones, but most of the time I’d much rather just use a simple form. Or if I want to talk to people, live chat. Chatbots seem to be a weird hybrid model that is just unsatisfactory. Maybe it’s just been bad execution, but they all seem to be written with cutesy, chatty copy that I find annoying and patronizing. I don’t know, feel free to change my mind.

Something I do like, though: interactive quizzes and surveys. I think those are completely underrated as a lead generation mechanism. Think of Buzzfeed quizzes, but ones that ask for email addresses in exchange for the results. This Greenpeace one is a good example:

Image Source

There are also voice forms and lead generation to think about. I don’t know enough about that to write about it, but it’s a thing. Read more here if interested.

Point is, the world is iterating on the basic idea of a form. I’m sure there will continue to be inspiring and interesting ways to collect visitor information in the future. If you’re just getting started, there are a lot of basics to cover first. But if you’re past that point and want to start thinking about “how” you’re actually capturing information, there’s a fast moving world of innovations to look into.

Conclusion

Email capture is a huge topic. Frankly, this guide is probably too long as is, but I haven’t even covered lead magnet creation and the various types you can create, and I haven’t even touched upon things like GDPR, which do matter for information collection now.

That said, this guide should give you a core understanding of how to build out an email capture strategy, and it should give you more than enough ammo to know how to begin optimizing an existing set. Just ask yourself the questions every good journalist or police investigator (or Ludacris) does:

  • Who?
  • What?
  • When?
  • Where?
  • Why?
  • How?

If I’ve missed anything crucial, be sure to call me out in the comments! Hope this helps you capture some email leads & subscribers.

A no-nonsense email outreach manual https://www.alexbirkett.com/email-outreach/

I think I’m pretty good at email outreach.

I’ve done a bunch of it, for a variety of reasons from link building to strategic partnerships to simply wanting to meet up for caffeinated beverages, and have had pretty good success in general.

One time, I even had an unexpected case study written up about my backlink outreach. Kinda cool!

I also get a ton of email outreach because I’ve held the editorial keys at CXL and because I’m at HubSpot (and you just sorta get lots of sales pitches when you’re at HubSpot and people rely on firmographic data to sell things). Like you, I’ve seen bad outreach examples as well as good ones.

Cold emailing is something basically everyone has to do (or would benefit from being better at); not everyone has already opted into your email list, so some outbound is warranted.

Quick Preface: The Problem of Critiquing Outreach (or Critiquing Anything without Inside Knowledge)

I want to say that I need to tread lightly here, though: nothing bothers me more than critique with no knowledge of internal data or skin in the game. After all, “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena.”

Image Source

In other words, it’s easy to call someone out for “bad outreach,” but they may be crushing it and it just didn’t resonate with you (for whatever reason – maybe you were just having a bad day). Things that to you seem like the biggest mistakes could be working just fine!

I’m also trying to balance two extremes: staunch moral principles and politeness with business results. I think it can be done, that you don’t need to choose one or the other. I don’t think you need to be a horrible, annoying spammer to get business results. In fact, I think it’s better in the long run if you’re not.

If you’re in the “but the data says it works” crowd for email outreach, and you use trickery like “RE:” in the headline, or follow up 8 times for a backlink, try this thought experiment…

What if everyone in the world were doing what you are?

If you wouldn’t like how the world was in that scenario, all I’m doing is imploring you to take a small step back and reconsider a more strategic approach. Like, next time, don’t just pump a bunch of AI content through an email outreach tool. Marketers tend to ruin everything, but we don’t have to operate that way.

Another disclaimer: I’ve probably sent some dumb outreach emails in the past. We’re all human, and hopefully all improving. If I’ve sent you a dumb outreach email, feel free to tell me that, but I’m likely aware and regretful already.

So I’ll try to go light on what I personally find grating and draw more attention to the underlying principles behind how I write outreach messages to both get results and remain capable of sleeping at night. I’ll focus on the man in the arena and mostly avoid my personal distaste (mostly!).

First, what’s in it for me?

First: value.

If you’re not adding value (appealing to self-interest), you’re not going to get a good response or success rate, and you’re likely to burn bridges rather than build long term beneficial relationships.

The rest of the rules in my article rely on this implicit assumption: you’re giving some sort of value.

This is maybe a nebulous concept. What’s value? It could be many different things: friendship and favors, the draw of future reciprocity, cold hard cash, stimulating conversation, an introduction, etc.

That’s for you to figure out. But that’s rule #1 and can’t be forgotten. You can’t just email someone and ask them to do something, without giving any value yourself…

What’s in it for me?

When people talk about personalized emails, it’s not just about having a first name or company name token or mentioning a blog post they wrote. It’s about catering your outreach efforts to what your recipients will value.

Think about your target audience and customize it all – subject line, follow up email sequence, everything – and you’ll avoid the spam folder.

5 Rules for Email Outreach

Here ya go: five rules, all from a post-hoc analysis of the underlying values behind my outreach email writing:

It doesn’t matter what purpose you’re doing email outreach for: sales development, link building, product launch, partnerships – I’ve done them all. It usually comes down to the same few principles.

You can apply these whether or not your email outreach is cold. If your email outreach is not cold (i.e. you know the person), you should definitely follow this. The stakes are higher. You don’t want to be a bad communicator with someone you already have a relationship with.

Cold emails have a lower ‘cost’ because there’s no existing bridge you’re burning with shit communication (though you’re altering future opportunities and you may not even know it).

Also, at the core of all of this is that you’re providing some sort of value with your outreach. It need not be something so explicit as money or respect, it could be something as subtle as friendship or a favor down the line. But no outreach email will work if you’re just a “me-me-me” value leech.

1. Talk Like a Human Talking to a Human

For some reason, when we put pen to paper and send an email, we forget the normal way to talk. This is true even in peer-to-peer or colleague-to-colleague communication, but it’s even more true if you’re emailing a stranger or emailing someone to ask for a favor.

For some reason, in these situations, we get weird.

All of this is weird.

Just read your letter out loud before you send it. Ask yourself, “do I sound like a normal person?”

Or, “If I got this email, would I like to respond to me?”

If the answer is no, crumple it up and try again.

(Most of these tips come down to rational empathy and realizing that if you were on the receiving end you wouldn’t like your own email.)

This echoes the best writing advice of all time (by Paul Graham): write like you talk.

Obviously, in some contexts, you need to be more formal than in others. If you’re asking to grab a coffee, it’s probably not necessary to be super formal:

If you’re emailing an executive with an ask that requires lending their time (e.g. if you want a quote for an article), then be a bit more polite and buttoned up.

In general, “don’t miss the forest for the trees.” When I was working on marketing for Service Hub, we did a lot of SEO, which included link building.

We could have gone full aggressive link building mode on this (as many do), but part of our goals involved building relationships with influencers. Spamming content marketers asking for links doesn’t build relationships. It’s just annoying. With this in mind, we kept our messaging relatively benign, yet straightforward.

We also wanted to invoke reciprocity, and not solely ask for a link. Because we were additionally seeking guest writers for our Service blog, we would lead with an email like this:

Straightforward and normal, right? You may not give the link, but it’s an honest attempt and not too grating.

I used to do that in the early, early days of working at CXL, as well. We built a whole content promotion process, and I genuinely wanted to find guest writers in addition to whatever social shares or links I could build, so I’d send emails like this:

With the Service Hub stuff, if there wasn’t a good opportunity for a backlink ask but the publication was still desirable in terms of relevance and domain authority, we’d work to get a guest post secured on the site.

Obviously, it takes more work to create and place guest posts, but it’s worth it if the publication is aligned and authoritative. We found a way to create additional value with this as well by working with internal HubSpot experts who wanted to get their thoughts out on a given subject. For example, here’s a guest post that Blake Toder wrote for the Usabilla blog:

In these cases, the link is cool, but it’s also great spreading HubSpot thought leadership on customer success related topics. We weren’t myopic or transactional in our focus.

Also, leave the crazy at home. Where one side of awful is represented by robotic tone, the other side is represented by mania or weirdness. Both sides are equally counterproductive to your efforts and you should avoid them.

Don’t be weeeeeird

I’ve heard a version of this called “chatty copy,” and I can’t really stand it. Again, as with any qualitative advice in this article, take it with a grain of salt because I’m just one angry person who’s worked in marketing for a while, but when the copy is patronizing and silly to the point of absurdity, it just reeks of “try hard.”

It’s a pretty straightforward tip, but just read your outreach out loud before you hit send and try to put yourself in the receiver’s shoes. Bonus points if you can get an objective voice to review your email.

2. Save Time, But Not at the Expense of Quality

Another principle my team has is “automate as much as possible without sacrificing quality.”

Fact 1: emails get ignored. Fact 2: More emails tend to increase the odds of a response.

So, we set up HubSpot Sequences and automated parts of our email outreach follow up.

In actual fact, a large percentage of the links we acquired were from the second or third email from our Sequence. I’ll talk later about the diminishing returns of a ton of follow-up emails, but for now, suffice to say that you should probably send more than one.

The emails would gradually taper off in aggressiveness. Email 1 was a direct ask for a link:

Email 2 tapered off and focused more on getting them to write for us or do a quote for an article.

Email 3 was a “last shot” email and asked if they’d give an expert quote for a new article.

We tried to give as much value as possible in these emails without being spammy or annoying. We also tried to keep in mind that all of this influencer outreach was not only about the short-term benefit of acquiring links, but it also served the purpose to stake out a place in the customer success space and build relationships with important, smart people.

Also, if you can, track your outreach and try to improve it with time.

While doing Service Hub stuff, we tracked everything in HubSpot CRM, including what tier the contact was, creating a Deal pipeline, as well as setting up Workflows to remind us to reach out for link asks a few weeks after reaching out to certain contacts.

We also tracked all guest posts we asked for in Deals, just to make sure that everything was in one place. While this was a separate system from the one our content team used to track guest writers, it helped us get a high-level view of all of our influencer communications and progress.

3. Don’t Beat Around the Bush

Don’t obfuscate, don’t cut around the edges, just ask for what you want.

In other words, cut to the chase. Let’s not play games. It doesn’t need to be rude, but make it apparent. I can’t remember how many emails I’ve had to respond to with, “what exactly are you asking for?”

The person shouldn’t have to decipher what you’re asking for. Here’s how I did outreach for a recent Product Hunt launch:

Simple, yeah? Hopefully I didn’t break their “don’t ask for votes” rule, because I used the clever euphemism of “give some love.”

In this case, I knew the person I was emailing pretty well. Still, you can take this approach and apply it to cold outreach as well. Here’s an example of a link ask I really liked recently:

I do the same thing when I do link building outreach. I don’t beat around the bush, I make it abundantly clear what I want:

You don’t want to be the person who asks for a social share or, god forbid, for “feedback” on your article. I know this is touted as a best practice by some SEO experts, but it’s really just lying…You definitely don’t want my feedback, so pretending that you do is dishonest (I take it back if you actually want feedback. It’s a BIG ask of someone you don’t know, but you do you).

I’m not the only one who hates this type of stuff:

On this line: don’t preheat the oven by emailing me before you launch an article and asking permission to send it over. I know this is another tactic that everyone thinks works better, but it always makes me think you’re doing that thing where you’re using psychology to trick me. Foot in the door or whatever.

If you want something, just ask. You can do this on the first email, you don’t even need a warm up. We’re all busy, and if you’re emailing a blogger or influencer, they get lots of emails. Lay out what’s in it for me, the key takeaway (gimme value!), what you want, and make it easier for me to do it.

That last point leads me to the next rule…

4. Don’t Treat People Like They’re Stupid

This is the section where I rant about pet peeves, so if you just wanna learn what works and don’t care for moral finger wagging, move along.

For some reason or another, subtle persuasion tricks and timely tactics are what get passed around in the marketing space. Nowhere is this truer than in sales and outreach.

How many templatized emails have you gotten in the last 3 months? How many have you responded to?

You’ve heard the same refrains repeatedly:

  • “{{Name}}, are you the right person?”
  • “Is Thursday at 4pm or Friday at 2pm better for a call for you?”
  • “I wrote X piece of content. Do you mind if I send it over for you to check out when it’s published?”
  • “I’m a huge fan and reader of {{your blog}}!”

These all sound innocuous in a vacuum, but they all carry a really cynical underlying principle that marketers and salespeople sometimes have: the belief that, if only you could ‘trick’ the person into saying yes, your outreach problems would be over.

Though there are hundreds of these taught and repeated lines, and they’ll continue to adapt and evolve, let’s walk through these one-by-one just to understand how offensive these things can be…

“{{Name}}, are you the right person?”

This one is pretty transparently annoying. If I’m not the right person, why are you emailing me? It sounds like you need to do more research, not put the onus on me to do your prospecting. When you personalize your emails, that should include proper targeting parameters, not just personalization tokens on the name field and mass blasted to the entire company list.

It’s offensive that you’re essentially creating a negative externalized cost where I’m presumed to do your work for you (and stop whatever I’m doing to reply!)

Again, before I’m called out: I’ve appended an email with “if you’re not the right person, my bad, but if it’s at all interesting, can you intro me?”

I genuinely try to reach the right person and sometimes mess up slightly (“business development manager” vs “partnership marketing manager” is a hard difference to know off the bat). My point here is only to shame the disingenuous bunch who simply email a shitload of people at a given company to get some sort of response and introduction.

“Is Thursday at 4pm or Friday at 2pm better for a call for you?”

The logic behind this one is that if you give them a chance to tell you “no,” they will. This, instead, forces the choice to be between two different times, which rests on the assumption that the answer to the first necessary question, “do you want to talk to me,” is yes.

Neither time is good for me, so get outta here w/ that Calendly link.

“I wrote X piece of content. Do you mind if I send it over to you when it’s published so I can get some feedback?”

This is my least favorite one, even though on the surface it actually looks like the least presumptuous.

It’s my pet peeve because I know it’s a tactic, I know it’s copied from thought leaders, and I know it’s disingenuous. It is done not for my benefit and courtesy, but because you read somewhere online that the click-through rate is better when you first send a “pre-heat” email.

I will almost always ignore these, or respond by saying, “no, I don’t want to see your infographic.”

This is such blatant psychological trickery, because it’s so easy to say “sure, send it over,” and pass on the discomfort of rejection to a later date. But this propels the “foot in the door” technique, where you’re now a part of the conversation and more likely to link back to the person or whatever they want.

“I’m a huge fan and big reader of {{your blog}}!!”

Don’t say this if you’re not.

Additionally, if you’ve downloaded an email outreach checklist that includes specifically how many touchpoints you should have with the person before you reach out, crumple that up and throw it away. If you think liking 4 posts on social media gives you some specially ordained privilege to someone’s time, that’s a strange belief.

That doesn’t mean you shouldn’t follow people on social media and engage with them, but doing so purely as a calculated tactic is gross. If I see you pop up on LinkedIn one day, like a tweet of mine the next, and then send a sales email two days later, I’m exactly 0% more likely to say yes. You’re dealing with people here, and I can’t help but feel objectified if you’re following a conversational formula when beginning to engage.

I’m not saying any of this stuff is ineffective, I’m just saying that if you’re trying to ‘trick’ another person into giving you a backlink it’s not very ethical.

Addendum to Point #4

This section is basically a rant, and I realize that may not be helpful.

You may be saying, “Alex, these techniques work. Why would I stop just because they annoy you?”

I’d agree. In isolation. This is one of those cases where you should ask yourself, “if everyone in the world did this, would the world be a better or worse place?”

I struggle writing these sections, because it sounds like moral finger wagging. But I’m just as interested in effectiveness as anyone. I just truly believe that by focusing on these short-term and casually dishonest tactics, we’re burning up the fields for everyone else (and our future selves). They aren’t sustainable tactics.

At the same time, I realize that sometimes you need to be scrappy and get responses. In that case, feel free to treat this like an airy rant and move on. I’ll still try to give some tactical tips in the rest of the article.

5. Follow Up, But Don’t Be Obnoxious

I’ve been thinking a lot about Nassim Taleb’s “Silver Rule”: do NOT do to others what you don’t want them to do to you.

We can easily sit back and say “it works,” because we’re the doers and not the receivers. It’s the same case with the last tip, really; I’m not arguing that it doesn’t work, I’m arguing that it probably makes the world a worse place by doing it.

So, when you read advice on how many follow ups you need to close a sale, think about it from a pollution perspective as well: what if you were receiving those messages?

Image Source

Granted, most people probably give up too soon. Emails get lost in an inbox. You need to get attention if you’re doing business.

And it depends on the context. If you’re trying to close a massive enterprise software deal, you know, follow up. But if someone didn’t want to link back to your infographic, you really don’t need to email them seven more times.

Here, I think there are two opposing camps. One side is basically saying you should never send cold emails, let alone follow up on them. The other is saying you should do what it takes to close the sale, and that’s almost always several follow ups, usually 7+.

I couldn’t imagine sitting in either camp, so I actually ask myself several questions to contextualize the outreach campaign:

  • What am I asking for? What’s the effort it takes them to complete the request?
  • Do I know the person at all or have some sort of mutual connections?
  • How important is the ask?
  • How much value am I providing them in the exchange?

And then I make an intelligent decision. Sometimes it’s a single email w/ a follow up, sometimes it’s 4 emails. But it depends on the context.

Bonus Tip: Real Relationships Trump the Transactional

If you’re doing tons of cold email outreach, it’s generally going to be less effective than real relationships. People respond to asks according to three layers:

  • Do they know you?
  • Do they like you?
  • Is your ask in line with their goals?

If you don’t have any of those, you better be offering some tremendous additional value. But if you’re someone that is known, liked, and understands the goals of the other person, you have pretty good odds. This is true for any type of outreach: broken link building, sales outreach efforts, influencer marketing outreach, campaigns to generate leads, and on and on.

People don’t like to think like this, and I think that’s for a few reasons.

One, it’s mushy advice. “Build relationships,” sounds like one of those platitudes like “be yourself.” However, in this case, it really is the only advice that should matter.

Second, it’s hard and a long term thing.

It’s easier to think in terms of subject line trickery, optimizing open rates, copy and pasting outreach templates for your email campaigns, and adding cute GIFS to your cold outreach email, because those are easy to implement.

They’re also easy to test and prove out with data. If you send a few hundred w/ this email subject line and a few hundred with another, presuming you have somewhat rigorous experimental design, you can get a clarifying idea on what gets a better response rate and completion rate.
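
As a sketch of what that looks like in practice, here’s a simple two-proportion z-test comparing response rates between two subject lines. The counts are invented, and it assumes you have statsmodels installed:

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical outreach results: replies out of emails sent, per subject line.
replies = [42, 61]   # subject line A, subject line B
sent = [400, 400]

# Two-proportion z-test: is the difference in response rate plausibly real,
# or within the range you'd expect from random noise?
z_stat, p_value = proportions_ztest(replies, sent)
print(f"response rates: {replies[0]/sent[0]:.1%} vs {replies[1]/sent[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")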

Building relationships has no predictable ROI. It shouldn’t. One of the worst types of people in the world is one that expects business ROI from relationships.

Final Thoughts

Most advice on email outreach is pretty good. I’m not adding too much new here:

  • Add value
  • Don’t be super weird or robotic
  • Don’t be annoying or make the world a worse place

But we usually take away the least effective and most short-term parts of email outreach advice – the tactical, the short term, the psychological trickery, the subject lines, the social proof, the email campaign automation.

Write outreach emails that you yourself would want to receive.

Add value, and don’t be a dick.

Content Optimization: How to Make Content Better https://www.alexbirkett.com/content-optimization/

They say the third lever of content marketing growth is content optimization.

Content creation, content promotion, and content optimization.

Who’s they? Bloggers, speakers, thought leaders – you know the lot.

Because of my background in conversion optimization, and just a general desire to improve and optimize things, content optimization is exciting to me.

Optimization implies improved ROI, efficiency, and scale, with returns that continue and compound over time.

Content optimization means (presumably) that we can spend less time creating and distributing our work, and get more value from what we’re putting out there.

That’s the theory, anyway.

What doesn’t get talked about as much is how the hell one optimizes old content in the first place.

Well, it’s something I’ve thought a lot about and done even more of.

So that’s what this article will cover: how to look back at content you’ve already launched into the world and improve it to rank higher in search engines, travel further on social media, or convert better, systematically and at scale.

My case is that content optimization should be a core part of your marketing strategy, especially if you’ve already published at scale.

Content Optimization: Two Different Approaches

When looking back at old content, you can look at things two different ways (both valid and valuable):

  1. Find high traffic but low converting posts and increase the conversion rate.
  2. Find low traffic but high search volume/potential posts and increase search engine rankings or distribution.

The first method, in my opinion, is easier, at least from a prioritization standpoint.

You can very easily build out a model using your total traffic and your historical conversion rate metrics to calculate, with some degree of accuracy, how much value you can expect. This is basically a “what if?” analysis and I’ll walk you through how to build one out in a minute.

It’s also easier because we usually “set it and forget it” when it comes to conversion offers with content. With a little care and thought it’s usually pretty easy to optimize this part.

The second method (search engine rankings) usually has a higher ceiling in terms of how much value you can squeeze out of it. Terms like “great content” and “high-quality content” and even user experience are all somewhat subjective in search engine optimization, so it’s harder to know exactly how to update a piece and how much extra value you can get from it.

The difference between clicks on the first, second, third, and all the other SERP results is astounding, and if you can lift your rankings you can gain a lot of traffic. Similarly, even the top SEO companies have tons of pages ranking from 5 to 20, and with a bit of effort, it’s always possible to lift those.

Image Source

The first method is mostly going to involve strategic work.

You’ll run an analysis of top opportunities, calculate the upside, and then go through the process of optimizing the acquisition pathways on each page you deem worth it. That last step, optimizing the acquisition pathways, is a ton of hard work and takes a talented hand. It takes creativity, empathy, and skill (i.e. good marketing).

The second method also takes a lot of analysis work, but it’s generally a bit easier to understand how you can improve a page if you’ve got a decent understanding of SEO. It’s usually some combination of content quality, internal linking or site architecture, external linking work, or some low hanging fruit like H1/H2/title tag optimization.

I’ll walk through each of these things in depth, to the point that it may get strenuous to read this guide if you’re only interested in one of the methods. Realistically, you probably should focus on one of these at a time, as each step will require a ton of trial and error and will rarely be easy or clean in practice.

This post is like a darn book. To that end, here’s a table of contents to help you jump around as you please.

CRO & Maximizing Conversions on High Traffic Content

If you’re doing content marketing or digital marketing at all, it’s very likely your content follows a power law: most of your traffic comes from a few posts. That’s the way it was at CXL, and it’s that way at HubSpot, too. Most content powerhouses deal with this type of distribution.

Image Source

The wrong way to look at this (as many “analysts” have) is that you should produce less content. That’s not a solution, that’s the table measuring the ruler.

You can’t guess what’s going to be a massive success ahead of time (though you can increase the probability with good strategy and execution).

No, the point of this is to say that you’re going to have some posts that have way more traffic than other posts. However — these posts will often have a much lower conversion rate than lower traffic pages.

This may or may not be true of your site, but I’ve seen this firsthand from every site I’ve worked with.

(There’s also a power law with the # of blog posts that deliver the highest percentage of leads/conversion usually as well. Sometimes there’s overlap between high traffic & high conversion blog posts, and that’s just magical).

The most common explanation is that the top post is so top-of-funnel that users aren’t converting as high on the same offers as on your bottom-of-funnel posts.

The second most common explanation is that you hit a high traffic topic that’s slightly outside your niche (if you sell commercial kitchen supplies to restaurants in Austin, an infographic on the top coffees in town may or may not be super relevant to conversion).

Image Source

In both cases, the fix is to align your offers on-page with your visitors’ intent and customer journey stage. You have to match the incoming temperature of your visitor – don’t offer them an ebook if they want a demo, and don’t offer them a dress suit if they just want a first time visitor discount (and maybe a tie).

And if you have no conversion points on your page, well, add one. Easy fix, there.

How to Find High Traffic/Low Conversion Pages

There are many analytics platforms. The most ubiquitous of the analytics tools is Google Analytics, so even though you can probably grab insights from HubSpot or Sumo or whatever lead capture tool you use, we’ll use GA here.

We’ll pull a quick report that will give us an approximation of the conversion rates of different posts. This assumes that you have goals set up for your “conversion,” which could be an email collection form or otherwise. This report also relies on “landing pages” as the variable, so we may be missing out on some nuance with people who view lots of blog posts or navigate your site from somewhere else, but then convert on a specific blog post.

Anyway, we want useful, not perfect.

Go to Behavior > Site Content > Landing Pages. Then use the “comparison” view instead of the table view. Change the metric that you’re comparing from “Sessions” to “Goal Conversion Rate.” It should look like this:

You can also use a filter like “/blog/” or whatever you use to distinguish your blog posts from non-blog posts (sometimes you’ll have a specific View for your blog, in which case just use the whole report).

From there, you can find which high traffic blog posts are converting much lower than the site average. I talk more about how to do this in my post on content marketing analytics, by the way.

You can also pull this data to Excel in raw format and do a similar analysis, but it usually suffices to just focus on the highest traffic, lowest converting posts, and you can see that starkly with this report.

If you’re doing it in Excel, pull your data over and use conditional formatting to highlight blog posts that convert less than the average. Then use a filter to only look at those:

Quick point: I love Google Analytics as much as the next guy, but it actually may be easier to use the analytics from your marketing tool in this case.

At least in the case of HubSpot, the CTAs tool has great reporting and you can compare side-by-side with all of your CTAs (or export the data and analyze it elsewhere). It shows which pages are converting best that are using the same CTAs and it also aggregates CTA conversion rates so you can compare apples to apples.

Image Source

Now you have a good idea of which blog posts represent the biggest opportunities, at least from a bird’s eye view. Next you need to prioritize which ones you’ll focus on first and how much lift you can expect.

How to Prioritize and Size Opportunities

We’ll need to dump our data into Excel for this. We only need three basic variables: blog post title (or URL), page views (try to do an average monthly count from a spread of 3-6 months), and conversion rate (same thing with the average).

Where you get this doesn’t matter. You can pull it from Google Analytics, your marketing automation tool, or your analyst’s magic crystal ball (just not from your imagination).

Just make darn sure you have good quality data and that you trust it.

What you’re about to do is a common planning and projection analysis used to see what the upside of certain actions is (a watered down version of it, anyway). If the data isn’t right, your projections aren’t going to be worth much.

So, pull your data to Excel. On first strike, I like to only pull the top ten trafficked posts that are below the site average. You can find those using the above Google Analytics report, or by bringing your data to Excel and using conditional formatting to show those below average.

Then use a filter to only show those that are highlighted:

Once you have those, build out some additional columns for your projected values. You can get more precise with this, but to keep things simple, I like to use the site average to project out numbers. The assumption is that, if that’s the average, we can probably get any post there with some optimization effort (obviously that simplifies things, but it’s good for prioritization):

From there, it’s extremely easy to see which opportunities are the biggest. You can even project these numbers out over a longer time period (such as a year) to see what the potential upside could be.
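
To make that concrete, here’s a minimal sketch of the projection in pandas. The URLs, traffic numbers, and the assumed site-average conversion rate are all placeholders:

import pandas as pd

# Hypothetical monthly data for posts converting below the site average.
posts = pd.DataFrame({
    "url": ["/blog/guide-a", "/blog/list-b", "/blog/howto-c"],
    "monthly_pageviews": [45000, 22000, 9000],
    "conversion_rate": [0.002, 0.006, 0.004],
})
site_avg_cr = 0.011  # assumed site-wide average conversion rate

# Incremental conversions if each post were lifted to the site average,
# projected per month and per year.
posts["monthly_upside"] = posts["monthly_pageviews"] * (site_avg_cr - posts["conversion_rate"])
posts["annual_upside"] = posts["monthly_upside"] * 12

print(posts.sort_values("annual_upside", ascending=False))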

This type of modeling helps especially when you have to make tradeoffs. For instance, if you have enough content resources to either invest in this type of conversion optimization, in net new content creation, or in SEO projects to lift current content to get more traffic, then you can see which one merits the prioritization.

Note: this is but one way to model things out.

Also, “all models are wrong, but some are useful.” The point here isn’t to project the exact number of conversions you’ll get, but rather to choose between projects when you have a set amount of resources.

Even within this list, it helps you choose which articles you should focus the most attention on.

How to Gauge Intent of Visitors and Align Your Offer

In PPC advertising, there’s a popular notion that takes into account the “temperature” of a target audience. A display ad may be reaching completely cold traffic, so your offer shouldn’t be something bottom of the funnel like a demo. Maybe it should be an e-book, or something that pushes them down the funnel until they’re a warmer temperature.

Image Source

People don’t talk about this as much with organic search traffic, but it’s the same case: people land on your site with widely varying levels of intent.

How do you determine the intent and user journey stage?

There are many ways of doing so, but they all start with understanding what channels people are coming from and what keywords they’re searching. Analyzing your marketing channels is simple. Log into Google Analytics and go to your Acquisition > All Traffic > Source/Medium report.

Start digging around and asking questions. What are your highest performing channels? Lowest performing? If you’re running campaigns, what ones are doing well and what ones are doing not so well?

Explore the data a bit.

Specific to SEO traffic, you need to analyze what keywords are bringing users to your pages. To do that, enter the URL of a blog post in Ahrefs and click on “Organic Keywords” (you can also get this info from Search Console or many other SEO tools):

What *did* happen to Alex and ROI?!

Then you need to classify these keywords into a temperature state: are they warm, ready to buy visitors, or are they cold, barely know your brand visitors? This helps define your offer and conversion pathway:

  • If you’re a nerd like me, you might be interested in running clustering and classification algorithms to place keywords in user journey state buckets (read this on how to do that). (Disclosure: I’m still working on doing this in a way that I trust and that doesn’t take lots of tinkering and tweaking. Work in progress but promising)
  • If you’re not, you may have just as much success using common sense to bucket keywords into user state (read this on how to do that); a rough sketch of that follows below.
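
Here’s that rough, rules-based sketch of common-sense bucketing. The modifier lists and keywords are assumptions for illustration, not a tested taxonomy:

# Hypothetical "temperature" buckets based on simple keyword modifiers.
WARM_MODIFIERS = ("buy", "pricing", "discount", "best", "vs", "review")
COLD_MODIFIERS = ("what is", "how to", "is", "guide")

def keyword_temperature(keyword: str) -> str:
    kw = keyword.lower()
    if any(m in kw for m in WARM_MODIFIERS):
        return "warm"
    if any(kw.startswith(m) or f" {m} " in f" {kw} " for m in COLD_MODIFIERS):
        return "cold"
    return "unclassified"

keywords = ["is cbd oil safe", "buy cbd oil online", "cbd dosage guide", "cbd oil reviews"]
for kw in keywords:
    print(kw, "->", keyword_temperature(kw))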

Running Content Experiments and Converting Visitors to Leads or Customers

Content experiments are tricky because they are at an increased risk of being affected by things like seasonality and other validity threats. Google’s algorithm changes a ton, people search with different intent at different times of the year, and it’s hard to test on a truly representative sample.

However, you can still test, and you should still test – the same way you would with any other website element or experience.

I like to test at the bottom of the funnel. Don’t worry about things like time on page or bounce rate, use something like conversions as your metric to optimize against.

Most lead capture tools allow you to do this on their platform (if they don’t, get a new one). You still have to adhere to the same statistics principles you would with any other A/B test (and time period comparisons are still a bad methodology, as is always the case when trying to infer causality).

I’ve written a million articles on A/B testing at this point, but these three will cover everything to get you started:

Lifting Traffic Where There’s Potential

There’s another side of this content optimization coin: lifting up traffic. If you have lots of content already, it’s likely you rank for some stuff, don’t rank for other stuff, and rank on the second or third pages for the rest.

Content optimization is all about lifting those high-value pages that aren't ranking, and especially those that are almost ranking on page one, to the front.

How to Find Articles That Are Losing Traffic

Here’s a sad fact marketers have to grapple with: even if you build a great piece of content and it ranks well, eventually it may start to lose traffic.

That could happen for a variety of reasons:

  • Competitors start to create content that outranks you
  • Google's SERP changes (adding featured snippets, ads, etc.)
  • Search volume for your keywords drops

There's not much you can do about the third one, but knowing what the issue is (and that there is an issue) helps you move forward on a potential plan. Competitors outranking you? Beef up your content and build links. SERP changes? Optimize your content to get that featured snippet, carousel, or whatever else.

Image Source

First step, though, is to find out if you’re losing traffic (and which posts are losing the most). Here’s how you do that.

Log into Google Analytics and pick a period of time (a couple of months works) from last year (let's say January 1 – March 1, 2017).

Then, go to Behavior > Site Content > Landing Pages and set your time range. Also, set your filter so that you’re only analyzing the property you care to analyze (e.g. /blog/).

You could get a high level view from here, but I prefer to narrow down to only organic traffic. To do that, set up a secondary dimension of “Default Channel Grouping.”

Then set up an advanced filter that only includes “organic search.”

Next, include all rows (scroll to the bottom and adjust the number where it says “show rows”) and export this data to CSV.

Open your spreadsheet and name the first tab whatever month and year it is (Jan – Mar 2017). Then delete all the data you don’t need (leave only the URL and the Sessions columns):

Go back to GA and change the date range to the current period. Make sure it’s the same time period and same start and stop dates, but for this year. This helps iron out traffic differences due to seasonality (always compare apples to apples). In this case, it means we need to set our date range from Jan 1 – March 1 2018.

Export to CSV, and bring it to tab 2 of your spreadsheet. Again, delete all data except for the URL and Sessions. Then rename the tab to something like Jan – Mar 2018.

Now add another column (Column C) to tab #1 and name it something like “Sessions 2018” (also rename Column B to something like “Sessions 2017”). Now do a Vlookup, like the following (in Column C) where ‘tab 2’ is the title of your tab:

=VLOOKUP(A2, '[tab 2]'!A:B, 2, FALSE)

Should look like this:

Now we're going to see if there has been a significant drop. You can use whatever percentage you think is significant, but in this example we'll flag anything that has dropped by 20%.

Add column D, title it “20%+ decline?”, and then insert this formula in D2:

=IF(C2<(B2-(B2*0.2)),TRUE,FALSE)

Looks like this:

That formula checks whether the number in Column C has dropped by more than 20% relative to the number in Column B. Then you can use conditional formatting to highlight the rows where that's the case.
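
If you'd rather do this comparison in a script than with VLOOKUPs and conditional formatting, here's a minimal pandas sketch. The file names and column headers ("Landing Page", "Sessions") are assumptions based on a typical GA landing page export, so check yours and adjust.

import pandas as pd

# Two Landing Pages exports (organic only), same date range a year apart.
# File names and column headers are assumptions; match them to your actual exports.
last_year = pd.read_csv("landing_pages_jan_mar_2017.csv")[["Landing Page", "Sessions"]]
this_year = pd.read_csv("landing_pages_jan_mar_2018.csv")[["Landing Page", "Sessions"]]

merged = last_year.merge(this_year, on="Landing Page", how="left",
                         suffixes=(" 2017", " 2018"))
merged["Sessions 2018"] = merged["Sessions 2018"].fillna(0)

# Flag pages whose organic sessions dropped by more than 20% year over year.
merged["20%+ decline?"] = merged["Sessions 2018"] < merged["Sessions 2017"] * 0.8

declining = merged[merged["20%+ decline?"]].sort_values("Sessions 2017", ascending=False)
print(declining.head(20))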

Note: the data I’m using is from Google’s merchandising store so it’s kind of boring. It’s way more interesting if you’re using blog data because of the natural fluctuations in rankings and traffic over time. But alas, my personal site doesn’t have enough organic traffic and HubSpot probably wouldn’t love it if I shared screenshots of GA data, so Google demo account it is ¯\_(ツ)_/¯

The next question, if you’re losing organic traffic over time, is why? There are a few common culprits:

  • You’ve fallen in rankings
  • The SERP experience has changed (featured snippets, carousels, etc., have been added)
  • Your click-through-rate has changed
  • Search volume for your keywords has dropped

So, you need to triangulate. Tracking rankings is easy: every SEO tool does it, and you can also do it in Google Search Console.

If you haven’t dropped rankings, has your CTR fallen? Again, you can track this in Search Console.

If your CTR hasn't fallen, has the SERP changed? If there are featured snippets, carousels, ads, etc., can you capture those spots without a herculean amount of effort?

If the answer is no to all of those, it's likely that search volume for the keywords you were ranking for has fallen. You can get an approximation of this effect in Search Console by looking at your position over time and your impressions over time, but it still won't be precise: you don't know which long-tail keywords you may have been ranking for that dropped off, and the trends are approximate and averaged.
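
That triage order is easy to lose track of mid-analysis, so here it is condensed into a quick checklist function. It's just the reasoning above expressed as code; the inputs are whatever you've read off your rank tracker, Search Console, and the SERP.

def diagnose_traffic_drop(rankings_dropped: bool, ctr_fell: bool, serp_changed: bool) -> str:
    """Walk the triage order: rankings first, then CTR, then SERP features, then demand."""
    if rankings_dropped:
        return "Rankings fell: beef up the content and build links."
    if ctr_fell:
        return "CTR fell: rework the title tag and meta description."
    if serp_changed:
        return "SERP changed: try to capture the featured snippet, carousel, or other new feature."
    return "Likely a drop in search demand for your keywords."

print(diagnose_traffic_drop(rankings_dropped=False, ctr_fell=True, serp_changed=False))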

What should you do in that case?

My advice: Drink a glass of wine and take your dog to the park. Maybe learn a new language. Life isn’t all about SEO and marketing.

How to Find Articles That Are Almost Ranking Well

The best way to grow your traffic may be to publish net new articles. That’s true especially if you’re starting out. But it’s more likely, especially if you have a lot of content already published, that you’re almost ranking well for a ton of high value keywords. You’ve just gotta find ‘em, analyze them, and optimize them.

There are a few ways to do that. I’ll show you one of those ways (one that assumes you have an Ahrefs account, which you totally should have).

First, log into Ahrefs and enter the domain that you’d like to analyze.

It’s possible, too, that you just want to analyze a specific subfolder or subdomain if your site is set up that way (e.g. site.com/blog). Whatever the case, enter that in the domain explorer.

I’ll use CXL as an example since my personal site has virtually zero traffic (you can analyze any property you want in Ahrefs – pretty neat for competitor analysis or client work, but that’s another story).

You'll see a variety of interesting numbers on your dashboard and features on the side. Ignore them all except for “Organic keywords” on the top. Click on the number (in this case “113K”). That will bring you to a dashboard that shows all the keywords you're ranking for in the search engine and the corresponding URLs.

From here, you’ll want to filter things down. It depends on what rankings you’d like to isolate, but I consider anything in the 10-21 range worthy of optimization (another nice set could be from 6-10 if you really wanna inch up on the results page, or 11-21, or really whatever range you want. These are arbitrary numbers for the most part).

So click on “Position” and choose which rankings you want to filter for.

After that, set up a filter for volume. Again, this depends on what you consider a worthy amount of volume. I try to optimize for keywords above 1k, but let’s set the bar at 200 for now.

This will allow us to combine similar keywords later in Excel to get a better picture of the overall opportunity (e.g. if “Customer Satisfaction Surveys” ranks for both “how to measure customer satisfaction” and “satisfaction survey template,” we want to include both of those in our opportunity analysis).

Now export your file to CSV.

Cut down the columns you don’t care about (historical rankings, etc.). You now have raw data, and actually, you can get a pretty good picture of which opportunities exist from a qualitative look at this data:

Especially if you add conditional formatting to the volume and difficulty (or CPC) columns, you can see which blog posts represent the bigger opportunities for optimization.

However, my favorite thing to do here is to create a Pivot Table. Doing so allows you to combine the volume of two or more keywords that a single blog post is ranking for.

For example, if Blog Post X is ranking in position 12 for Keyword A (500 volume) and in position 14 for Keyword B (1,000 volume), then we can see that the average ranking for this URL is 13 and that it has a potential search volume of 1,500 (note: you don't have to use average position. It can be confusing, but it helps me size the ease of an opportunity). This makes it easier to look at absolute opportunities.

Here’s how I set that up in Excel:
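
If you'd rather script the pivot than build it in a spreadsheet, a rough pandas equivalent might look like the sketch below. The column names ("Keyword", "Current URL", "Volume", "Current position") are assumptions based on a typical Ahrefs export, so rename them to match your CSV.

import pandas as pd

# Ahrefs organic keywords export; column names are assumptions, check your CSV.
df = pd.read_csv("ahrefs_organic_keywords.csv")

# The same filters used in the Ahrefs UI: positions 10-21, volume of at least 200.
df = df[df["Current position"].between(10, 21) & (df["Volume"] >= 200)]

# The pivot: combine the volume of every keyword a URL ranks for,
# and average the position to get a rough sense of how close each page is.
opportunities = (
    df.groupby("Current URL")
      .agg(total_volume=("Volume", "sum"),
           avg_position=("Current position", "mean"),
           keyword_count=("Keyword", "count"))
      .sort_values("total_volume", ascending=False)
)

print(opportunities.head(10))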

If you’d like, you can then pull these entries to a different sheet and order them by traffic potential. If we do that, we can see that the top 10 opportunities represent a search volume potential of about 500,000:

From there, you can head back over to your raw data sheet and check out which keywords correspond to the URLs with high search potential. Here are the keywords for my example URL (a post on cognitive biases written by my past colleague, the super talented Shanelle Mullin).

What can they do from here? Well, a few things, depending on the context.

The first thing I would do is type each of these keywords into a) Google and b) Ahrefs to see what currently ranks, along with the backlink profiles and competitiveness of the other sites ranking.

Let’s try that with “list of cognitive biases,” for which CXL is ranking #20.

It’s not a shock that many of the currently ranking articles are informational and come from top sites, like Wikipedia, Mental Floss, and Business Insider.

Another thing to note is that they're more general than the CXL title, as they relate to all applications of cognitive bias and not just CRO. Realistically, it's a better branding play for CXL to include the focus on CRO, but it may be limiting the search traffic and intent, which is something to consider in optimization.

Next, I would look at how these results stack up from a competitive perspective. Plug your own URL into Ahrefs and get your baseline data on quantity of backlinks, Domain Rating, URL Rating, etc.

Then plug the keyword you're trying to rank for (reminder: “list of cognitive biases”) into the keyword explorer tool:

Scroll all the way to the bottom of this report and look at the current rankings. You can see, side by side, the backlink counts, Domain Rating, URL Rating, and “Ahrefs Rank” (a sort of aggregate metric that attempts to tell you how strong your search capability is).

Learning from a quick scan: Wikipedia is a monster and won’t be fucked with, but the others are all subject to be overtaken.

It would take a bit more effort to analyze the quality of each of the articles on that list (and I won't walk you through that), but you essentially want to match the search intent (clearly a list of cognitive biases), optimize on-page for that, and build links.

Optimizing on-page is a huge topic, so I’ll defer to the master on that topic: On-Page SEO: Anatomy of a Perfectly Optimized Page

You can also use a nice tool like SEMrush’s SEO Writing Assistant or other content optimization software like Surfer SEO.

Finally, you can work on click-through rate to squeeze even more traffic out of your rankings. Here's a good article from WordStream on how to do that.

So, to optimize this piece of content, we have a) a possible page title change, b) some on-page optimization, c) internal linking, d) some beefing up of the content to make it more thorough than the others, and e) link building.

I won’t go into link building fully, as I’ve done that in a previous article on content promotion. But I want to briefly go over how to optimize your content to make it easier to build links (by building in linkable assets).

6 “Hooks” for Rankable and Linkable Content

One way to create linkable content is to genuinely write the best thing on the internet on that topic. It may sound grandiose, but that was the explicit content strategy we held at CXL.

Outside of that, there are other more tactical things you can do to help out with link acquisition and social media shares. There are a variety of these, but in my experience, it comes down to a few really effective ones. Scott Tousley and I call them “content hooks”:

  • Original data & stats
  • Original Images
  • Charts and Graphs
  • Quotes from influencers
  • Frameworks
  • Pros and Cons Tables

The mindset here is that you work backwards and think, “given the target sites I'd like to get links from, how can I craft my content to make it easier to acquire those links?” In the marketing world, if you have original data, fancy new frameworks, or original images or charts, it becomes leagues easier to add value.

A brief walk through these, with examples, is in order.

1. Original data & stats

This one involves some heavy lifting in terms of cost, but if you can pull legit, impressive data and publish it, you're going to have a competitive advantage. Certain companies really excel at this, including CXL with their UX studies.

Image Source

Buzzsumo also does this really well with their huge content analyses.

Image Source

HubSpot has a whole research program dedicated to original insights.

2. Original Images

True story, I was recently at a conference where I saw that some original images we created to explain A/B testing had been used by a keynote speaker (w/o crediting us, by the way).

People search for images, especially when creating content (blog, conference talk, or otherwise), and if your images come up when they search, you get a link (as long as they actually credit you).

My line of thought is, if you’re going to use images, why not try to create your own wherever that is possible? We did that for HubSpot with our NPS survey image:

This is an especially helpful tactic if you can create a visualization for a complicated topic, like segmentation or multivariate testing.

3. Charts and Graphs

This one is sort of a hybrid between “original images” and “original data,” but essentially you want to give some impressive data visualization to explain concepts or insights. It’s a big trend for bloggers to write data-driven posts, and images like these give the impression of using data to support your claims (doesn’t matter if the chart is bullshit, it’s going to get links anyway).

Here’s an example of a CSAT journey map I put together in R for a HubSpot post:

I’m no master of data visualization, and things can get super sophisticated, especially when you start to implement interactive visualizations. Ryan Farley did a great job of this with his interactive retention visualization:

4. Quotes from influencers

Roundups are usually boring, but quotes from smart people help you a) create better content and b) promote that content on social media once it’s published. Working with smart people to put together content also helps you build relationships and support smart voices by giving them a platform.

I certainly have an affinity for BigCommerce when they feature my opinions in their articles:

Image Source

There may not be a direct route to a link here, but there is a pathway through increased social shares and distribution that usually leads to natural links. Plus, as I mentioned, if you curate your features well, it can help you create better content. Matt Gershoff, CEO of Conductrics, has certainly made my articles smarter than I could have made them on my own:

Roundups can work, too, if they don’t suck. Peep put together an awesome one on new GA features. Luiz Centenaro put together a nice one as well on community building:

5. Frameworks

When in doubt, invent a framework. Bonus points if it’s actually useful. I’ve done it a bunch at HubSpot:

This framework is admittedly not that useful. I just made an acronym out of the process for running customer satisfaction surveys. Who knows, though, maybe it helps someone remember the information better.

A better example is something like PXL, an A/B test prioritization framework that is undeniably useful. It’s something that I’ve used with clients to help prioritize experiments:

Brian Dean, however, is the king of this tactic. He not only uses this technique all the time, popularizing terms like Skyscraper Technique, but he also named the technique of naming techniques. Meta! His frameworks genuinely help explain SEO concepts in a simple and actionable way, so they catch on.

The best thing you can do is create a framework that truly helps fill a knowledge gap or helps people put a concept to use. I think Brian Dean, CXL, WiderFunnel, Reforge, and others have done this really well.

6. Pros and Cons Tables

The world is a confusing place. If you can help visitors clear up confusion on a given topic or set of solutions, you deserve a link. For example, there are lots of customer feedback survey types, so we listed pros and cons of each one to help people choose the appropriate type for their scenario:

We also created original images of these tables, combining that tactic as well.

Any way you can visualize or simplify comparisons or pros and cons can help users make decisions. Can you do it with software or pricing? Conversion Rate Experts did that really well with A/B test software comparisons:

Optimize On-Page SEO

There are tools now like Clearscope and Surfer that help you figure out what the gaps are in your SEO content.

Basically, you plug in a target keyword and your text, and then get a score and recommendations to better position yourself to rank in search engines. These tools reverse engineer ranking factors, help you find relevant keywords to use, and generally make the article more SEO-friendly and better matched to the searcher's intent. Here's a screenshot of this very piece in Clearscope:

This will give you keyword, word count, subheading, and readability recommendations. Outside of that, you can make marginal gains by improving title tags, alt tags, meta tags, etc. Same with internal linking and other HTML updates. At scale, on a large enough website, these can move the needle, but on any one given piece they're small potatoes.

Sometimes, you need to re-sculpt the article to rank for an entirely different keyword. This happens when content doesn’t match the search intent of the search query driving people to the post.

Figuring that out requires some keyword research. You want to see, other than your target keyword / primary keyword, what search terms you already rank for and some new ideas for search terms you could target directly. These will likely be search terms that you don’t rank on the first page for.

For this piece, I rank for terms like “email blast examples,” but the piece is currently written as a generic, high-level guide. So I could rewrite the piece to focus more on the “examples” intent.

Relaunch: How to Get Back Off the Ground

After you beef up your content with some on-page optimization and add some link hooks, you should relaunch it. Give it a little velocity. It's a new and improved piece, after all. Why not give it some content promo love?

Basically, you can launch the thing like it’s new again. After all, it kind of is. As with most things content & SEO related, Brian Dean is the master and he’s already written a great guide/case study that covers how to do this. Check it out here.

Conclusion

Content optimization is important, often talked about, and rarely understood. How do you optimize content? What does that even mean?

Here we’ve laid out two paths to doing so: improving conversion paths and improving traffic growth. Within those two paths there are multiple tactics for analyzing, prioritizing, and optimizing content for increased traffic, conversions, and whatever else you’re chasing after.

One can never truly encompass a topic and all the creative tactics that are possible, though. For that reason, I leave it as an open question: what am I missing? Any creative ways to surface optimization opportunities, uses of personalization, or otherwise? Feel free to comment or shoot me an email or whatever.
