Tag Archives: Analysis

Landing Page Optimization: Free worksheet to help you balance segmentation and resources

All things being equal, the more segmented and targeted your landing page is, the higher your conversion rate will be. Everyone in marketing knows that.

However, there is another part of the equation that is rarely talked about: the more segmented and targeted your landing page is, the more resources (time, focus, development, agency hours, etc.) it will likely take.

Sure, there are some tools that will automate this process by automatically displaying, say, a relevant product recommendation. There are some that will reduce, but not eliminate, extra work by pulling in anything from a relevant dynamic keyword change to entirely different content blocks.

But for most companies today, getting more segmented with their landing pages is going to take time or money that could be spent on something else.

So how do you find the balance? When is it worth launching a new landing page to serve a new motivation, and when can you make your current landing pages serve multiple motivations?

We’ve created a free worksheet to help you make that decision and (if necessary) get budget approval on that decision from a business leader or client.

 

Click Here to Download Your FREE Landing Page Segmentation Worksheet Instantly

(no form to fill out, just click to get your instant download of this PDF-based tool)

 

This quick-and-easy tool helps you decide when you need a new landing page to target a more specific audience. Here is a quick breakdown of some of the fields you will find in the worksheet, which has fillable form fields to make recording all the info easy for you.

Step 1: What do you know about the customers?

Who are your ideal customers? It’s important to know which customers your product can best serve so you can make the right promise with your marketing.

Possible sources of data to answer this question include transactional data, social media reviews, customer interviews, customer service interactions, and A/B testing. The most popular way to learn about customers is with an internal metric analysis, which is used by 69% of companies.

You’ll want to know demographics like age(s), gender(s), education, income(s), location(s) and other factors that are important to your product.

You’ll also want to know psychographics like what they move toward (their goals), what they move away from (their pains) and what value(s) they derive from your product purchase.

You also want to know who the page needs to serve from among the customers. Is it someone who has never visited before and is unaware of the category value? A repeat purchaser? And so on. Knowing their previous relationship to the landing page, your company and your products is important to creating high-converting landing pages.

Step 2: Based on what you know, what can you hypothesize about the customers?

What are the motivations of visitors? Visitor motivation has the greatest impact on conversion, according to the MECLABS Institute Conversion Sequence Heuristic. You can get indications about what motivations these visitors might have, based on sources like inbound traffic sources, previous pages viewed, A/B testing results, site search keywords, PPC keywords, customer service questions and testing, not to mention the previous info you’ve already completed about demographics, psychographics and the like.

You want to hypothesize what different motivations visitors might have, and why they have that motivation (keep asking why until you get to the core motivation; this can be very informative).

For example, I have a Nissan LEAF. I had multiple motivations for buying a LEAF. Motivation A was to get a zero-emission car. Motivation B was to save money on gas, maintenance, etc.

Drilling down into Motivation A, why did I want a zero-emission car? Because I didn’t want to pollute. Why? Because I didn’t want to increase local air pollution or add to climate change. Why? Because my kids breathe the local air and will be impacted by climate change.

Getting down to the core motivation might create messaging that taps deeper into your customers’ wants and needs than simply mentioning the features of the product.

Which brings up the next question: What must the landing page do to serve these motivations? You can use the previous info, previous customers, analytics and previous purchases, along with intuition, to answer that question.

Essentially, you want to be able to fill in the blanks: The landing page must do ________________ so customers can ______________. Use as many as apply to the motivations you are trying to meet. Is there a natural grouping? Are they very different?

Using the earlier car example, the landing page must do a good job of tapping into customers’ desire for a better, cleaner world so customers can see the deeper environmental impact of driving a zero-emission vehicle.

Step 3: Based on customer motivations, does it make business sense to create a new landing page?

This is where the rubber meets the road (car analogies notwithstanding). All marketers are pro-segmentation. But you can’t do everything.

On the flip side, marketers can underinvest in their landing pages and overinvest in traffic driving, ultimately leaking money by having too few, unsegmented landing pages that try to do too much for too many different motivations — and thus do none of them well.

Does it make business sense to make a new, more segmented landing page? Three more landing pages?  Dozens of dynamically generated content boxes or headlines targeting different motivations for a specific landing page?

Now that you have a sense of the different motivations you’re trying to serve, you should ask what distinct customer sets these customers represent, and what percent of profits each generates. If it helps to identify them, assign a name to customer sets that have similar motivations. Whether it’s something like Aspirational Suburbanites or Laid-back Lindas, some element of personification can help you feel closer to the customer. You should combine your transactional and analytics data with the previously completed info to arrive at the customer sets and percent of profit generated by each.

This is the value side of the equation.

For the cost side of the equation, you need to ask how many resources it takes to create a new landing page. Based on your work with web or design agencies, outside consultants and internal development teams, it helps to put a cost on the work even if it’s internal salaried employee time that you won’t technically be billed for. That will help you understand if there is an ROI for the work. Costs you want to consider include your marketing team, copy, design, development, conversion optimization and A/B testing.

Decision: Do I need a new landing page?

With this info, you can decide if you need a new landing page. Does the landing page you already have, or the one you are currently developing, closely enough match the motivations of the profitable core of customers? Could it, with edits, be made to match those motivations? Or is a new landing page needed to more closely serve the motivations of a profitable subgroup of customers?

Seeing the amount of business you can get — and the cost it will take to get you there — can help you get past the simple idea that segmentation is good or that your current landing page is good enough for all customers. You can move on with a deeper understanding of whether or not your business should invest in a more segmented landing page(s) to better tap into motivations of a uniquely motivated (and profitable) set of customers.

Use this worksheet to make the decision for yourself and make the case for budget to your business leaders and clients.

 

Click Here to Download Your FREE Landing Page Segmentation Worksheet Instantly

(no form to fill out, just click to get your instant download of this PDF-based tool)

 

Special thanks to MECLABS Web Designer Chelsea Schulman for designing this sharp-looking interactive worksheet.

Related Resources

Lead your team to breakthrough results with A Model of your Customer’s Mind – These 21 charts and tools have helped capture more than $500 million in (carefully measured) test wins.

B2B Marketing: Homepage segmentation effort increases time spent on site 171%

The Benefits of Combining Content Marketing and Segmentation

MECLABS Landing Page Optimization online certification course


Get Your Free Test Discovery Tool to Help Log all the Results and Discoveries from Your Company’s Marketing Tests

Come budget time, do you have an easy way to show all the results from your testing? Not just conversion lifts, but the golden intel that senior business leaders crave — key insights into customer behavior.

To help you do that, we’ve created the free MECLABS Institute Test Discovery Tool, so you can build a custom discovery library for your organization. This simple tool is an easy way of helping your company create a repository of discoveries from its behavioral testing with customers and showing business leaders all the results of your testing efforts. Just click the link below to get yours.

 

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

 

In addition to enabling you to show comprehensive test results to business leaders, a custom test discovery library for your brand helps improve your organization’s overall performance. You probably have an amazing amount of institutional knowledge stuck in your cranium. From previous campaigns and tests, you have a good sense of what will work with your customers and what will not. You probably use this info to inform future tests and campaigns, measure what works and build your knowledge base even more.

But to create a truly successful organization, you have to get that wisdom out of your head and make sure everyone in your marketing department and at your agencies has access to that valuable intel. Plus, you want the ability to learn from everyone in your organization as well.

 

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

 

This tool was created to help a MECLABS Research Partner keep track of all the lessons learned from its tests.

“The goal of building this summary spreadsheet was to create a functional and precise approach to document a comprehensive summary of results. The template allows marketers to form a holistic understanding of their test outcomes in an easily digestible format, which is helpful when sharing and building upon future testing strategy within your organization. The fields within the template are key components that all testing summaries should possess to clearly understand what the test was measuring and impacting, and the validity of the results,” said Delaney Dempsey, Data Scientist, MECLABS Institute.

“Basically, the combination of these fields provides a clear understanding of what worked and what did not work. Overall, the biggest takeaway for marketers is that having an effective approach to documenting your results is an important element in creation of your customer theory and impactful marketing strategies. Ultimately, past test results are the root of our testing discovery about our customers,” she explained.

 

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

 

Here is a quick overview for filling out the fields in this tool (we’ve also included this info in the tool) …


How to use this tool to organize your company’s customer discoveries from real-world behavioral tests

For a deeper exploration of testing, and to learn where to test, what to test and how to turn basic testing data into customer wisdom, you can take the MECLABS Institute Online Testing on-demand certification course.

Test Dashboard: This provides an overview of your tests. The info automatically pulls from the information you input for each individual test on the other sheets in this Excel document. You may decide to color code each test stream (say blue for email, green for landing pages, etc.) to more easily read the dashboard. (For instructions on adding more rows to the Test Dashboard, and thus more test worksheets to the Excel tool, scroll down to the “Adding More Tests” section.)

Your Test Name Here: Create a name for each test you run. (To add more tabs to run more tests, scroll down to the “Adding More Tests” section.)

Test Stream: Group tests in a way that makes the most sense for your organization. Some examples might be the main site, microsite, landing pages, homepage, email, specific email lists, PPC ads, social media ads and so on.

Test Location: Where in your test stream did this specific test occur? For example, if the Test Stream was your main site, the Test Location may have been on product pages, a shopping page or on the homepage. If one of your testing streams is Landing Pages, the test location may have been a Facebook landing page for a specific product.

Test Tracking Number: To organize your tests, it can help to assign each test a unique tracking number. For example, every test MECLABS Institute conducts for a company has a Test Protocol Number.

Timeframe Run: Enter the dates the test ran and the number of days it ran. MECLABS recommends you run your tests for at least a week, even if it reaches a statistically significant sample size, to help reduce the chances of a validity threat known as History Effect.

Hypothesis: The reason to run a test is to prove or disprove a hypothesis.

Do you know how you can best serve your customer to improve results? What knowledge gaps do you have about your customer? What internal debates do you have about the customer? What have you debated with your agency or vendor partner? Settle those debates and fill those knowledge gaps by crafting a hypothesis and running a test to measure real-world customer behavior.

Here is the approach MECLABS uses to formulate a hypothesis, with an example filled in …

# of Treatments: This is the number of versions you are testing. For example, if you had Landing Page A and Landing Page B, that would be two treatments. The more treatments you test in one experiment, the more samples you need to avoid a Sampling Distortion Effect validity threat, which can occur when you do not collect a sufficient number of observations.

Valid/Not Valid: A valid test measures what it claims to measure. Valid tests are well-founded and correspond accurately to the real world. Results of a valid test can be trusted to be accurate and to represent real-world conditions. Invalid tests fail to measure what they claim to measure and cannot be trusted as being representative of real-world conditions.

Conclusive/Inconclusive: A Conclusive Test is a valid test that has reached the desired Level of Confidence (95% is the most commonly used standard). An Inconclusive Test is a valid test that failed to reach the desired Level of Confidence for the primary KPI (95% is the most commonly used standard). Inconclusive tests, while not the marketer’s goal, are not innately bad. They offer insights into the cognitive psychology of the customer. They help marketers discover which mental levers do not have a significant impact on the decision process.
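If you want to see where a Level of Confidence number comes from, here is a minimal Python sketch using a standard two-proportion z-test (the function name and the conversion counts are our own illustration, not part of the Excel tool):

```python
from math import sqrt
from statistics import NormalDist

def level_of_confidence(conv_a, n_a, conv_b, n_b):
    """Two-sided Level of Confidence that two conversion rates differ,
    computed with a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled success proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under "no difference"
    z = (p_b - p_a) / se
    return 2 * NormalDist().cdf(abs(z)) - 1                 # e.g., 0.95 means 95% LoC

# Hypothetical test: 10,000 visitors per arm, 500 vs. 560 conversions
print(f"{level_of_confidence(500, 10_000, 560, 10_000):.1%}")
```

With these made-up numbers, the result lands just under 95%, so the test would be recorded as Inconclusive even though the treatment’s observed rate is higher.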

KPIs — MAIN, SECONDARY, TERTIARY

Name: KPIs are key performance indicators. They are the yardstick for measuring your test. The main KPI is what ultimately determines how well your test performed, but secondary and tertiary KPIs can be insightful as well. For example, the main KPI for a product page test might be the add-to-cart rate. That is the main action you are trying to influence with your test treatment(s). A secondary KPI might be a change in revenue. Perhaps you get fewer orders, but at a higher value per order, and thus more revenue. A tertiary KPI might be checkout rate, tracking how many people complete the action all the way through the funnel. There may be later steps in the funnel that are affecting that checkout rate beyond what you’re testing, which is why it is not the main KPI of the test but still important to understand. (Please note, not every test needs a secondary and tertiary KPI, but every test should at least have a main KPI.)

Key Discoveries: This is the main benefit of running tests — to make new discoveries about customer behavior. This Test Discovery Library gives you a central, easily accessible place to share those discoveries with the entire company. For example, you could upload this document to an internal SharePoint or intranet, or even email it around every time a test is complete.

The hypothesis will heavily inform the key discoveries section, but you may also learn something you weren’t expecting, especially from secondary KPIs.

What did the test results tell you about the perceived credibility of your product and brand? The level of brand exposure customers have previously had? Customers’ propensity to buy or become a lead? The difference in the behavior of new and returning visitors to your website? The preference for different communication mechanisms (e.g., live chat vs. video chat)? Behavior on different devices (e.g., desktop vs. mobile)? These are just examples; the list could go on forever … and you likely have some that are unique to your organization.

Experience Implemented? This is pretty straightforward. Has the experience that was tested been implemented as the new landing page, home page, etc., after the test closed?

Date of implementation: If the experience has been implemented, when was it implemented? Recording this information can help you go back and make sure overall performance correlated with your expectations from the test results.

ADDING MORE TESTS TO THE TOOL

The Test Dashboard tab dynamically pulls in all information from the subsequent test worksheets, so you do not need to manually enter any data here except for the test sequence number in Column A. If you want to create a new test tab and the corresponding row in the “Test Dashboard,” follow these instructions:

    • Right-click on the bottom tab titled “Template – Your Test Name Here.” Choose “Move or Copy.” From the list of sheets, choose “Template – Your Test Name Here.” Check the box “Create a Copy” and click OK. Right-click on your new “Template – Your Test Name Here (2)” tab and rename it “Your Test Name Here (7).”
    • Now, you’ll need to add a new row to your “Test Dashboard” tab. Copy the last row. For example, select row 8 on the “Test Dashboard” tab, copy/paste those contents into row 9. You will need to make the following edits to reference your new tab, “Your Test Name Here (7).” This can be done in the following way:
      • Manually enter the test sequence number “7” in cell A9.
      • The remaining cells dynamically pull the data in. However, since you copied and pasted, they are still referencing the test above. To update this, highlight row 9 again. On the Home tab, in the Editing group, select “Find & Select” (located on the far right), then “Replace,” or use CTRL+F and switch to the Replace tab.
      • In the Replace dialog, enter Find what: “Your Test Name Here (6)” and Replace with: “Your Test Name Here (7).”
      • Click “Replace All.”
      • All cells in the row should now properly reference your new tab, “Your Test Name Here (7).” (If you prefer to script these steps, see the sketch below.)
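If you maintain the library programmatically rather than by hand, the same steps can be scripted. Here is a minimal sketch using the openpyxl library; the file name is an assumption, and the tab names should be adjusted to match your workbook exactly:

```python
from openpyxl import load_workbook

wb = load_workbook("test_discovery_tool.xlsx")  # assumed file name

# Copy the template sheet and rename it for the new test
template = wb["Template - Your Test Name Here"]  # adjust to your exact tab name
new_sheet = wb.copy_worksheet(template)
new_sheet.title = "Your Test Name Here (7)"

# Append a dashboard row: sequence number in column A, and formulas
# copied from the row above, re-pointed at the new tab
dash = wb["Test Dashboard"]
row = dash.max_row + 1
dash.cell(row=row, column=1, value=7)
for col in range(2, dash.max_column + 1):
    source = dash.cell(row=row - 1, column=col).value  # formula string from the row above
    if isinstance(source, str):
        dash.cell(row=row, column=col,
                  value=source.replace("Your Test Name Here (6)",
                                       "Your Test Name Here (7)"))

wb.save("test_discovery_tool.xlsx")
```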

 

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

 

Special thanks to Research Manager Alissa Shaw, Data Scientist Delaney Dempsey, Associate Director of Design Lauren Leonard, Senior Director of Research Partnerships Austin McCraw, and Copy Editor Linda Johnson for helping to create the Test Discovery Library tool.

If you have any questions, you can email us at info@MECLABS.com. And here are some more resources to help with your testing …

Lead your team to breakthrough results with A Model of your Customer’s Mind: These 21 charts and tools have helped capture more than $500 million in (carefully measured) test wins

Test Planning Scenario Tool – This simple tool helps you visualize factors that affect the ROI implications of test sequencing

Customer Theory: How we learned from a previous test to drive a 40% increase in CTR


Value Proposition: How to find the best expression of your value

The value proposition “why” — why should customers choose your product — can be answered in 100 different ways. But how do you determine the most effective answer?

Often, it is determined in a conference room with a rigorous debate amongst leaders and experts. But experts do not have the answers to this question — only your customers do. So, we turned to the customer to answer this question by conducting an experiment with a global news distributor.

EXPERIMENT

In the five-minute video below, Flint McGlaughlin explains how determining the best expression of value generated a 22% increase in conversions, and an important learning.

 Let’s take a closer look at the experiments featured in this video …

THE CONTROL

This global news distributor came to the research team at MECLABS Institute, the parent company of MarketingExperiments, with the goal of determining which element of its value proposition was most appealing to its customers. So, they developed three different articulations of their core offer, using the homepage to test which one would have the most impact on conversions.

TREATMENTS


 

Treatment 1 tested a hypothesis that the group’s authority was the most appealing element of their offer, using phrases like “For almost 60 years,” “inventing the industry” and “most authoritative source of news.” Each of these points serves to foster a conclusion in the mind of the customer about the authority of this organization.

Treatment 2 tested a different hypothesis: that the group’s comprehensive network was more appealing than any other element. This hypothesis was supported with phrases like “over 200,000 media outlets,” “hundreds of thousands of journalists,” “170 different countries” and “most comprehensive media network in the world.”

Finally, Treatment 3 argued that the group’s superior customer service was the most important element to its customers. Like the other treatments, this version used key supporting phrases like “exceptional customer service,” “work personally … one-on-one,” “200,000 errors caught each year” and “available 24 hours a day, 365 days a year.”

RESULTS

In the end, the treatment focused on the organization’s authority outperformed all other treatments, generating 26% more conversions. While the conversion lift itself was impactful for this organization, the approach to achieving it is what provided the most valuable learning.

The team not only determined the best articulation of their value proposition; they also learned that clearly displaying the right value proposition articulation can maximize the force of your offer. And to uncover what is right for YOUR customers, marketers must engage in a mental dialogue. People don’t want to be talked at; they want to be communicated with. The marketer asks questions with their message, and the customer answers with their behavioral data.

RELATED RESOURCES

Learn how the MECLABS methodology can transform your business results

6 Good (and 2 Bad) B2B and B2C Value Proposition Examples

Value Force: How to win on value proposition and not just price

Form Optimization: The importance of communicating value before making the “ask”


A/B Testing: Why do different sample size calculators and testing platforms produce different estimates of statistical significance?

A/B testing is a powerful way to increase conversion (e.g., 638% more leads, 78% more conversion on a product page, etc.).

Its strength lies in its predictive ability. When you implement the alternate version suggested by the test, your conversion funnel actually performs the way the test indicated that it would.

To help determine that, you want to ensure you’re running valid tests. And before you decide to implement related changes, you want to ensure your test is conclusive and not just a result of random chance. One important element of a conclusive test is that the results show a statistically significant difference between the control and the treatment.

Many platforms will include something like a “statistical significance status” with your results to help you determine this. There are also several sample size calculators available online, and different calculators may suggest you need different sample sizes for your test.

But what do those numbers really mean? We’ll explore that topic in this MarketingExperiments article.

A word of caution for marketing and advertising creatives: This article includes several paragraphs that talk about statistics in a mathy way — and even contains a mathematical equation (in case these may pose a trigger risk for you). Even so, we’ve done our best to use them only where they serve to clarify rather than complicate.

Why does statistical significance matter?

To set the stage for talking about sample size and statistical significance, it’s worth mentioning a few words about the nature and purpose of testing (aka inferential experimentation) and the nomenclature we’ll use.

We test in order to infer some important characteristics about a whole population by observing a small subset of members from the population called a “Sample.”

MECLABS metatheory dubs a test that successfully accomplishes this purpose a “Useful” test.

The Usefulness (predictiveness) of a test is affected by two key features: “Validity” and “Conclusiveness.”

Statistical significance is one factor that helps to determine if a test is useful. A useful test is one that can be trusted to accurately reflect how the “system” will perform under real-world conditions.

Having an insufficient sample size presents a validity threat known as Sampling Distortion Effect. This is a danger because if you don’t get a large enough sample size, any apparent performance differences may have been due to random variation and not true insights into your customers’ behavior. This could give you false confidence that a landing page change that you tested will improve your results if you implement it, when it actually won’t.

“Seemingly unlikely things DO sometimes happen, purely ‘by coincidence’ (aka due to random variation). Statistical methods help us to distinguish between valuable insights and worthless superstitions,” said Bob Kemper, Executive Director, Infrastructure Support Services at MECLABS Institute.

“By our very nature, humans are instinctively programmed to seek out and recognize patterns: think ‘Hmm, did you notice that the last five people who ate those purplish berries down by the river died the next day?’” he said.

A conclusive test is a valid test that has reached a desired Level of Confidence, or LoC (95% is the most commonly used standard). (There are other validity threats in addition to the sampling distortion effect.)

In practice, at 95% LoC, the 95% confidence interval for the difference between control and treatment rates of the key performance indicator (KPI) does not include zero.

A simple way to think of this is that a conclusive test means you are 95% confident the treatment will perform at least as well as the control on the primary KPI. So the performance you’ll actually get, once it’s in production for all traffic, will be somewhere inside the Confidence Interval. Determining level of confidence requires some math.
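To make that concrete, here is a minimal Python sketch (with illustrative numbers, not data from an actual MECLABS test) that computes the 95% confidence interval for the difference between treatment and control conversion rates and checks whether it includes zero:

```python
from math import sqrt
from statistics import NormalDist

def diff_confidence_interval(conv_c, n_c, conv_t, n_t, loc=0.95):
    """Confidence interval for (treatment rate - control rate).
    The test is conclusive at the given LoC if the interval excludes zero."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    se = sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)  # unpooled standard error
    z = NormalDist().inv_cdf(0.5 + loc / 2)                   # about 1.96 for 95%
    diff = p_t - p_c
    return diff - z * se, diff + z * se

# Hypothetical test: control 500/10,000 (5.0%), treatment 600/10,000 (6.0%)
low, high = diff_confidence_interval(500, 10_000, 600, 10_000)
print(f"95% CI for the difference: [{low:.4f}, {high:.4f}]")
print("Conclusive" if low > 0 or high < 0 else "Inconclusive")
```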

Why do different testing platforms and related tools offer such disparate estimates of required sample size? 

One of MECLABS Institute’s Research Partners who is president of an internet company recently asked our analysts about this topic. His team found a sample size calculator tool online from a reputable company and noticed how different its estimate of minimum sample size was compared to the internal tool MECLABS analysts use when working with Research Partners (MECLABS is the parent research organization of MarketingExperiments).

The simple answer is that the two tools approach the estimation problem using different assumptions and statistical models, much the way there are several competing models for predicting the path of hurricanes and tropical storms.

Living in Jacksonville, Florida, an area that is often under hurricane threats, I can tell you there’s been much debate over which among the several competing models is most accurate (and now there’s even a newer, Next Gen model). Similarly, there is debate in the optimization testing world about which statistical models are best.

The goal of this article isn’t to take sides, just to give you a closer look at why different tools produce different estimates. Not because the math is “wrong” in any of them; they simply employ different approaches.

“While the underlying philosophies supporting each differ, and they approach empirical inference in subtly different ways, both can be used profitably in marketing experimentation,” said Danitza Dragovic, Digital Optimization Specialist at MECLABS Institute.

In this case, in seeking to understand the business implications of test duration and confidence in results, it was understandably confusing for our Research Partner to see different sample size calculations depending on the tool used. It wasn’t clear that a pre-determined sample size is fundamental to testing in some calculations, while other platforms ultimately determine test results irrespective of pre-determined sample sizes (using prior probabilities assigned by the platform) and provide sample size calculators simply as planning tools.

Let’s take a closer look at each …

Classical statistics 

The MECLABS Test Protocol employs a group of statistical methods based on the “Z-test,” arising from “classical statistics” principles that adopt a Frequentist approach, which makes predictions using only data from the current experiment.

With this method, recent traffic and performance levels are used to compute a single fixed minimum sample size before launching the test.  Status checks are made to detect any potential test setup or instrumentation problems, but LoC (level of confidence) is not computed until the test has reached the pre-established minimum sample size.

While it has historically been the most commonly used approach for scientific and academic experimental research over the last century, this classical approach is now being met by theoretical and practical competition from tools that use (or incorporate) a different statistical school of thought based upon the principles of Bayesian probability theory. Though Bayesian theory is far from new (Thomas Bayes proposed its foundations more than 250 years ago), its practical application for real-time optimization research required computational speed and capacity only recently available.

Breaking Tradition: Toward optimization breakthroughs

“Among the criticisms of the traditional frequentist approach has been its counterintuitive ‘negative inference’ approach and thought process, accompanied by a correspondingly ‘backwards’ nomenclature. For instance, you don’t ‘prove your hypothesis’ (like normal people), but instead you ‘fail to reject your Null hypothesis’ — I mean, who talks (or thinks) like that?” Kemper said.

He continued, “While Bayesian probability is not without its own weird lexical contrivances (Can you say ‘posterior predictive’?), its inferential frame of reference is more consistent with the way most people naturally think, like assigning the ’probability of a hypothesis being True’ based on your past experience with such things. For a purist Frequentist, it’s impolite (indeed sacrilegious) to go into a test with a preconceived ‘favorite’ or ‘preferred answer.’ One must simply objectively conduct the test and ‘see what the data says.’ As a consequence, the statement of the findings from a typical Bayesian test — i.e., a Bayesian inference — is much more satisfying to a non-specialist in science or statistics than is an equivalent traditional/frequentist one.”

Hybrid approaches

Some platforms use a sequential likelihood ratio test that combines a Frequentist approach with a Bayesian approach. The adjective “sequential” refers to the approach’s continual recalculation of the minimum sample size for sufficiency as new data arrives, with the goal of minimizing the likelihood of a false positive arising from stopping data collection too soon.

Although an online test estimator using this method may give a rough sample size, this method was specifically designed to avoid having to rely on a predetermined sample size, or predetermined minimum effect size. Instead, the test is monitored, and the tool indicates at what point you can be confident in the results.

In many cases, this approach may result in shorter tests due to unexpectedly high effect sizes. But when tools employ proprietary methodologies, the way that minimum sample size is ultimately determined may be opaque to the marketer.
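For a flavor of the Bayesian side, here is a generic Beta-Binomial sketch. It is a common textbook formulation under an assumed uniform prior, not the proprietary method of any particular platform, and the numbers are hypothetical:

```python
import random

def prob_treatment_beats_control(conv_c, n_c, conv_t, n_t,
                                 draws=100_000, seed=42):
    """Monte Carlo estimate of P(treatment rate > control rate),
    assuming independent Beta(1, 1) (uniform) priors on each rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # The posterior for each arm is Beta(successes + 1, failures + 1)
        rate_c = rng.betavariate(conv_c + 1, n_c - conv_c + 1)
        rate_t = rng.betavariate(conv_t + 1, n_t - conv_t + 1)
        wins += rate_t > rate_c
    return wins / draws

# Hypothetical test: control 500/10,000, treatment 560/10,000
print(prob_treatment_beats_control(500, 10_000, 560, 10_000))
```

Note how the choice of prior (here Beta(1, 1), meaning “no prior opinion”) is baked into the calculation. As discussed below, that is precisely the input that can be opaque in proprietary tools.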

CONSIDERATIONS FOR EACH OF THESE APPROACHES

Classical “static” approaches

Classical statistical tests, such as Z-tests, are the de facto standard across a broad spectrum of industries and disciplines, including academia. They arise from the concepts of normal distribution (think bell curve) and probability theory described by mathematicians Abraham de Moivre and Carl Friedrich Gauss in the 18th and 19th centuries. (Normal distribution is also known as Gaussian distribution.) Z-tests are commonly used in medical and social science research.

They require you to estimate the minimum detectable effect-size before launching the test and then refrain from “peeking at” Level of Confidence until the corresponding minimum sample size is reached.  For example, the MECLABS Sample Size Estimation Tool used with Research Partners requires that our analysts make pre-test estimates of:

  • The projected success rate — for example, conversion rate, clickthrough rate (CTR), etc.
  • The minimum relative difference you wish to detect — how big a difference is needed to make the test worth conducting? The greater this “effect size,” the fewer samples are needed to confidently assert that there is, in fact, an actual difference between the treatments. Of course, the smaller the design’s “minimum detectable difference,” the harder it is to achieve that threshold.
  • The statistical significance level — this is the probability of accidentally concluding there is a difference due to sampling error when really there is no difference (aka Type-I error). MECLABS recommends a five percent statistical significance level, which equates to a 95% desired Level of Confidence (LoC).
  • The arrival rate in terms of total arrivals per day — this would be your total estimated traffic level if you’re testing landing pages. “For example, if the element being tested is a page in your ecommerce lower funnel (shopping cart), then the ‘arrival rate’ would be the total number of visitors who click the ‘My Cart’ or ‘Buy Now’ button, entering the shopping cart section of the sales funnel and who will experience either the control or an experimental treatment of your test,” Kemper said.
  • The number of primary treatments — for example, this would be two if you’re running an A/B test with a control and one experimental treatment.

Typically, analysts draw upon a forensic data analysis conducted at the outset combined with test results measured throughout the Research Partnership to arrive at these inputs.

“Dynamic” approaches 

Dynamic, or “adaptive” sampling approaches, such as the sequential likelihood ratio test, are a more recent development and tend to incorporate methods beyond those recognized by classical statistics.

In part, these methods weren’t introduced sooner due to technical limitations. Because adaptive sampling methods employ frequent computational reassessment of sample size sufficiency and may even adjust the balance of incoming traffic among treatments, they were impractical until they could be hosted on machines with the computing capacity to keep up.

One potential benefit can be the test duration. “Under certain circumstances (for example, when actual treatment performance is very different from test-design assumptions), tests may be able to be significantly foreshortened, especially when actual treatment effects are very large,” Kemper said.

This is where prior data is so important to this approach. The model can shorten test duration specifically because it takes prior data into account. An attendant limitation is that it can be difficult to identify what prior data is used and exactly how statistical significance is calculated. This doesn’t necessarily make the math any less sound or valid; it just makes it somewhat less transparent. And the quality/applicability of the priors can be critical to the accuracy of the outcome.

As Georgi Z. Georgiev explains in Issues with Current Bayesian Approaches to A/B Testing in Conversion Rate Optimization, “An end user would be left to wonder: what prior exactly is used in the calculations? Does it concentrate probability mass around a certain point? How informative exactly is it and what weight does it have over the observed data from a particular test? How robust with regards to the data and the resulting posterior is it? Without answers to these and other questions an end user might have a hard time interpreting results.”

As with other things unique to a specific platform, it also impinges on the portability of the data, as Georgiev explains:

A practitioner who wants to do that [compare results of different tests run on different platforms] will find himself in a situation where it cannot really be done, since a test ran on one platform and ended with a given value of a statistic of interest cannot be compared to another test with the same value of a statistic of interest ran on another platform, due to the different priors involved. This makes sharing of knowledge between practitioners of such platforms significantly more difficult, if not impossible since the priors might not be known to the user.

Interpreting MECLABS (classical approach) test duration estimates 

At MECLABS, the estimated minimum required sample size for most experiments conducted with Research Partners is calculated using classical statistics. For example, the formula for computing the number of samples needed for two proportions that are evenly split (uneven splits use a different and slightly more complicated formula) is provided by:

δ = z · √( 2p(1 − p) / n )

Solving for n yields:

n = 2z²p(1 − p) / δ²

Variables:

  • n: the minimum number of samples required per treatment
  • z: the Z statistic value corresponding with the desired Level of Confidence
  • p: the pooled success proportion, a value between 0 and 1 (i.e., of clicks, conversions, etc.)
  • δ: the difference of success proportions among the treatments

This formula is used for tests that have an even split among treatments.

Once “samples per treatment” (n) has been calculated, it is multiplied by the number of primary treatments being tested to estimate the minimum number of total samples required to detect the specified amount of “treatment effect” (performance lift) with at least the specified Level of Confidence, presuming the selection of test subjects is random.

The estimated test duration, typically expressed in days, is then calculated by dividing the required total sample size by the expected average traffic level, expressed as visitors per day arriving at the test.
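Here is that calculation as a minimal Python sketch. It is our own illustration of the formula above, not the MECLABS Sample Size Estimation Tool itself, and the baseline rate, minimum detectable lift and traffic level are assumptions for the example:

```python
from math import ceil
from statistics import NormalDist

def min_samples_per_treatment(p, delta, loc=0.95):
    """n = 2 * z^2 * p * (1 - p) / delta^2 for two evenly split treatments."""
    z = NormalDist().inv_cdf(0.5 + loc / 2)  # z statistic for the desired LoC
    return ceil(2 * z**2 * p * (1 - p) / delta**2)

# Assumed inputs: 5% baseline conversion rate, 10% minimum relative lift,
# two treatments (control plus one experimental), 4,000 arrivals per day
p = 0.05
delta = p * 0.10                 # minimum detectable absolute difference
n = min_samples_per_treatment(p, delta)
total = n * 2                    # total samples across both treatments
print(f"{n:,} samples per treatment; {total:,} total")
print(f"Estimated test duration: {total / 4_000:.1f} days")
```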

Finding your way 

“As a marketer using experimentation to optimize your organization’s sales performance, you will find your own style and your own way to your destination,” Kemper said.

“Like travel, the path you choose depends on a variety of factors, including your skills, your priorities and your budget. Getting over the mountains, you might choose to climb, bike, drive or fly; and there are products and service providers who can assist you with each,” he advised.

Understanding sampling method and minimum required sample size will help you to choose the best path for your organization. This article is intended to provide a starting point. Take a look at the links to related articles below for further research on sample sizes in particular and testing in general.

Related Resources

Lead your team to breakthrough results with A Model of your Customer’s Mind – These 21 charts and tools have helped capture more than $500 million in (carefully measured) test wins

MECLABS Institute Online Testing on-demand certification course

Marketing Optimization: How To Determine The Proper Sample Size

A/B Testing: Working With A Very Small Sample Size Is Difficult, But Not Impossible

A/B Testing: Split Tests Are Meaningless Without The Proper Sample Size

Two Factors that Affect the Validity of Your Test Estimation

Frequentist A/B test (good basic overview by Ethen Liu)

Bayesian vs Frequentist A/B Testing – What’s the Difference? (by Alex Birkett on ConversionXL)

Thinking about A/B Testing for Your Client? Read This First. (by Emīls Vēveris on Shopify)

On the scalability of statistical procedures: why the p-value bashers just don’t get it. (by Jeff Leek on SimplyStats)

Bayesian vs Frequentist Statistics (by Leonid Pekelis on Optimizely Blog)

Statistics for the Internet Age: The Story Behind Optimizely’s New Stats Engine (by Leonid Pekelis on Optimizely Blog)

Issues with Current Bayesian Approaches to A/B Testing in Conversion Rate Optimization (by Georgi Z. Georgiev on Analytics-Toolkit.com)

 


Lead Nurturing Tested: How slight script tweaks increased response by 31%

The following research was first published in the MECLABS Quarterly Research Digest, July 2014.

When it comes to selling, marketers and salespeople seem to have the subtle, yet overwhelming, desire to put the proverbial cart before the horse. It’s easy enough to see why: The sale is the goal. It is the macro-yes we are all searching for. If a lead is generated but fails to close, it’s good for nothing but taking up space in the email database.

Yet, as any married person can tell you, a fruitful relationship rarely starts by seeking the macro-yes first. “Will you marry me?” may be the ultimate “ask” you want to make, but the first “ask” is likely something tamer, such as, “Would you like to go out sometime?”

It is the same with marketing. Some “match made in heaven” leads are ready to close right away, but most leads require some nurturing before coming to a buying decision. What is the most effective way to nurture leads? Are there concrete principles we can rely on to improve our nurturing processes? These are questions that need answers.

Experiment: Which voicemail script will generate the most responses?

Our Research Partner for this experiment is a well-known insurance provider. One aspect of its sales process is connecting with businesses over the phone to see if it can become the provider of the business’ employee coverage. When a phone call is not answered, the phone representatives use a voicemail script to ensure the company consistently puts itself in a position to have the call returned.

The company wanted us to test its voicemail script against a different script to see if we could produce a lift in callbacks. Those making the calls clearly favored the control, because they believed it produced the best response.

Here is the control script:

Hello, ___, my name is Lisa and I am calling with [insurance company]. We are currently the fifth-largest life insurance carrier in the nation offering competitive rates and solutions to help ease administration burdens. When we last spoke, you told me that you work with a broker for your price quotes for the group life benefits. I would like to get your broker contact information in order to be in consideration when they next do their evaluations for you.

Here is the treatment script:

Hello, ___, my name is Lisa and I am calling with [insurance company]. When we last spoke, you told me that you work with a broker for your price quotes for the group life benefits. Since we do not nationally advertise and may not have had the opportunity to work with your consultant, we would like to share our information with them. I would like to get your broker contact information in order to be in consideration when they next do their evaluations for you. 

As you can see, the changes in the script are not massive. In most cases, we simply juggled the order of the sentences. Did these changes have an impact? Yes, the treatment script produced a 31% increase in callbacks.

What was it about the treatment script that nurtured the lead more effectively than the control script? We have arrived at two lead nurturing key principles that should help clarify the situation.

Key Principle #1. Lead nurturing is a process, not an event. The necessary timing allows for the necessary forming — the forming of the final conclusion.

Only 36% of marketers nurture leads, according to a 2012 MarketingSherpa survey. That means when 64% of companies acquire a lead, they send it straight to Sales. If that’s what you are doing, you are treating lead nurturing as an event that, once completed, qualifies the lead for closing. This is not how lead nurturing works. It is a process that builds upon itself, always moving the lead toward fostering a conclusion about your product: I need to buy this product.

Why does lead nurturing matter? Is it really a big deal if we do not nurture leads? Figure 1.2 explains that, yes, it is a big deal. You may be sacrificing a potential 45% lift in overall return on investment if you fail to nurture leads.

Figure 1.2

Key Principle #2. The “final conclusion” is different from the macro-yes, for the conclusion must precede a macro-yes. In the nurturing process, the marketing team fosters a conclusion, and the sales team converts it to a “yes.”

When we talk about a “macro-yes,” we are dealing with the ultimate goal of the sales process: the sale. This is when actual money is exchanged for your product or service. This is not the same event as the final conclusion. The final conclusion is “I want/need this product.” A great deal of distance still separates the conclusion and the actual sale. That is the job of the sales team.

A marketer’s job is not done when a prospect hears about a product. Our job is not finished when a prospect shows interest in a product. It is not even done when they actively seek out more information about it. A marketer’s job is only done when prospects decide to buy the product. Marketers foster conclusions. Salespeople turn those conclusions into sales.

If you follow MECLABS, you may be familiar with the imagery of the inverted funnel. The idea is that the “funnel” is a misleading symbol for the sales process. Gravity is not your friend. If people “fall” anywhere, it is out of your funnel, not into it. Reaching the macro-yes is more like climbing a mountain than slipping through a funnel. Figure 1.3 gives you an approximation of where the final conclusion in the process is and illustrates that many micro-yes(s) must be achieved to reach that goal.

Figure 1.3

In the lead nurturing voicemail experiment, we pulled three specific levers to help the lead foster a conclusion.

Lever #1. We anchored the message to the context

We use context to navigate the world. Context is how we know whether a joke is appropriate or if a friend needs our condolences. Without context, we cannot make sense of the world, and it is the same with marketing. The first script from the experiment begins selling right away without any context for the call, which may have caused listeners to tune out or delete the message before ever reaching the context buried toward the end of the script. We moved the context (“Last time we spoke …”) to the beginning of the treatment script, which helped produce the 31% lift.

In Test Protocol 2083 in the MECLABS Research Library, we helped an event management software company optimize an email that went to leads who had abandoned the shopping cart. The original nurture email provided little to no context for the message — it moved directly to selling:

You’re just one step away from getting FREE access to [company], our award-winning Event Registration and Management Software. Quickly make an event website, try our event marketing tools, build a registration form template or even generate custom name badges.

Our treatment introduced context to the mix before moving into selling:

I noticed that you started the process of getting free access to [company] but weren’t able to finish. Are you concerned about giving out your phone number? Are you worried about high pressure sales tactics or mandatory contracts? We believe our product sells itself, so we’re just here to provide you with whatever assistance you need in getting your event up and running — in whatever way works best for you. We promise NEVER to sell or misuse your information.

By providing the context for the email and adapting the tone to match that context, we produced a 349% increase in conversion.

Lever #2. We connected the value proposition to the prospect

Your primary value proposition is how you answer this question: If I am your ideal prospect, why should I buy from you rather than any of your competitors? However, based on context, your value proposition should have different variations, and finding the correct one is essential to maximizing your conversion rate. Keep in mind the three main variations:

  • Prospect-level: If I am [PROSPECT A], why should I buy from you rather than any of your competitors?
  • Product-level: If I am [PROSPECT A], why should I buy this product rather than any other product?
  • Process-level: If I am [PROSPECT A], why should I choose this PPC ad over any other PPC ad?

In lead nurturing, it is crucial to connect your value proposition to the specific prospect. In the case of the voicemail experiment, that was the person answering the phone. Our research revealed these people had high anxiety when they felt they were being “sold,” and they also had no interest in learning about an insurance company. They simply wanted to pass us on to their insurance broker. When we made that action the main component of the voicemail, callbacks increased 31%.

In Test Protocol 1483 located in the MECLABS Research Library, we worked with a physician-only social network to improve one of its landing pages. The control page focused on the product-level value proposition of a single report.

Figure 2.1

Figure 2.2 — Control

When we redesigned the page to focus on the specific needs of the prospect, we added many other reports the prospect might be interested in reading. The result was a 197% increase in lead generation.

 

Figure 2.3 — Treatment

Lever #3. We aligned the argument to the “ask”

People’s minds arrange arguments into the form of a story. If you present an argument that deviates from traditional story form, you will likely confuse or bore your prospect.

The four components of the control voicemail looked like this:

  • Greeting
  • Build company value
  • Give a reason for the call
  • Make the “ask”

This is out of order. Logically, we expect an argument to follow this pattern:

  • Greeting
  • Reason for call
  • Build company value
  • Make the “ask”

This is exactly the pattern we followed with the treatment message, which led to a 31% increase in callbacks.

Figure 3.2

Similarly, in an email experiment for the same physician-only social media site mentioned previously, we noticed the control email seemed to be out of step with where it was actually positioned in the sales story. The email attempts to build value and get the prospect to click on a “Get Started” button. However, it conflates its objective with the objective of the landing page. The objective of the email is simply to get a click.

In our treatment, we dialed back the ambition of the email by simply providing value and a soft call-to-action to “See How [Product] Works.” The result of matching the email’s argument to the appropriate “ask” was a 104% increase in lead generation.

Figure 3.3 and Figure 3.4

Improve Your Lead Nurturing with This Checklist

We have established that 64% of companies do not engage in any lead nurturing to speak of. We also determined that those who do engage often rely on unsound principles that can be remedied with simple tweaks.

To create lead nurturing materials that form a process, and are also optimized to foster a final conclusion, there are three levers you must pull. We have created this simple checklist to ensure your lead nurturing is on track.

Lever #1. Anchor the message to the context

□ Is the message clear to your prospects?

□ Have you justified the reason for the message?

Lever #2. Connect the value proposition to the prospect

□ Is the message relevant to the prospect?

□ Does the message appeal to your prospect?

Lever #3. Align the argument to the “ask”

□ Is there a clear and logical argument in your material?

□ Does the “ask” logically flow from your argument?

 

Related Resources

Learn how to join MECLABS in its search for finding what works in marketing by applying for a Research Partnership

Download a free excerpt of the 2012 Lead Generation Benchmark Report

Learn more about building up to the ultimate yes on the MarketingExperiments Blog using the inverted funnel

Discover the methodology MECLABS uses to conduct its experiments

Find more experiments from the MECLABS Laboratory in the MECLABS Research Catalog


The Trust Trial: Could you sell an iChicken?

Would you buy an Apple iChicken? Our CEO, Flint McGlaughlin, often jokes that “If Apple released an iChicken, people would be lined up and down the streets to buy it.”

But why?

At some point or another, we’ve all bought a product because of the brand name. But why do we prefer name brand cereal over the store brand? Why are Yeezys so much cooler than the $20 knock-offs on Amazon? Most of the time, it’s because we expect a certain kind of experience from the brands we trust. Cute logos and catchy slogans cannot build a brand powerful enough to sell the world an iChicken. The only way to build an effective brand is to earn your customers’ trust.

Trust, however, is not a static element; it is constantly changing. Every interaction with your value proposition impacts your customers’ trust, and in turn, your brand. Consider, for instance, what might happen after people bought the iChicken. Assuming the metaphorical product is as useless as it sounds, customers’ expectations of Apple would likely be damaged. Next time a new product is released, customers might think twice before jumping in line. Apple may have spent years building trust, but if a brand fails to meet their customers’ expectations, that trust is diminished.

This raises a more important question for the marketer: How do you build a brand that could sell an iChicken?

The most successful companies in the world do not rely on a brand promise; they cultivate a brand expectation. To build a trustworthy brand, marketers must use inference as a tool with which to create an expectation in the mind of the customer, and then deliver on it consistently. But people are not simple, and likewise, the process of earning their trust is not simple.

The Trust Trial

When engaging in a decision-making process, customers begin a subconscious cycle in the mind called “The Trust Trial.” This trial goes through five repeating phases: Customers must (1) observe your offer, in order to form a (2) conclusion about that offer within the context of their own needs, leading them to (3) decide what action they will take. This decision is then paired with an (4) expectation of your offer, which is ultimately calibrated by the (5) experience. Once a customer has experienced your offer, the trust trial restarts.

Let’s take a closer look …

#1. Observation

While it may seem obvious, observation is a complex and important phase for your customers. Customers are not simply looking at your offer; they are searching for your value proposition — a reason to invest interest. Every piece of data presented to your customers must lead them to infer the value of your offer. Marketing cannot make claims; it must foster conclusions. We often focus so much on achieving a conversion that we forget the many other things our customers are focused on. When a customer is in the observation phase of The Trust Trial, the marketer must present the right data, at the right time, in the right order, within the customer’s thought sequence, to guide them toward the desired conclusions.

#2. Conclusion

A customer’s conclusions are inferred from the data that has been made available to them. It is up to the marketer to encode the message so that the customer comes to two ultimate conclusions: The product can, and the company will. These two conclusions happen in a sort of micro trust trial throughout the cycle of trust trials. The marketer has value that needs to be perceived and a truth that needs to be believed. Trust is contingent upon the marketer’s ability to foster these necessary conclusions. But the marketer must remember that the conclusion is tentative; it only locks in when the customer experiences the offer. And it must be consistently reinforced.

#3. Decision

The decision phase is more than just the final decision to purchase. While the final decision may be the most important for your bottom line, there are many micro-decisions customers must make first. Customers must decide whether to read past your headline, to click on “learn more,” to fill out a form, etc. If the marketer fails to carefully guide the customer through each of these micro-decisions, the customer won’t even reach the final decision. Using the power of inference, the marketer must leverage the observation and conclusion phases to reinforce not only the company’s value proposition but also the particular product’s value proposition.

#4. Expectation

All of marketing serves to create an expectation in the mind of the customer. Many companies talk about their “brand promise,” but most customers don’t even know what a brand promise is. And if it doesn’t exist in the mind of the customer, then it really doesn’t exist at all. Companies should not be focused on creating a brand promise, but rather on developing a brand expectation through the consistent experience of the value proposition. Every interaction with your brand develops an expectation in the mind of the customer about what they are going to experience. Whether or not this expectation is met determines the strength of your brand.

#5. Experience

Ultimately, brand is the aggregate experience of the value proposition. Each experience, from the first to the last, compounds to either build or diminish trust. Sometimes, the longer you know someone, the more you trust them. And sometimes, it goes the other way … Consider, for instance, a president who wants to be re-elected. They spend months campaigning, making promises and creating an expectation of how they plan to run the country. Then, after being elected, they fail to keep those promises. Four years later, it’s time for the next election, but the country no longer trusts this president. The experience did not meet the expectation that had been created. Now, what do you think the chances of re-election would be? Every experience begins a new trust trial in the mind of the customer and determines the probability of a future engagement. While you can gain your customers’ confidence in the inference process, it is ultimately the experience that calibrates trust — and trust that drives the power of your brand.

An Experiment

We put this concept to the test in an experiment conducted by MECLABS Institute, the parent research organization of MarketingExperiments, with an organization that sells language-learning products for people who want to learn a language fast. The organization asked MECLABS to run a test with the goal of isolating the variable(s) leading to high order cancellations and fewer people choosing to keep their product.

When analyzing the original page, the team found a small disclaimer hidden beneath all the marketing copy. The 30-day free trial wasn’t actually so free; for each course kept during the trial, the company bills four monthly payments of $64. The team hypothesized that customers were missing this disclaimer until it showed up on their bills, leading them to doubt the company’s honesty. If the information were presented earlier in the funnel, customers might be less likely to convert — but the goal of a test is not to get a lift; it is to get a learning. So, the team designed a treatment in which the disclaimer was communicated more clearly and emphasized earlier on the landing page. As expected, the results showed a 78.6% decrease in conversions.

While this result didn’t make the company more money, it revealed how their offer may have been misleading their customers. Marketers often tend to prioritize a conversion over the customer relationship, but this mistake can have long-term impacts on a business. You may be able to fool someone with your marketing once, but if you fail to deliver on your promises, you will fail your customers’ trust trial and lose their loyalty — which is worth far more over time than one conversion.

What does this mean for marketers?

The Trust Trial is not a marketing technique; it is the essence of building genuine relationships. Too often, we marketers forget the humanness of our customers. We forget that trust is the currency of any relationship. Whether deciding on a college, a spouse or even a cell phone, people must be able to trust they have made the best choice.

If we hope to create sustainable competitive advantage and a lasting brand, we cannot treat customers merely as “leads” and “opportunities” whose sole purpose is to increase our revenue. We must understand and care for each milestone of the customer’s experience of our value proposition. In the end, regardless of the size of your company, your brand depends on your customer relationship, and in order to sustain it, the marketer must earn their customers’ trust. And then earn it again and again.

Related resources

The Prospect’s Perception Gap: How to bridge the gap between the results we want and the results we have

Customer-First Science: A conversation with Wharton about using marketing experimentation to discover why people say yes

Value Proposition: In which we examine a value prop fail and show you how to fix it

The post The Trust Trial: Could you sell an iChicken? appeared first on MarketingExperiments.

Marketing Multiple Products: How radical thinking about a multi-product offer led to a 70% increase in conversion

The following research was first published in the MECLABS Quarterly Research Digest, July 2014.

Many companies — large or small, B2C or B2B, ecommerce or subscription — have more than one product. If you fall into this category, you face a common challenge: finding the best way to market multiple products. You could take a couple of approaches:

  • Pack your pages with as many products as possible, hoping that the sheer numbers will pump up conversions
  • Slim down to the bare minimum, hoping to focus your prospects’ attention on just one product

Certainly, we can make educated guesses about the effectiveness of these approaches. But at the end of the day, that is not what MECLABS is about — nor you, we expect. We want hard numbers to guide our thinking, not intuition. This led us to an extremely interesting experiment and three key principles to observe when marketing multiple products.

Experiment: Which product presentation would increase revenue? 

This experiment, Test Protocol 1903 in the MECLABS Research Library, was conducted on behalf of our Research Partner, an independent manufacturer and distributor of vitamin supplements. Prospects only reached the page we tested after filling out a form and clicking a “Get My [Product] Now!” button. When reaching the page, the question prospects had to answer was not, “Do I want to buy this product?” It was closer to, “Which version of this product do I want to purchase?”

The control page features a standard list format with radio buttons. The “Best Value” option, which is also the most expensive, is pre-checked. The treatment page uses a horizontal matrix that generated lifts in other tests.

Figure 1.1

Figure 1.2

The extensive form beneath the matrix auto-populates, so prospects only need to enter the payment information that the control page also requires.

Figure 1.3

Figure 1.4

Figure 1.5

Did we see a similar lift in this case? No, the control page outperformed the treatment by 70% in terms of revenue, while conversion changes were insignificant. 

Now the ultimate question: Why? A little digging on our part revealed three key principles.

Keys to Marketing Multiple Products

Key Principle #1. People do not buy from product pages; people buy from people. The art of marketing is not conversion; it is conversation.

We’ve covered this idea in many MECLABS Web clinics. The focus of our efforts as marketers cannot be on creating better pages; it must be on creating clearer, more guided conversations.

You want to have a conversation with the customer that allows them to understand the value that is built into the page. The page should be rooted in that value but presented in a thought sequence prospects will understand. By doing so, we’re able to guide them to the action we wish them to take.   

Key Principle #2. The goal of a product page is not simply to give prospects more options or products, but to lead them to the “one” option that is most relevant, important and urgent to them.

More options on a single page do not equate to more conversions. We create more conversions by guiding prospects to the best option for them. We do that through the conversations we build on our pages.

Presenting a dozen variations of the same product can be confusing for prospects, especially if there is no product-level value proposition. They may have questions like: Do I want any version of this product? If so, what’s the difference between them all? Which one is right for me?

All this confusion could lead the prospect to leave the page and, ultimately, your website.

We can decrease this confusion and friction by guiding them to the product that best suits them. We’ll learn how to effectively do that through the objectives provided in the next key principle.

Key Principle #3. Therefore, the marketer must use three key objectives when selling multiple products: eliminate, emphasize and express.

Eliminate means to minimize the number of competing choices on your pages as much as possible. Emphasize involves using visual weight to sequence the presentation of products. Lastly, express entails ensuring the product-level value proposition is clearly communicated.

If you can achieve these three objectives, you will have created a conversation that guides a prospect’s thinking and leads them to the best product for them, rather than simply a webpage that presents them with options. In the balance of this article, we will look at how to do this.

Objective #1. Eliminate competing choices

Many times, we unintentionally create too many equally weighted options on a product page. What does this mean? Look at the page depicted in Figure 2.1. On this page, three options make up the sidebar. The marketers hypothesized that they could achieve a lift in conversion by removing the equally weighted choices and simply placing those options in a drop-down box for the user to choose from, as shown in Figure 2.2.

Did it work? Yes, to the tune of a 24% increase in revenue.

Figure 2.1

Figure 2.2

 Indeed, by eliminating unnecessary choices, you can increase conversion. However, there is a caveat. It is possible to eliminate too much. In Figure 2.3, you see a page with three size options for a product. We hypothesized that we could increase conversion by simplifying things and just focusing on the most popular size of the product (while eliminating the extra options). That is not how it worked out. Instead, the new page delivered a 35% decrease in conversion.

Figure 2.3

Figure 2.4

Testing new designs is the only way to find out whether you can produce a lift by eliminating competing options, but the research shows that, in most cases, you can.

To discover whether your pages are good candidates for elimination, ask yourself the following five questions:

  1. Are the products on my page the ones my customers want?
  2. Can I visually group my products so they appear as one?
  3. Can I eliminate one or more competing products?
  4. Can I segment my traffic in the channel so products are more personalized?
  5. Is there a gap in my product mix that indicates I have eliminated too much?

Objective #2. Emphasize desired choices

When multiple products or options are necessary, you want to be careful not to make them equally weighted. That can lead to “unsupervised thinking” on the part of the prospect. Rather, you want to guide them to the option that best suits them. In Figure 3.1, the webpage has five options that are equally weighted. There is no guidance. The treatment in Figure 3.2 trims the options down to three, but, more importantly, it also adds emphasis to the option on the right, steering prospects in that direction. The change resulted in a 66% increase in conversions.

Figure 3.1

Figure 3.2

If you are not handling emphasis well on your pages, consider the five elements below to get back on track. Of course, test the results.

  • Size: How large is the product on the page?
  • Shape: Does the shape of the product distinguish it from others?
  • Motion: Is there a tasteful way to emphasize the product with motion?
  • Color: Does the color of the product distinguish it from others?
  • Position: Is the product being emphasized in the main eye-path?

Objective #3. Express product values

Three levels form a value proposition: process level, product level and prospect level (Figure 4.1). When working with a page consisting of multiple products, it is absolutely critical that the product-level value proposition is crystal clear. That’s the one that explains why a specific product is the best choice in a specific situation. If the product-level value is unclear, prospects will not understand the difference between products or which product best meets their current needs.


Figure 4.1 – The Proposition Spectrum

In Figures 4.2 and 4.3, you see the control and treatment pages from a couple of recent experiments. The control pages fail to clearly demonstrate the product-level value propositions of the products. The treatments take a more copy-heavy approach, allowing us to really flesh out the specific value proposition for each product. The result was a 61% increase in purchases for the first page and a 93% conversion lift for the second.

 

Figure 4.2

Figure 4.3

Experiment: How did our treatment meet the three objectives?

Let’s come full circle, back to the experiment that started it all. We wanted to understand why the control page outperformed the treatment that was based on other successful experimentation. Let’s look at how it meets the three objectives outlined previously.

Figure 5.1 and Figure 5.2

Eliminate

We did not eliminate any products from the control to the treatment, so that did not factor into this specific situation.

Emphasize

The control page visually emphasizes the most expensive “best value” version of the product by automatically checking the radio button of that choice. Our treatment, however, visually emphasizes the second “most popular” choice, as noted by the red boxes. This change guided more prospects toward choosing the less expensive option, which explains why conversion was roughly equivalent while revenue significantly decreased.

Express

Finally, in the treatment, we added a small “cost per serving” feature to the products. This showed that the “most popular” option produced a $0.66 savings per serving over the cheapest option. However, the “best value” option only produced a $0.12 savings over the middle option.

Figure 5.3

Increasing Conversion on Multiple Product Pages

If you find yourself in the same boat as most marketers (i.e., having to market multiple products), remember these key principles:

First, people do not buy from product pages; people buy from people. The art of marketing is not conversion; it is conversation.

Second, we must understand that our goal is to guide prospects through the multiple products to the “one” option that is most relevant, important and urgent to them.

Third, the marketer must use three key objectives when selling multiple products: eliminate, emphasize and express.

Related Resources

See more experiments in the MECLABS Research Catalog: www.meclabs.com/catalog

See how another Research Partner tested radio buttons and dropdowns against one another

Explore the MarketingExperiments Research Directory to see past clinics

Learn more about product-level value propositions

Download a special report by MarketingExperiments on unsupervised thinking, “No Unsupervised Thinking: How to increase conversion by guiding your audience”

 

For permissions: research@meclabs.com

The post Marketing Multiple Products: How radical thinking about a multi-product offer led to a 70% increase in conversion appeared first on MarketingExperiments.

Optimizing Web Forms: How one company generated 226% more leads from a complex web form (without significantly reducing fields)

The following research was first published in the MECLABS Quarterly Research Digest, July 2014.

It has long been an axiom of mine that the little things are infinitely the most important.

— Sir Arthur Conan Doyle, A Case of Identity

It is highly unlikely that Sir Arthur was contemplating website forms when he made the statement above. However, the sentiment certainly translates to our digital world and is spot on in the case of web forms. Web forms might not elevate your heart rate, but these mundane little segments of your website actually contain some of your best opportunities to increase conversion.

In marketing, “friction” is anything that slows down the mental momentum that is driving the customer toward a buying decision. Web forms are essentially the definition of friction. By nature, people are wary of sharing their personal information. Asking them to enter that information requires two commitments from them:

  1. To decide if your offer is valuable enough to give out their contact information
  2. To then take the time and effort to fill out the form

So to some extent, web forms cause mental and physical friction.

This is widely accepted. Most optimization strategies focus on reducing friction to increase conversion. This is worthy, but could there be a different approach? A strategy that is ultimately more effective? We ran a test to find out.

Experiment: Can form friction be negated?

The experiment, Test Protocol 1636, was conducted in partnership with a large news syndication company. The goal was to increase the number of leads the company received from a “Request More Information” web form. The form itself was not a core cog in the company’s lead generation process, but it received enough traffic to run a valid experiment.

The control form, as seen in Figure 1.1, contains 11 form fields, with 10 being required. It also features sidebar navigation and three equally weighted calls-to-action at the bottom of the page. We ran two treatments against this page.

Figure 1.1

 

Treatment A (Figure 1.2) eliminates the navigation and calls-to-action, but it also increases the total number of form fields to 15, nine of which are required. In Figure 1.3, Treatment B is similarly designed, but it reduces the total number of form fields from 11 to 10, all of which are required.

Figure 1.2

 

Figure 1.3

Did either treatment improve the lead rate? If so, which treatment?

Both treatments outperformed the control. Treatment A produced a 109% lift, while Treatment B generated 87% more leads. Clearly, removing the side navigation and the distracting calls-to-action was a major contributor to the treatments’ successes, but when we further examined the test metrics, we discovered two puzzling and fascinating anomalies.

Web Form Anomalies that Impact Conversion

Anomaly #1. While Treatment A contained six optional form fields, every prospect who completed the form filled in every field — without exception.

Marketing intuition, and prior testing to some extent, trains us to assume that every additional form field decreases the probability of a prospect completing the form. Conversion rate decreases for every extra field you add. We know this, but now the data from this experiment stares us in the face and causes us to ask a simple question: Why did everyone who completed this form fill out every field, even the optional ones?

Anomaly #2. Even though both treatments outperformed the control, there was no statistically significant difference when we compared Treatment A to Treatment B.

The results conclusively showed the treatments both performed significantly better than the control.

However, when compared to each other, the treatments performed in a statistical dead heat — we could not declare a winner. This raised another simple question: Why did the higher number of fields (more friction) not affect the conversion rate?
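For readers who want to run this kind of comparison on their own data, here is a minimal sketch of the standard approach: a two-proportion z-test between two treatments. The lead and visitor counts below are hypothetical, since the raw numbers from this experiment are not published here.

```python
# Comparing two treatments with a two-proportion z-test.
# The counts below are hypothetical, for illustration only.
from statsmodels.stats.proportion import proportions_ztest

leads    = [209, 187]     # hypothetical leads from Treatments A and B
visitors = [2000, 2000]   # hypothetical visitors per treatment

z_stat, p_value = proportions_ztest(count=leads, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

# With these hypothetical numbers, p is well above the conventional 0.05
# threshold, so no winner can be declared: a statistical dead heat.
```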

Getting Higher-quality Leads without Sacrificing Conversion Rate

When we first began to analyze the control, we considered the objective of the form, which was to set up a phone call between the prospect and the business. Our analysts hypothesized that by helping the customer through the form, we could set an expectation for a productive phone conversation. This led to the design choices you see in Treatment A (Figure 2.1). The tone of the copy is conversational, and at each step of the process, we ask a question that both personalizes the form and explains why the information is being sought from the prospect.

Figure 2.1

The result of this tone change is that the questions actually helped to reinforce the value proposition of the phone call the prospect was being asked to set up.

With whom will we be speaking?
(We collect your general information so we know with whom we will be speaking and how best to reach you.)

 Where are you located? 
(We collect information about your location in order to route you to the appropriate [company] representative.)

 What information are you interested in discussing?
(In order to make our conversation as productive as possible, we would like to know a couple of pieces of relevant information.)

The last question in particular was a key factor in building value. By explaining what they wished to discuss, prospects began to visualize the conversation and see themselves receiving the information they needed. This section consisted of the new optional fields added to the control’s form fields.

Fascinatingly, by adding form fields in Treatment A, we increased the cost, in the minds of prospects, of filling out the form, while in Treatment B we decreased the cost by eliminating the optional field, as represented in Figure 2.2. Yet both pages generated the same lead rate because the additional value communicated in Treatment A offset its additional cost, as seen in Figure 2.3.

Figure 2.2 – Cost force

 

Figure 2.3 – Value force vs. Cost force

By guiding the visitor through the form and increasing the process-level value proposition, we were able to counterbalance the additional friction in the form — capturing higher-quality leads without sacrificing quantity.

How to Increase Conversion on Your Own Forms

In a bubble, this is an interesting case to study. But what does it mean for your own web forms? We uncovered three key principles to help you in your efforts.

Key Principle #1. Every action a customer is asked to take — even completing a form field — creates a psychological question in the mind of the customer: Is this really worth it?

It’s a weighing question – that’s what the fulcrum represents in the Value Proposition Heuristic (Figure 3.1). Is the cost greater than the value? Is the value outweighing the cost? Those are the questions the customer is asking.

If you can start seeing and breaking down your pages by cost and value, then you have a lot of control as a marketer.

Key Principle #2. Thus, optimizing web forms transcends simply reducing the number of fields. We must ensure that we build the right amount of value to offset the cost. Sometimes, the right “ask” at the right time can actually imply value.

 The way we present and communicate the “ask” can greatly impact how prospects interpret the amount of value. By communicating the benefits prospects will experience by answering the questions, we can offset the cost of giving up that information.

This resulted in two positives for this experiment. We could ask more questions without hurting conversion, and this, in turn, led to higher-quality leads.

Key Principle #3. Finally, we must see through our customers’ eyes. Our prospects personify our forms. They give them a tone, a voice, a personality. It is more than a transaction; it is a conversation — a conversation that you must guide.

How do we see through our customers’ eyes? We can’t think about our products and services in a company-centric manner. We should go through our checkout processes and fill out our forms as if we are the customers. Experience the experience of the customer. Only then can we engage them in meaningful conversation that will guide them in a way that makes sense.

When you begin to see your web forms through the lens of a conversation, rather than a transaction, you will immediately be better equipped to communicate value to the prospect. When you communicate value, you might actually increase cost and friction in the mind of the customer by asking for more information, and still keep your conversion rate the same.

If you are ready to tackle your forms and gain more leads from the traffic you already have, we have provided a seven-question checklist to get you started:

  • Does my form gather the information my company needs?
  • Can I reduce the number of required fields?
  • Should I increase the number of required fields for a higher-quality lead?
  • Can I group similar form fields and reduce the perceived length of my form?
  • Is there a justification (direct or implied) for why each field is presented?
  • How can I increase the perceived value of every field in my form?
  • Does the form logically guide the visitor through the process of filling it out?

Related Resources

Learn more about friction in the MarketingExperiments web clinic replay, Hidden Friction: The 6 silent killers of conversion

Review the methodology MECLABS uses when running tests for Research Partners

Learn about online testing and how to run a valid experiment in the MECLABS Online Testing Course

See how testing form field length reduced cost-per-lead by $10.66

Learn how optional form fields can affect form completions in the MarketingExperiments web clinic replay, Do Optional Form Fields Help (or Hurt) Conversion? How one required form field was hindering a 275% lift in conversion

Read this MarketingExperiments Blog post to learn more about process-level value propositions, as well as the three other essential levels of value propositions

Discover seven ways to reduce the perceived cost of lead generation offers

The post Optimizing Web Forms: How one company generated 226% more leads from a complex web form (without significantly reducing fields) appeared first on MarketingExperiments.

A/B Testing Prioritization: The surprising ROI impact of test order

I want everything. And I want it now.

I’m sure you do, too.

But let me tell you about my marketing department. Resources aren’t infinite. I can’t do everything right away. I need to focus myself and my team on the right things.

Unless you found a genie in a bottle and wished for an infinite marketing budget (right after you wished for unlimited wishes, natch), I’m guessing you’re in the same boat.

When it comes to your conversion rate optimization program, it means running the most impactful tests. As Stephen Walsh said when he wrote about 19 possible A/B tests for your website on Neil Patel’s blog, “testing every random aspect of your website can often be counter-productive.”

Of course, you probably already know that. What may surprise you is this …

It’s not enough to run the right tests; you will get a higher ROI if you run them in the right order

To help you discover the optimal testing sequence for your marketing department, we’ve created the free MECLABS Institute Test Planning Scenario Tool (MECLABS is the parent research organization of MarketingExperiments).

Let’s look at a few example scenarios.

Scenario #1: Level of effort and level of impact

Tests will have different levels of effort to run. For example, it’s easier to make a simple copy change to a headline than to change a shopping cart.

This level of effort (LOE) sometimes correlates with the level of impact the test will have on your bottom line. For example, a radical redesign might require a higher LOE to launch, but it will also likely produce a higher lift than a simple, small change.

So how does the order in which you run a high-effort, high-return test and a low-effort, low-return test affect results? Again, we’re not saying choose one test over another. We’re simply talking about timing. To the test planning scenario tool …

Test 1 (Low LOE, low level of impact)

  • Business impact — 15% more revenue than the control
  • Build Time — 2 weeks

Test 2 (High LOE, high level of impact)

  • Business impact — 47% more revenue than the control
  • Build Time — 6 weeks

Let’s look at the revenue impact over a six-month period. According to the test planning tool, if the control is generating $30,000 in revenue per month, running a test where the treatment has a low LOE and a low level of impact (Test 1) first will generate $22,800 more revenue than running a test where the treatment has a high LOE and a high level of impact (Test 2) first.
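To see how this kind of sequencing math plays out, here is a minimal Python sketch. To be clear, this is not the MECLABS Test Planning Scenario Tool: it assumes a simplified model of our own (builds start in parallel at week 0, only one test runs at a time, each test runs for an assumed four weeks on a 50/50 traffic split, winners are implemented immediately, and lifts compound), so its dollar figures will differ from the tool's, but the order effect points the same way.

```python
# A toy model of test-order revenue impact. All assumptions here are
# ours, for illustration; they are not the Test Planning Scenario Tool's.

WEEKS = 26                      # roughly a six-month window
WEEKLY_BASE = 30000 * 12 / 52   # $30,000/month control revenue
RUN_WEEKS = 4                   # assumed run length per test

def total_revenue(tests):
    """tests: list of (lift, build_weeks) in the order they are run."""
    implemented = 1.0            # cumulative multiplier from rolled-out winners
    revenue, week = 0.0, 0
    for lift, build in tests:
        start = max(week, build)                    # wait for the build to finish
        revenue += WEEKLY_BASE * implemented * (start - week)
        during_test = implemented * (1 + lift / 2)  # half of traffic sees the treatment
        revenue += WEEKLY_BASE * during_test * RUN_WEEKS
        implemented *= 1 + lift                     # roll out the winner
        week = start + RUN_WEEKS
    revenue += WEEKLY_BASE * implemented * (WEEKS - week)
    return revenue

test1 = (0.15, 2)   # low LOE, low impact
test2 = (0.47, 6)   # high LOE, high impact

gap = total_revenue([test1, test2]) - total_revenue([test2, test1])
print(f"Running the low-LOE test first earns ${gap:,.0f} more over six months")
```

Under these toy assumptions, the gap comes out near $10,000 rather than $22,800, because the run-length and traffic-split assumptions differ from the tool's. In both models, though, the low-LOE-first sequence wins, and plugging in the lifts from Scenario #2 below roughly doubles the gap, in the same direction as the tool's output.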

Scenario #2: An even larger discrepancy in the level of impact

It can be hard to predict the exact level of business impact. So what if the business impact differential between the two tests is even greater than in Scenario #1, and both treatments perform even better than they did in Scenario #1? How would test sequence affect results in that case?

Let’s run the numbers in the Test Planning Scenario Tool.

Test 1 (Low LOE, low level of impact)

  • Business impact — 25% more revenue than the control
  • Build Time — 2 weeks

Test 2 (High LOE, high level of impact)

  • Business impact — 125% more revenue than the control
  • Build Time — 6 weeks

According to the test planning tool, if the control is generating $30,000 in revenue per month, running Test 1 (low LOE, low level of impact) first will generate $45,000 more revenue than running Test 2 (high LOE, high level of impact) first.

Again, same tests (over a six-month period), just a different order. And you gain $45,000 more in revenue.

“It is particularly interesting to see the benefits of running the lower LOE and lower impact test first so that its benefits could be reaped throughout the duration of the longer development schedule on the higher LOE test. The financial impact difference — landing in the tens of thousands of dollars — may be particularly shocking to some readers,” said Rebecca Strally, Director, Optimization and Design, MECLABS Institute.

Scenario #3: Fewer development resources

In the above two examples, the tests could be developed simultaneously. What if the tests cannot be developed simultaneously (i.e., they must be developed sequentially), and the next test can’t be developed until the previous test has been implemented? Perhaps this is because of your organization’s development methodology (Agile vs. Waterfall, etc.), or there is simply a limit on your development resources. (They likely have many other projects besides developing your tests.)

Let’s look at that scenario, this time with three treatments.

Test 1 (Low LOE, low level of impact)

  • Business impact — 10% more revenue than the control
  • Build Time — 2 weeks

Test 2 (High LOE, high level of impact)

  • Business impact — 360% more revenue than the control
  • Build Time — 6 weeks

Test 3 (Medium LOE, medium level of impact)

  • Business impact — 70% more revenue than the control
  • Build Time — 3 weeks

In this scenario, the two highest-performing sequences were Test 2, then Test 1, then Test 3, and Test 2, then Test 3, then Test 1. The lowest-performing sequence was Test 3, then Test 1, then Test 2. The difference was $894,000 more revenue from using one of the highest-performing test sequences versus the lowest-performing test sequence.

“If development for tests could not take place simultaneously, there would be a bigger discrepancy in overall revenue from different test sequences,” Strally said.

“Running a higher LOE test first suddenly has a much larger financial payoff. This is notable because once the largest impact has been achieved, it doesn’t matter in what order the smaller LOE and impact tests are run, the final dollar amounts are the same. Development limitations (although I’ve rarely seen them this extreme in the real world) created a situation where whichever test went first had a much longer opportunity to impact the final financial numbers. The added front time certainly helped to push running the highest LOE and impact test first to the front of the financial pack,” she added.
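The sequential-development constraint of this scenario is easy to express in the same toy sketch used for Scenario #1: each build starts only after the previous winner is implemented. Again, this is an illustrative model of our own rather than the tool itself, so the dollar amounts will not match the $894,000 above, but it shows the same pattern.

```python
# Sequential development in the same toy model as before: each test's
# build starts only after the previous test's winner is implemented.
# Constants repeat the earlier (assumed) values so this runs standalone.
from itertools import permutations

WEEKS = 26
WEEKLY_BASE = 30000 * 12 / 52
RUN_WEEKS = 4

def total_revenue_sequential(tests):
    """tests: list of (lift, build_weeks) in the order they are run."""
    implemented, revenue, week = 1.0, 0.0, 0
    for lift, build in tests:
        revenue += WEEKLY_BASE * implemented * build     # waiting on the build
        during_test = implemented * (1 + lift / 2)       # 50/50 split while testing
        revenue += WEEKLY_BASE * during_test * RUN_WEEKS
        implemented *= 1 + lift                          # roll out the winner
        week += build + RUN_WEEKS
    revenue += WEEKLY_BASE * implemented * (WEEKS - week)
    return revenue

scenario3 = {"Test 1": (0.10, 2), "Test 2": (3.60, 6), "Test 3": (0.70, 3)}

for order in permutations(scenario3):
    rev = total_revenue_sequential([scenario3[name] for name in order])
    print(" -> ".join(order), f"${rev:,.0f}")
```

In this toy model, as in the tool's output, the sequences that start with Test 2 come out on top, though the exact ranking and dollar spreads depend on the assumed run lengths.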

The Next Scenario Is Up To You: Now forecast your own most profitable test sequences

You likely don’t have the exact perfect information we provided in the scenarios. We’ve provided model scenarios above, but the real world can be trickier. After all, as Nobel Prize-winning physicist Niels Bohr said, “Prediction is very difficult, especially if it’s about the future.”

“We rarely have this level of information about the possible financial impact of a test prior to development and launch when working to optimize conversion for MECLABS Research Partners. At best, the team often only has a general guess as to the level of impact expected, and it’s rarely translated into a dollar amount,” Strally said.

That’s why we’re providing the Test Planning Scenario Tool as a free, instant download. It’s easy to run a few different scenarios in the tool based on different levels of projected results and see how the test order can affect overall revenue. You can then use the visual charts and numbers created by the tool to make the case to your team, clients and business leaders about what order you should run your company’s tests.

Don’t put your tests on autopilot

Of course, things don’t always go according to plan. This tool is just a start. To have a successful conversion optimization practice, you have to actively monitor your tests and advocate for the results because there are a number of additional items that could impact an optimal testing sequence.

“There’s also the reality of testing which is not represented in these very clean charts. For example, things like validity threats popping up midtest and causing a longer run time, treatments not being possible to implement, and Research Partners requesting changes to winning treatments after the results are in, all take place regularly and would greatly shift the timing and financial implications of any testing sequence,” Strally said.

“In reality though, the number one risk to a preplanned DOE (design of experiments) in my experience is an unplanned result. I don’t mean the control winning when we thought the treatment would outperform. I mean a test coming back a winner in the main KPI (key performance indicator) with an unexpected customer insight result, or an insignificant result coming back with odd customer behavior data. This type of result often creates a longer analysis period and the need to go back to the drawing board to develop a test that will answer a question we didn’t even know we needed to ask. We are often highly invested in getting these answers because of their long-term positive impact potential and will pause all other work — lowering financial impact — to get these questions answered to our satisfaction,” she said.

Related Resources

MECLABS Institute Online Testing on-demand certification course

Offline and Online Optimization: Cabela’s shares tactics from 51 years of offline testing, 7 years of digital testing

Landing Page Testing: Designing And Prioritizing Experiments

Email Optimization: How To Prioritize Your A/B Testing

The post A/B Testing Prioritization: The surprising ROI impact of test order appeared first on MarketingExperiments.

Product Pages Tested: How carefully pinpointing customer anxiety led to a 78% increase in conversion

The following research was first published in the MECLABS Quarterly Research Digest, July 2014.

Product pages are a staple of nearly every business website in existence. Oftentimes, they represent the final hurdle before a prospect clicks “add to cart” or fills out your form. Therefore, if we can improve the performance of these key pages, we can see substantial increases in conversion and sales.

 

Figure 1.1

Look at the three pages in Figure 1.1. What do they have in common?

Granted, there could be multiple correct answers to this question. However, one similarity may have escaped your notice: anxiety. On every page, especially product pages, certain elements raise the anxiety level of the prospect. This should concern you for two very good reasons:

  1. In our experience, when we correct for anxiety, we see gains.
  2. The needed correction often involves only simple and small changes.


Figure 1.2

 

In the MarketingSherpa E-commerce Benchmark Study, we found ecommerce marketers are employing a variety of page elements that can be used to reduce anxiety (Figure 1.3).

Figure 1.3 – Page elements used by successful and unsuccessful ecommerce companies on product pages.
Anxiety-reducing elements highlighted.

We tested four of those minor elements to correct specific points of anxiety on the same page and to help us understand the interplay of anxiety and the corrections we make.

Experiment: Which anxiety correction had the biggest impact?

The experiment sought to improve the sales of e-books from a well-known retailer in the market. Our approach was to test four different variations of the product page against the control, with each treatment correcting for a different form of hypothesized anxiety:

  • Version A: Adjusting for anxiety regarding site security (Figure 2.1)
  • Version B: Adjusting for anxiety that the e-book would not be compatible with their reading device (Figure 2.2)
  • Version C: Adjusting for anxiety that the e-book would not be of interest or value to them (Figure 2.3)
  • Version D: Adjusting for anxiety regarding the shipping time frame of the e-book (Figure 2.4)

What does your instinct tell you? Which, if any, of the corrections would most improve conversion?

The result: Version C was the winner, increasing conversion by 78%

After our complete analysis, we discovered three key principles as to why Version C was victorious, as well as what we can learn from the success of the other treatments.

How to correct for anxiety on product pages

Key Principle #1. Every element we tested on the page overcorrected some type of customer anxiety, with various elements performing more effectively than others.

It is crucial to note that while Version C produced the largest increase, each treatment page outperformed the control.  In other words, in every case where we took steps to alleviate customer anxiety, conversion went up. These results underscore the importance of this effort, as well as the relative ease with which gains can be achieved.

It is important to note the use of the term “overcorrect” here because anxiety is not always rational. You may know that flying in a plane is statistically safer than riding in your car, but, for many of us, our anxiety level is much higher in an airplane. Is it rational? No. Is it still very real? Yes. You may see no reason for concern about a given aspect of your page, but that does not mean anxiety is absent for customers.

Key Principle #2. The effectiveness of each corrective is directly related to how it matched the specific concern in the mind of the customer.

While all cases of anxiety correction produced lifts, one change impacted conversion significantly more than the others. Version C overcorrected for a concern that was most immediate to the prospect at the time. Therefore, it is crucial to discover the specific anxieties your customers are experiencing on your product pages. Among a plethora of options, we have found some standard minor corrections you can make for specific anxieties:

  • Product Quality Anxiety → Satisfaction Guarantee
  • Product Reliability Anxiety → Customer Testimonials
  • Website or Form Security Anxiety → Third-party Security Seals or Certificates
  • Price Anxiety → Low-price Guarantee

Additionally, customer testimonials can be used to alleviate several different concerns. You want to choose testimonials that specifically deal with the point of anxiety the customer is experiencing (Figure 3.1).

Figure 3.1 – Examples of testimonials addressing specific points of customer anxiety
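As a simple illustration of how such a mapping might be wired into page assembly, here is a hypothetical sketch. The anxiety categories mirror the list above, while the function, keys and testimonial strings are invented for illustration.

```python
# A hypothetical sketch: pick a corrective page element for a hypothesized
# anxiety source. Categories mirror the mapping above; the function, keys
# and testimonial copy are invented for illustration.
CORRECTIVES = {
    "product_quality": "satisfaction guarantee",
    "product_reliability": "customer testimonial",
    "security": "third-party security seal",
    "price": "low-price guarantee",
}

# Testimonials chosen to speak to a specific concern (Key Principle #2).
TESTIMONIALS = {
    "product_reliability": "It arrived on time and worked exactly as promised.",
    "price": "I compared five vendors, and this was the best value by far.",
}

def corrective_for(anxiety: str) -> str:
    """Return the corrective element for a hypothesized anxiety source."""
    element = CORRECTIVES[anxiety]
    if element == "customer testimonial":
        # Match the testimonial to the specific point of anxiety.
        return f"{element}: {TESTIMONIALS[anxiety]}"
    return element

print(corrective_for("product_reliability"))
```

The point is not the code but the discipline it encodes: every hypothesized anxiety gets a named corrective, and testimonials are selected for the specific concern rather than reused generically.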

 

Key Principle #3. Location plays an important role. You can more effectively correct anxiety by moving a corrective element into close proximity to where the concern is experienced.

As in real estate, location is of utmost importance when correcting for anxiety on product pages. If you are correcting for form security concerns, you want the correction element right where the customer must click to submit the form. In Version C, we simply added a plot synopsis above the fold rather than farther down the page, and it led to the biggest jump in conversion. It’s not always about creating new elements, but instead, placing existing ones in a location that better serves the thought sequence of customers.

Overcorrecting for product page anxiety

Anxiety is lethal to product page conversion. It is always present, and it is not always rational.  By overcorrecting for predictable or discovered customer anxiety, you will empower more prospects to complete the sale.

The effectiveness of an anxiety corrective is dependent on two essential factors:

  • Specificity – How specific is the corrective to the source of anxiety?
  • Proximity – How close is the corrective to the moment of concern?

If you can identify the main cause of anxiety on the page and implement an overcorrection element in close proximity, you are on your way to higher conversion and more sales.

Related Resources

Landing Page Optimization: Addressing Customer Anxiety

MECLABS Research Catalog — Learn about other experiments performed in the MECLABS Laboratory

MECLABS methodology

Conversion Rate Optimization — Read how anxiety plays a role in building the “Ultimate Yes” to conversion

Online Testing: 6 Test Ideas To Optimize The Value Of Testimonials On Your Site

To learn more about anxiety and the factors that affect it, enroll in the MECLABS Landing Page Optimization Online Certification Course

The post Product Pages Tested: How carefully pinpointing customer anxiety led to a 78% increase in conversion appeared first on MarketingExperiments.