Tag Archives: Analysis

Conversion Optimization: Eight considerations to take into account when A/B testing in mobile

I’m writing this article on a laptop computer at my desk. And in your marketing department or agency, you likely do most of your work on a computer as well.

This can cause a serious disconnect with your customers as you design A/B tests.

Because more than half (52.4% according to Statista) of global internet traffic comes from a mobile device.

So, I interviewed Rebecca Strally, Director of Optimization and Design, and Todd Barrow, Director of Application Development, for tips on what considerations you should make for mobile devices when you’re planning and rolling out your tests. Rebecca and Todd are my colleagues here at MECLABS Institute (parent research organization of MarketingExperiments).

Consideration #1: Amount of mobile traffic and conversions

Just because half of global traffic is from mobile devices doesn’t mean half of your site’s traffic is from mobile devices. It could be considerably less. Or more.

Not to mention, traffic is far from the only consideration. “You might get only 30% of traffic from mobile but 60% of conversions, for example. Don’t just look at traffic. Understand the true impact of mobile on your KPIs,” Rebecca said.

Consideration #2: Mobile first when designing responsive

Even if mobile is a minority of your traffic and/or conversions, Rebecca recommends you think mobile first. For two reasons.

First, many companies measure KPIs (key performance indicators) in the aggregate, so underperformance on mobile could torpedo your whole test if you’re not careful. Not because the hypothesis didn’t work, but because you didn’t translate it well for mobile.

Second, it’s easier to go from simpler to more complex with your treatments. And mobile’s smaller form factor necessitates simplicity.

“Desktop is wide and shallow. Mobile is tall and thin. For some treatments, that can really affect how value is communicated,” she said.

Rebecca gave an example of a test that was planned desktop-first for a travel website. There were three boxes with value claims and a wizard below them. On desktop, visitors could quickly see and use the wizard. The boxes offered supporting value.

But on mobile, the responsive design stacked the boxes, pushing the wizard far down the page. “We had to go back to the drawing board. We didn’t have to change the hypothesis, but we had to change how it was executed on mobile,” Rebecca said.
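To make the responsive-stacking point concrete, here is a minimal sketch (not the actual test code) of how a treatment might keep a conversion wizard above its supporting value boxes on small viewports. The element IDs and the 768px breakpoint are hypothetical assumptions for illustration.

```typescript
// Minimal sketch: keep the conversion wizard above the supporting
// value boxes on narrow (mobile) viewports. The element IDs and the
// 768px breakpoint are illustrative assumptions, not the real test.
const mobileQuery = window.matchMedia("(max-width: 768px)");

function reorderForViewport(isMobile: boolean): void {
  const wizard = document.getElementById("booking-wizard");
  const valueBoxes = document.getElementById("value-boxes");
  if (!wizard || !valueBoxes || !valueBoxes.parentElement) return;

  if (isMobile) {
    // Mobile: wizard first, value boxes stacked below it.
    valueBoxes.parentElement.insertBefore(wizard, valueBoxes);
  } else {
    // Desktop: value boxes first, wizard below them.
    valueBoxes.parentElement.insertBefore(valueBoxes, wizard);
  }
}

reorderForViewport(mobileQuery.matches);
mobileQuery.addEventListener("change", (e) => reorderForViewport(e.matches));
```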

Consideration #3: Unique impacts of mobile on what you’re testing

A smartphone isn’t just a smaller computer. It’s an entirely different device that offers different functionality. So, it’s important to consider how that functionality might affect conversions and to keep mobile-specific functionality in mind when designing tests that will be experienced by customers on both platforms — desktop and mobile.

Some examples include:

  • With the prevalence of digital wallets like Apple Pay and Google Pay, forms and credit card info are more likely to prefill. This can reduce friction in a mobile experience and make the checkout process quicker. So while some experiences might require more value on desktop to keep the customer’s momentum moving through the checkout process, including that value on mobile could actually slow down an otherwise low-friction experience.
  • To speed load time and save data, customers are more likely to use ad blockers, which can block popups and hosted forms. If those popups and forms contain critical information, visitors may assume your site is having a problem, not realizing they are blocking this information. This may require text that clearly explains the form, or an alternative way to get the information, a step that may not be necessary on desktop.
  • Customers are touching and swiping, not typing and clicking. So information and navigation requests need to be kept simpler and lighter than on desktop.
  • Visitors can click to call. You may want to test making a phone call a more prominent call to action in mobile, while on desktop that same CTA may induce too much friction and anxiety.
  • Location services are more commonly used on mobile. This gives you the opportunity to better tap into customer motivation by customizing offers and information in real time and by more prominently featuring brick-and-mortar calls to action. Desktop, by contrast, sits in a static location, and the user may want more information before acting (since acting may require leaving their current location). See the sketch after this list.
  • Users are accustomed to app-based experiences, so the functionality of the landing page may be more important on mobile than it is on desktop.
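As one illustration of the location-services point above, here is a minimal sketch that uses the browser’s standard Geolocation API to surface a brick-and-mortar call to action. The element ID and the CTA-promotion idea are hypothetical assumptions, not a pattern from the interviews.

```typescript
// Minimal sketch: if the visitor grants location access, surface a
// brick-and-mortar CTA. "nearest-store-cta" is an assumed element ID;
// navigator.geolocation is the standard browser Geolocation API.
function promoteNearestStoreCTA(): void {
  if (!("geolocation" in navigator)) return; // no location support

  navigator.geolocation.getCurrentPosition(
    (position) => {
      const cta = document.getElementById("nearest-store-cta");
      if (!cta) return;
      cta.hidden = false; // make the store CTA visible/prominent
      // Stash coordinates for a subsequent nearest-store lookup.
      cta.dataset.lat = String(position.coords.latitude);
      cta.dataset.lng = String(position.coords.longitude);
    },
    () => {
      /* Permission denied: leave the default experience in place. */
    }
  );
}

promoteNearestStoreCTA();
```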

Consideration #4: The device may not be the only thing that’s different

“Is mobile a segment or device?” Rebecca pondered in my office.

She expanded on that thought, “Do we treat mobile like it is the same audience with the same motivations, expected actions, etc., but just on a different physical device? Or should we be treating those on mobile like a completely different segment/audience of traffic because their motivations, expected actions, etc., are different?”

She gave an example of working with a company her team was performing research services for. On this company’s website, younger people were visiting on mobile while older people were visiting on desktop. “It wasn’t just about a phone, it was a different collection of human beings,” she said.

Consideration #5: QA to avoid validity threats

When you’re engaged in conversion optimization testing, don’t overlook the need for quality assurance (QA) testing. If a treatment doesn’t render correctly on a mobile device, it could be that the technical difficulty is causing the change in results, not the changes you made to the treatment. If you are unaware of this, it will mislead you about the effectiveness of your changes.

This is a validity threat known as instrumentation effect.

Here are some of the devices our developers use for QAing.

(Side note: That isn’t a stock photo. It’s an actual picture by Senior Designer James White. When I said it looked too much like a stock image, Associate Director of Design Lauren Leonard suggested I let the readers know “we let the designers get involved, and they got super excited about it.”)

 

“Know your audience. If your audience are heavy users of Safari on iPhone, then check on the actual device. Don’t rely on an emulator. It’s rare, but depending on what you’re doing, there are things that won’t show up as a problem in an emulator. Understand what your traffic uses and QA your mobile landing pages on the actual physical devices for the top 80%,” Todd advised.

Consideration #6: The customer’s mindset

Customers may go to the same exact landing page with a very different intent when they’re coming from mobile. For example, Rebecca recounted an experiment with an auto repair chain. For store location pages, desktop visitors tended to look for coupons or more info on services. But mobile visitors just wanted to make a quick call.

“Where is the customer in the thought sequence? Mobile can do better with instant gratification campaigns related to brick-and-mortar products and services,” she said.

Consideration #7: Screen sizes and devices are not the same things

Most analytics platforms give you an opportunity to monitor your metrics based on device types, like desktop, mobile and tablet. They likely also give you the opportunity to get metrics on screen resolutions (like 1366×768 or 1920×1080).

Just keep in mind, people aren’t always viewing your websites at the size of their screen. You only know the size of the monitor, not the size of the browser window.

“The user could be recorded as a full-size desktop resolution, but only be viewing in a shrunken window, which may be shrunk down enough to see the tablet experience or even the phone experience,” Todd said. “Bottom line is you can’t assume the screen resolutions reported in the analytics platform are actually what they were viewing the page at.”
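To make this measurable, here is a minimal sketch that records both the monitor resolution and the actual browser viewport, assuming a Google Tag Manager-style dataLayer. The event and field names are placeholders; screen.width and window.innerWidth are standard browser properties.

```typescript
// Minimal sketch: record the monitor size AND the actual viewport,
// since screen-resolution reports only reflect the former. The event
// and field names are illustrative placeholders.
type DataLayerEvent = Record<string, unknown>;

function reportViewportVsScreen(): void {
  const w = window as typeof window & { dataLayer?: DataLayerEvent[] };
  w.dataLayer = w.dataLayer ?? [];
  w.dataLayer.push({
    event: "viewport_measured", // assumed event name
    screenResolution: `${screen.width}x${screen.height}`, // the monitor
    viewportSize: `${window.innerWidth}x${window.innerHeight}`, // the window
  });
}

reportViewportVsScreen();
```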

Consideration #8: Make sure your tracking is set up correctly

Mobile can present a few unique challenges for tracking your results through your analytics and testing platforms. So make sure your tracking is set up correctly before you launch the test.

For example, if you’re using a tag manager and tagging elements through it based on CSS properties, you could have an issue when the page structure shifts at different breakpoints.

“If you’re tagging a button based on its page location at the bottom right, but then it gets relocated on mobile, make sure you’re accounting for that,” Todd advised.
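Here is a minimal sketch of one defensive approach to the problem Todd describes: binding the click tag to a stable data attribute instead of a layout-dependent selector, so the tag keeps firing when responsive CSS relocates the button. The attribute name and event shape are hypothetical assumptions.

```typescript
// Minimal sketch: tag the button by a stable attribute rather than a
// position-dependent selector (e.g., "footer .right .btn"), so the tag
// survives responsive relocation. Names below are assumptions.
document.addEventListener("click", (e) => {
  const el = e.target as HTMLElement | null;
  const cta = el?.closest("[data-track-id]");
  if (!cta) return;

  const w = window as typeof window & { dataLayer?: object[] };
  w.dataLayer = w.dataLayer ?? [];
  w.dataLayer.push({
    event: "cta_click",                       // assumed event name
    ctaId: cta.getAttribute("data-track-id"), // stable across breakpoints
  });
});
```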

Also, understand how the data is being communicated. “Because Google Tag Manager and Google Optimize are asynchronous, you can get mismatched data if you don’t follow the best practices,” Todd said.

Todd provided a hard-coded page view as an example. “Something to watch for when doing redirect testing … a tracking pixel could fire before the page loads and does the split. If you see in your data that the control has twice as many hits as the treatment, there is a high probability you’ve implemented something in a way that didn’t account for the way asynchronous tags work. This is really common,” Todd said.

“If you know that’s going to happen, you can segment the data to clean it,” he said.
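Here is a minimal sketch of one way to avoid the double count Todd warns about in a redirect test: fire the pageview only after the split decision, so control pageviews aren’t recorded once before the redirect and again on the treatment. The split and tracking functions are illustrative placeholders, not a specific platform’s API.

```typescript
// Minimal sketch: in a redirect (split URL) test, track the pageview
// only after deciding NOT to redirect. assignVariant(), firePageview()
// and the treatment URL are illustrative placeholders.
function assignVariant(): "control" | "treatment" {
  // Placeholder 50/50 split; a real tool would persist the assignment.
  return Math.random() < 0.5 ? "control" : "treatment";
}

function firePageview(variant: string): void {
  const w = window as typeof window & { dataLayer?: object[] };
  w.dataLayer = w.dataLayer ?? [];
  w.dataLayer.push({ event: "pageview", variant }); // assumed event shape
}

const variant = assignVariant();
if (variant === "treatment") {
  // Redirect BEFORE any tracking fires, so this view isn't counted twice.
  window.location.replace("https://example.com/treatment"); // placeholder URL
} else {
  firePageview(variant);
}
```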

Related Resources

Free Mobile Conversion Micro Class from MECLABS Institute

Mobile Marketing: What a 34% increase in conversion rate can teach you about optimizing for video

Mobile Marketing: Optimizing the evolving landscape of mobile email marketing

Mobile Conversion Optimization Research Services: Get better mobile results from deeper customer understanding


Mobile A/B Testing: Quality assurance checklist

Real-world behavioral tests are an effective way to better understand your customers and optimize your conversion rates. But for this testing to be effective, you must make sure it is accurately measuring customer behavior.

One reason these A/B split tests fail to give a correct representation of customer behavior is because of validity threats. This series of checklists is designed to help you overcome Instrumentation Effect. It is based on actual processes used by MECLABS Institute’s designers, developers and analysts when conducting our research services to help companies improve marketing performance.

MECLABS defines Instrumentation Effect as “the effect on the test variable caused by a variable external to an experiment, which is associated with a change in the measurement instrument.” In other words, the results you see do not come from the change you made (say, a different headline or layout), but rather, because some of your technology has affected the results (slowed load time, miscounted analytics, etc.)

Avoiding Instrumentation Effect is even more challenging for any test that will have traffic from mobile devices (which today is almost every test). So, to help you avoid the Instrumentation Effect validity threat, we’re providing the following QA checklist. This is not meant for you to follow verbatim, but to serve as a good jumping-off point to make sure your mobile tests are technically sound. For example, browsers other than the ones listed here may be more important for your site’s mobile functionality. Maybe your landing page doesn’t have a form, or you may use different testing tools, etc.

Of course, effective mobile tests require much more than thorough QA — you also must know what to test to improve results. If you’re looking for ideas for your tests that include mobile traffic, you can register for the free Mobile Conversion micro course from MECLABS Institute, based on 25 years of conversion optimization research (with an increasing emphasis on mobile traffic in the last half decade or so).

There’s a lot of information here, and different people will want to save this checklist in different ways. You can scroll through the article you’re on to see the key steps of the checklist. Or use the form on this page to download a PDF of the checklist.

 

Roles Defined

The following checklists are broken out by teams serving specific roles in the overall mobile development and A/B testing process. The checklists are designed to help cross-functional teams, with the benefit being that multiple people in multiple roles bring their own viewpoint and expertise to the project and evaluate whether the mobile landing page and A/B testing are functioning properly before launch and once it is live.

For this reason, if you have people serving multiple roles (or you’re a solopreneur and do all the work yourself), these checklists may be repetitive for you.

Here is a quick look at each team’s overall function in the mobile landing page testing process, along with the unique value it brings to QA:

Dev Team – These are the people who build your software and websites, which could include both front-end development and back-end development. They use web development skills to create websites, landing pages and web applications.

For many companies, quality assurance (QA) would fall in this department as well, with the QA team completing technical and web testing. While a technical QA person is an important member of the team for ensuring you run valid mobile tests, we have included other functional areas in this QA checklist because different viewpoints from different departments will help decrease the likelihood of error. Each department has its own unique expertise and is more likely to notice specific types of errors.

Value in QA: The developers and technological people are most likely to notice any errors in the code or scripts and make sure that the code is compatible with all necessary devices.

 

Project Team – Depending on the size of the organization, this may be a dedicated project management team, a single IT or business project manager, or a passionate marketing manager keeping track of and pushing to get everything done.

It is the person or team in your organization that coordinates work and manages timelines across multiple teams, ensures project work is progressing as planned and that project objectives are being met.

Value in QA: In addition to making sure the QA doesn’t take the project off track and threaten the launch dates of the mobile landing page test, the project team are the people most likely to notice when business requirements are not being met.

 

Data Team – The data scientist(s), analyst(s) or statistician(s) helped establish the measure of success (KPI – key performance indicator) and will monitor the results for the test. They will segment and gather the data in the analytics platform and assemble the report explaining the test results after they have been analyzed and interpreted.

Value in QA: They are the people most likely to notice any tracking issues from the mobile landing page not reporting events and results correctly to the analytics platform.

 

Design Team – The designer(s) created the look and feel of the control and treatment(s), producing the comps the developers build from.

Value in QA: They are the people most likely to notice when the built pages deviate from the approved designs in layout, spacing, imagery or typography.

 

DEV QA CHECKLIST

Pre-launch, both initial QA and on Production where applicable

Visual Inspection and Conformity to Design of Page Details

  • Verify latest copy in place
  • Preliminary checks in a “reference browser” to verify design matches latest comp for desktop/tablet/mobile layouts
  • Use the Pixel Perfect Overlay function in Firefox Developer Tools – The purpose of this tool is to take an image that was provided by the designer and lay it over the website that was produced by the developer. The image is a transparency which you can use to point out any differences or missing elements between the design images and the webpage.
  • Displaying of images – Make sure that all images are displaying, aligned and up to spec with the design.
  • Forms, List and Input Elements (Radio Buttons, Click Boxes) – Radio buttons (Dots and Circles) and Checkboxes (Checks and Boxes) are to be tested thoroughly as they may trigger secondary actions. For example, selecting a “Pay by Mail” radio button will sometimes automatically hide the credit card form.
  • Margins and Borders – Often a portion of the body, a customer review, an image or even the whole page is surrounded by a border. Inspect these borders to ensure there are no breaks and that they’re prominent enough for the user to distinguish each bordered section.
  • Copy accuracy – Consistency between typography, capitalization, punctuation, quotations, hyphens, dashes, etc. The copy noted in the webpage should match any documents provided pertaining to copy and text unless otherwise noted or verified by the project manager/project sponsor.
  • Font styling (Font Color, Format, Style and Size) – To ensure consistency with design, make sure to apply the basic rules of hierarchy for headers across different text modules such as titles, headers, body paragraphs and legal copies.
  • Link(s) (Color, Underline, Clickable)

Web Page Functionality: Verify all page functionality works as expected (ensure treatment changes didn’t impact page functionality)

  • Top navigation functionality – Top menu, side menu, breadcrumb, anchor(s)
  • Links and redirects are correct
  • Media – Video, images, slideshow, PDF, audio
  • Form input elements – drop down, text fields, check and radio module, fancy/modal box
  • Form validation – Error notification, client-side errors, server-side errors, action upon form completion (submission confirmation), SQL injection
  • Full Page Functionality – Search function, load time, JavaScript errors
  • W3C Validation – CSS Validator (http://jigsaw.w3.org/css-validator/), markup validator (http://validator.w3.org/)
  • Verify split functional per targeting requirements
  • Verify key conversion scenario (e.g., complete a test order, send test email from email system, etc.) – If not already clear, QA lead should verify with project team how test orders should be placed
  • Where possible, visit the page as a user would to ensure targeting parameters are working properly (e.g., use URL from the PPC ad or email, search result, etc.)

Tracking Metrics

  • Verify tracking metrics are firing in browser, and metric names match requirements – Check debugger to see firing as expected
  • Verify reporting within the test/analytics tool where possible – Success metrics and click tracking in Adobe Target, Google Content Experiments, Google Analytics, Optimizely, Floodlight analytics, email data collection, etc.
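As one way to perform the checks above, here is a minimal console sketch assuming a Google Tag Manager-style dataLayer; the expected metric names are placeholders for whatever your tracking document specifies.

```typescript
// Minimal sketch: paste into the browser console to flag expected
// dataLayer events that never fired. The names in `expected` are
// placeholders for your own tracking document's metric names.
const expected = ["pageview", "cta_click", "form_submit"]; // placeholders

const dl = ((window as any).dataLayer ?? []) as Array<Record<string, unknown>>;
const fired = new Set(
  dl.map((entry) => entry.event).filter((e): e is string => typeof e === "string")
);

for (const name of expected) {
  console.log(`${name}: ${fired.has(name) ? "fired" : "MISSING"}`);
}
```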

Back End Admin Panel

Notify Project Team and Data Team it is ready for their QA (via email preferably) – indicate what the reference browser is. After Project Team initial review, complete full cross-browser/cross-device checks using the “reference browser” as a guide:

Browser Functionality – Windows

  • Internet Explorer 7 (IE7)
  • IE8
  • IE9
  • IE10
  • IE11
  • Modern Firefox
  • Modern Chrome

Browser Functionality – macOS

  • Modern Safari
  • Modern Chrome
  • Modern Firefox

Mobile Functionality – Tablet

  • Android
  • Windows
  • iOS

Mobile Functionality – Mobile phone

  • Android
  • Windows
  • iOS

Post-launch, after the test is live to the public:

  • Notify Project Team & Data Team the test is live and ready for post-launch review (via email preferably)
  • Verify split is open to public
  • Verify split functional per targeting requirements
  • Where possible, visit the page as a user would to ensure targeting parameters are working properly (e.g., use URL from the PPC ad or email, search result, etc.)
  • Test invalid credit cards on a production environment
PROJECT TEAM QA CHECKLIST:

Pre-Launch and Post-Launch QA:

  • Check that copy and design are correct for control and treatments in the “reference browser”:
  • Ensure all added copy/design elements are there and correct
  • Ensure all removed copy/design elements are gone
  • Ensure all changed copy/design elements are correct
  • Ensure control experience is as intended for the test
  • Check page functionality:
  • Ensure all added/changed functionality is working as expected
  • Ensure all standard/business-as-usual (BAU) functionality is working as expected:
  • Go through the typical visitor path (even beyond the testing page/location) and ensure everything functions as expected
  • Make sure links go where they’re supposed to, fields work as expected, and data passes as expected from page to page.
  • Check across multiple browser sizes (desktop, tablet, mobile)
  • If site is responsive, scale the browser from full screen down to mobile and check to ensure all the page breaks look correct
  • Where possible, visit the page the way a typical visitor would hit the page (e.g., through PPC Ad, organic search result, specific link/button on site, through email)
DATA QA CHECKLIST:

Pre-Launch QA Checklist (complete on Staging and Production as applicable):

  • Verify all metrics listed in the experiment design are present in analytics portal
  • Verify all new tracking metrics’ names match metrics’ names from tracking document
  • Verify all metrics are present in control and treatment(s) (where applicable)
  • Verify conversion(s) are present in control and treatment(s) (where possible)
  • Verify any metrics tracked in a secondary analytics portal (where applicable)
  • Immediately communicate any issues that arise to the dev lead and project team
  • Notify dev lead and project team when Data QA is complete (e-mail preferably)

Post-Launch QA / First Data Pull:

  • Ensure all metrics for control and treatment(s) are receiving traffic
  • Ensure traffic levels are in line with the pre-test levels used for test duration estimation
  • Update Test Duration Estimation if necessary
  • Immediately communicate any issues that arise to the project team
  • Notify dev lead and project team when first data pull is complete (e-mail preferably)
DESIGN QA CHECKLIST:

Pre-Launch Review:

  • Verify intended desktop functionality (if applicable)
  • Accordions
  • Error states
  • Fixed Elements (nav, growler, etc.)
  • Form fields
  • Hover states – desktop only
  • Links
  • Modals
  • Sliders
  • Verify intended tablet functionality (if applicable)
  • Accordions
  • Error states
  • Fixed Elements (nav, growler, etc.)
  • Form fields
  • Gestures – touch device only
  • Links
  • Modals
  • Responsive navigation
  • Sliders
  • Verify intended mobile functionality (if applicable)
  • Accordions
  • Error states
  • Fixed Elements (nav, growler, etc.)
  • Form fields
  • Gestures – touch device only
  • Links
  • Modals
  • Responsive navigation
  • Sliders
  • Verify layout, spacing and flow of elements
  • Padding/Margin
  • “In-between” breakpoint layouts (as these are not visible in the comps)
  • Any “of note” screen sizes that may affect test goals (For example: small laptop 1366×768 pixels, 620px of height visibility)
  • Verify imagery accuracy, sizing and placement
  • Images (Usually slices Design provided to Dev)
  • Icons (Could be image, svg or font)
  • Verify Typography
  • Color
  • Font-size
  • Font-weight
  • Font-family
  • Line-height

Qualifying questions, if discrepancies are found:

  • Do brand standards require extremely strict adherence?
  • Does it impact the hierarchy of the page information?
  • Does it appear broken/less credible?
  • Immediately communicate any issues that arise to the dev lead and project team
  • Notify dev lead and project team when design QA is complete (e-mail preferably)

To download a free PDF of this checklist, simply complete the form below.


___________________________________________________________________________________

Increase Your Mobile Conversion Rates: New micro course 

Hopefully, this Mobile QA Checklist helps your team successfully launch tests that have mobile traffic. But you still may be left with the question — what should I test to increase conversion?

MECLABS Institute has created five micro classes (each under 12 minutes) based on 25 years of research to help you maximize the impact of your messages in a mobile environment.

In the complimentary Mobile Conversion Micro Course, you will learn:

  • The 4 most important elements to consider when optimizing mobile messaging
  • How a large telecom company increased subscriptions in a mobile cart by 16%
  • How the same change in desktop and mobile environments had opposing effects on conversion

Register Now for Free


Most Popular MarketingExperiments Articles of 2018

Let’s get right into it. Here are your marketing peers’ favorite articles from 2018 …

Heuristic Cheat Sheet: 10 methods for improving your marketing

Marketing — far more so than other business disciplines — seems to be driven by gut. Or the individual star performer.

Marketing embraces a far less methodological approach than, say, accounting or manufacturing.

In this article, we provide a quick look at heuristics (aka methodology-based thought tools) created by MECLABS Institute (parent research organization of MarketingExperiments) to help marketing teams consistently deliver at a high level.

In this article, you’ll find heuristics to help you increase conversion, create effective email messaging, launch projects in the most effective order and more.

READ THE ARTICLE

 

Conversion Lifts in 10 Words or Less: Real-world experiments where minor copy changes produced major conversion lifts

Sometimes it can seem like a massive lift to really move the needle. A new technology implementation. Investing in a vast campaign to drive more interest.

But marketing, at its core, is communication. Get that right and you can drive a significant difference in your marketing results.

This 13-minute video examines five experiments where small copywriting changes had a large impact.

WATCH THE VIDEO

 

Mental Cost: Your customers pay more than just money

The monetary price of a product isn’t the only cost for customers. Understanding (and optimizing for) non-monetary costs can lead to significant conversion gains.

What costs are you inadvertently thrusting on your customers? And how can you reduce them?

READ THE ARTICLE

 

Not all of the most impactful articles from 2018 were published this year. Here are some evergreen topics that were especially popular with your peers …

A/B Testing: Example of a good hypothesis

Hypotheses should be an evergreen topic for marketers engaged in A/B testing. If you’re unfamiliar with hypothesis-based testing, this article offers a simple process to start shaping your thinking.

Raphael Paulin-Daigle advises in his blog article 41 Detailed A/B Testing Strategies to Skyrocket Your Testing Skills, “A trick to formulate a good hypothesis is to follow MarketingExperiment’s formula.”

Read this article to learn what a hypothesis is, and a simple method for formulating a good hypothesis.

READ THE ARTICLE

(Editor’s Note: Our hypothesis methodology has advanced further since this article was published in 2013. You can find a more advanced explanation of hypothesis methodology in The Hypothesis and the Modern-Day Marketer as well as a discussion of hypothesis-driven testing in action in Don’t Test Marketing Ideas, Test Customer Hypotheses.)

 

Interpreting Results: Absolute difference versus relative difference

“NASA lost its $125-million Mars Climate Orbiter because spacecraft engineers failed to convert from English to metric measurements when exchanging vital data before the craft was launched,” Robert Lee Holtz reported in the Los Angeles Times.

Numbers are crucial for A/B Testing and CRO as well. So make sure you understand the vital distinction between absolute difference and relative difference. Much like English and metric measurements, they measure the same thing but in a different way.

I have interviewed marketers before who bragged about a 3% conversion increase from a test, and I mentioned that while I was happy for them, it didn’t seem huge. Only then did they explain that their company’s conversion rate had been 2% and they increased it to 5%.

While that’s a 3 percentage-point absolute difference, it’s a 150% relative difference. The relative difference communicates the true impact of the test, and every business leader who learns of it will better understand the impact when the 150% number is used instead of the 3% number.
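A quick sketch of the arithmetic, using the numbers above:

```typescript
// Absolute vs. relative difference, using the 2% -> 5% example above.
const control = 0.02;   // original conversion rate (2%)
const treatment = 0.05; // new conversion rate (5%)

const absoluteDiff = treatment - control;             // 0.03 = 3 points
const relativeDiff = (treatment - control) / control; // 1.5  = 150%

console.log(`Absolute: ${(absoluteDiff * 100).toFixed(0)} percentage points`);
console.log(`Relative: ${(relativeDiff * 100).toFixed(0)}%`);
```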

READ THE ARTICLE

 

6 Good (and 2 Bad) B2B and B2C Value Proposition Examples

What does a good value proposition look like? It’s a question we get asked often, and the article that answers that question was popular among marketers.

Check out these B2B and B2C examples. We included some bad examples for balance as well.

READ THE ARTICLE

 

Customer Value: The 4 essential levels of value propositions

Some marketers think that the only value proposition that matters is the overall unique value proposition for the company. This can be disheartening because it is difficult for the average marketer to have a significant impact on that value prop (especially in a very large company).

In this article, we explore different levels of value proposition, including ones that even the more junior marketer impacts on an almost daily basis. At work, and even in life.

READ THE ARTICLE

 

Related Resources

Here is some more content that was popular with the MarketingExperiments audience this year …

Conversion Marketing Methodology

Powerful Value Propositions: How to optimize this critical marketing element – and lift your results

Research Archive


Designing Hypotheses that Win: A four-step framework for gaining customer wisdom and generating marketing results

There are smart marketers everywhere testing many smart ideas — and bad ones. The problem with ideas is that they are unreliable and unpredictable. Knowing how to test is only half of the equation. As marketing tools and technology evolve rapidly, offering new, more powerful ways to measure consumer behavior and conduct more sophisticated testing, it is becoming more important than ever to have a reliable system for deciding what to test.

Without a guiding framework, we are left to draw ideas almost arbitrarily from competitors, brainstorms, colleagues, books and any other sources without truly understanding what makes them good, bad or successful. Ideas are unpredictable because, regardless of how good they seem, until you can articulate a forceful “because” statement for why they will work, they are nothing more than a guess, albeit an educated one, and most often not one educated by the customer.

More than 20 years of in-depth research, testing and optimization, and over 20,000 sales path experiments, have taught us that there is an answer to this problem, and that answer involves rethinking how we view testing and optimization. This short article touches on the keynote message MECLABS Institute’s founder Flint McGlaughlin will give at the upcoming 2018 A/B Testing Summit virtual conference on December 12-13. You can register for free at the link above.

Marketers don’t need better ideas; they need a better understanding of their customer.

So if understanding your customer is the key to efficient and effective optimization and ideas aren’t reliable or predictable, what then? We begin with the process of intensively analyzing existing data, metrics, reports and research to construct our best Customer Theory, which is the articulation of our understanding of our customer and their behavior toward our offer.

Then, as we identify problems/focus areas for higher performance in our funnel, we transform our ideas for solving them into a hypothesis containing four key parts:

  1. If [we achieve this in the mind of the consumer]
  2. By [adding, subtracting or changing these elements]
  3. Then [this result will occur]
  4. Because [that will confirm or deny this belief/hypothesis about the customer]

By transforming ideas into hypotheses, we orient our test to learn about our customer rather than merely trying out an idea. The hypothesis grounds our thinking in the psychology of the customer by providing a framework that forces the right questions into the equation of what to test. “The goal of a test is not to get a lift, but to get a learning,” says Flint McGlaughlin, “and learning compounds over time.”
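For teams that log experiments in code or a shared backlog, here is a minimal sketch of encoding the four-part hypothesis as a data structure. The type and field names are hypothetical, not a MECLABS artifact; the example values paraphrase one of the hypotheses below.

```typescript
// Minimal sketch: the four-part If / By / Then / Because hypothesis
// as a typed record. Field names are illustrative assumptions.
interface Hypothesis {
  ifGoal: string;        // what we achieve in the mind of the consumer
  byChange: string;      // elements added, subtracted or changed
  thenResult: string;    // the measurable result we expect
  becauseBelief: string; // the customer belief confirmed or denied
}

const example: Hypothesis = {
  ifGoal: "we reinforce the clarity of the value proposition",
  byChange: "using more relevant imagery to draw attention to key information",
  thenResult: "clickthrough and ultimately conversion will increase",
  becauseBelief: "customers want to quickly understand why we're different",
};
```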

Let’s look at some examples of what to avoid in your testing, along with good examples of hypotheses.

Not this:

“Let’s advertise our top products in our rotating banner — that’s what Competitor X is doing.”

“We need more attractive imagery … Let’s place a big, powerful hero image as our banner. Everyone is doing it.”

“We should go minimalist … It’s modern, sleek and sexy, and customers love it. It’ll be good for our brand. Less is more.”

But this:

 “If we emphasize and sample the diversity of our product line by grouping our top products from various categories in a slowly rotating banner, we will increase clickthrough and engagement from the homepage because customers want to understand the range of what we have to offer (versus some other value, e.g., quality, style, efficacy, affordability, etc.).”

“If we reinforce the clarity of the value proposition by using more relevant imagery to draw attention to the most important information, we will increase clickthrough and ultimately conversion because the customer wants to quickly understand why we’re different in such a competitive space.”

“If we better emphasize the primary message by reducing unnecessary, less-relevant page elements and changing to a simpler, clearer, more readable design, we will increase clickthrough and engagement on the homepage because customers are currently overwhelmed by too much friction on this page.”

The golden rule of optimization is “Specificity converts.” The more specific/relevant you can be to the individual wants and needs of your ideal customer, the higher the probability of conversion. To be as specific and relevant as possible to a consumer, we use testing not merely as an idea trial hoping for positive results, but as a mechanism to fill in the gaps of our understanding that existing data can’t answer. Our understanding of the customer is what powers the efficiency and efficacy of our testing.

In Summary …

Smart ideas only work sometimes, but a framework based on understanding your customer will yield more consistent, more rewarding results that only improve over time. The first key to rethinking your approach to optimization is to construct a robust customer theory articulating your best understanding of your customer. From this, you can transform your ideas into hypotheses that will begin producing invaluable insights to lay the groundwork for how you communicate with your customer.

Looking for ideas to inform your hypotheses? We have created and compiled a 60-page guide that contains 21 crafted tools and concepts, and outlines the unique methodology we have used and tested with our partners for 20+ years. You can download the guide for free here: A Model of Your Customer’s Mind

Related Resources

A/B Testing Summit free online conference – Reserve your seat to see Flint McGlaughlin’s keynote Design Hypotheses that Win: A 4-step framework for gaining customer wisdom and generating significant results

The Hypothesis and the Modern-Day Marketer

Customer Theory: How we learned from a previous test to drive a 40% increase in CTR


Low-Hanging Fruit for the Holiday Season: Four simple marketing changes with significant impact

Testing and optimization can be difficult — from the challenges of deciding what to test amidst a dizzying array of priorities and ideas, to the time-intensive manual labor of implementing sophisticated back-end changes. With the year coming to a close and big marketing plans in the making, it’s important before making big changes and commitments to be sure that you have the right foundation to maximize the return from your grander strategy. That’s why we created this list of simple changes that can produce significant results and set your marketing strategy for the next stage on the right foundation.

More than 20 years and over 20,000 sales paths have taught us that one of the foundational principles in which all marketing should be grounded is this: marketers must always be at war with the temptation to prioritize company logic over customer logic. Over time, we grow so familiar with our product, our process, our brand and our own objectives that we risk severe and expensive assumptions about what the customer wants and needs to know to make a purchase decision.

The goal of optimization is not to make changes to a page — but to make changes in the mind of the customer. Changing even a few words can alter the conclusion formed by the customer depending upon their levels of motivation and expectation. This means that even minor changes to our message can produce radical lifts in performance, as we have seen in thousands of experiments time and time again. So here are some simple, easy-to-implement ways you can shift from communicating company logic to customer logic and optimize the thought-sequence of your offer:

  1. Headlines — From hype to conversation

“Headlines are first impressions, pick-up lines. Use buzz/power-words, use numbers, make it value-first, make a promise, etc.,” are all ideas espoused often by some successful marketers. While some of these might be good ideas of how you could write a headline, they often leave us asking, “Why should I use this tactic over another?” When and how we deploy our tactics is determined by our objective, and all communication should be grounded in an understanding of your audience and a rationale for why.

Any idea might be considered a good or bad one until you have a purpose against which to evaluate it. Years of incrementally testing and refining research questions have demonstrated that a headline has at least two fundamental, primary purposes: 1) to capture attention, and then 2) to convert it to interest. There are dozens of ways an effective headline could be crafted, but ultimately, it should be measured by how much attention it captures (from the right people) and how effectively that converts to a committed interest.

  2. Copy — From marketer-value to customer-value

While variables like long copy versus short copy, hero imagery, or ideal eye-path structure and prioritization of value are questions that can only be truly answered through testing and understanding, one universal mistake often made is failing to translate generalized claims and specific features about our product or service into clear benefits to the consumer. The customer is never simply choosing which product to buy, but also which product from whom, how and when. It is critical to understand that your offer and the consequent micro-decisions required of the customer are always perceived in the context of their competing alternatives.

A simple but fun and effective question that MECLABS founder Flint McGlaughlin says should be applied to every marketing claim is, “So what?” That is to say, the customer is always asking, and we must always be answering the question: Why should I do what you want me to do rather than anything else right now?

“So what that you’re an industry leader?”

“So what that your product has these specifications?”

“So what that you offer a personalized solution, customer service or integrated functionality?”

On any given website, customers often expect to find words like “most,” “best,” “fastest,” “trusted,” “leader,” “all” and “customer-first.” Qualifying claims like this carry no measurable weight and, ironically, set a precedent of distrust unless somehow validated. Customers want to believe you, so you must give them reasons by clarifying your qualifying claims with measurable evidence. Quantify and specify wherever possible and appropriate so that your customers have no need to question the credibility of your claims, and they will trust you when you make others.

  3. Images — From irrelevant art to relevant messaging

Images are not only highly valuable real estate but also one of the marketer’s most effective tools for guiding the eye path. Yet so often, images are chosen based on personal opinion, the design department’s decision, how they look and feel on the page, their color scheme, their cleverness or, worse, simply because an image is supposed to be there. Images, like each and every element in your marketing funnel, are part of and should contribute to the overall value proposition of your organization/solution.

When used properly, images are not merely decorative accents to liven a webpage’s personality; they should illustrate or support the core marketing message, and therefore be measured primarily by relevance and clarity. Ultimately, your core message (your value proposition) should be supported 1) Continuously, and 2) Congruently.

Continuity – The Continuity principle posits that your value proposition should be stated or supported continuously throughout each step of your sales process.

Congruence – The Congruence Principle posits that each element of your page or collateral should either state or support your value proposition (this is particularly relevant for imagery).

  4. Objectives — From multiple options to the primary focus

“What is the goal of this page or email?” It’s a question we’ve asked countless times when working with marketers and organizations, and we’ve found surprisingly often that the goal of the page is either unclear or competing with numerous goals other than its primary purpose. Ideally, each element of your page should move the target customer toward the “macro-yes”: conversion. Each distraction we place in the customer’s path risks leading them into tangential and unsupervised thinking rather than a controlled thought sequence toward the objective.

The objective of the page is the benchmark against which we measure the relevance and efficacy of all the supporting elements. Avoiding things like evenly weighted calls-to-action, distracting images, competing ads and irrelevant page elements streamlines the customer’s path toward the objective. Clearly defining the action you want the customer to take, and stripping away unnecessary elements to organize around the objective, can powerfully impact the psychology of the consumer.

Together, these subtle shifts in communication can produce outstanding lifts when executed well and set your messaging on the right foundation. We hope that you’ll find the same amazing results from becoming more customer-oriented that we have seen from testing these core principles time and time again.

In the meantime, Happy Holidays from MarketingExperiments!

Related Resources

Design Hypotheses that Win: A 4-Step Framework for Gaining Customer Wisdom and Generating Significant Results (register for the free A/B Testing Summit online conference and hear Flint McGlaughlin’s keynote session)

Ecommerce: 6 takeaways from a 42-day analysis of major retailers’ holiday marketing

Email Marketing: Last-minute holiday deals preview wins with customer-centric approach

Increase Mobile Conversion Rates (free micro course from MECLABS Institute)


Green Marketing: The psychological impact of an eco-conscious marketing campaign

The following research was first published in the MECLABS Quarterly Research Digest, July 2014.

Almost every industry has seen a shift toward “green technology” or “eco-friendly materials.” While this is certainly a positive step for the earth, it can rightly be questioned whether the marketing that touts this particular aspect of the business is really effective.

Marketing offices across the globe face some very real questions:

  • Does highlighting your green practices actually cause more people to buy from you?
  • Does it have any impact at all?
  • Does it, much to our shock and dismay, temper conversion?

When we find an issue like this, we are inclined to run a test rather than trust our marketing intuition.

Experiment: Does green marketing impact conversion?

The Research Partner for Test Protocol (TP) 11009 is a furniture company wanting to increase sales of its eco-friendly mattresses. Our key tracking metric was simple: purchases. Our research question was this: Which landing page would create more mattress sales, A or B?

As you can see in Figure 1.1, the pages were identical save for one key aspect: Version B included an extra section that Version A left out. In this section, we went into more detail about the green aspects of the mattress. It should be noted, however, that both pages included the “GreenGuard Gold Certification Seal,” so it is not as if Version A is devoid of the green marketing angle. Version B simply spelled it out more clearly.

Figure 1.1

Did the change make a difference? Yes, Version B outperformed Version A by 46%. Remember, this lift is in purchases, not simply clickthrough.

 

 

We have established that green marketing can be effective. But in what cases? How can we put that knowledge to good use and navigate the waters of green marketing with a repeatable methodology?

Four ways to create effective green marketing campaigns

In the test above, green marketing made a clear and significant difference. We made four observations as to why this particular green marketing strategy succeeded. You can use them as guides toward your own green marketing success.

Key Observation #1. The value was tangible. The value created by the copy was directly connected to the customer experience.

In the case of the GreenGuard Certified mattress, the value of being green was not solely based on its being eco-friendly. It also was customer-friendly. The green nature of the manufacturing process directly affected and increased the quality of the product. The copy stated that the mattress “meets the world’s most rigorous, third-party chemical emissions standards with strict low emission levels for over 360 volatile organic compounds.” Not only is it good for the earth, but it is also good for your toddler and your grandmother.

This tangible benefit to the customer experience is not always present in green marketing. In Figure 2.1, you see three examples of green marketing that fail to leverage a tangible benefit to the customer:

Figure 2.1

 

  1. When a hotel encourages you to reuse your towels to “save water,” it does nothing to improve the value of your experience with them. If anything, it may come off as an attempt to guilt the guest into reducing the hotel’s water bill.
  2. GE’s “Ecomagination” campaign is devoid of a tangible benefit to the customer. How does GE being green make my microwave better for me? The campaign doesn’t offer an answer.
  3. Finally, “100% recycled toilet tissue” not only fails to offer a tangible benefit to the customer, it also implies that the customer might not receive the same quality experience they would have with a non-green option.

For green marketing to operate optimally, you must be able to point out a tangible benefit to the customer, in addition to the earth-friendly nature of the product.

Key Observation #2. The issue was relevant. The issue addressed by the copy dealt with a key concern already present in the mind of the prospect.

For people in the market for a new mattress, especially those with young children, sensitive skin or allergies, there are well-founded concerns regarding the chemicals and other materials that go into the production of the mattress. This concern already exists in the mind of the customer. It does not need to be raised or hyped by the marketer. Again, not all green marketing campaigns address relevant concerns.

Figure 3.1

 

  1. People are more concerned with safety, comfort and affordability when traveling. Whether the airline is green or not is not generally a concern.
  2. When choosing a sunscreen, most people don’t go in with aspirations of choosing a green option. Their top concern is sun protection, and biodegradable sunscreen doesn’t appear to meet that need as well as another option can.
  3. Again, “biodegradable” is not a common concern brought to the table by people buying pens.

All of these, while potentially noble causes, do not directly connect to a relevant problem the customer experiences. On the other hand, the GreenGuard Certified mattress immediately addressed a pressing concern held by the customer. It is “perfect for those with skin sensitivity or allergies.”

Key Observation #3. The claim was unique. The claim of exclusivity in the copy intensified the “only” factor of the product itself.

Just like any other benefit, green marketing benefits gain or lose value based on how many others can make the claim. If a web hosting platform touts itself as green or eco-friendly, the claim doesn’t hold as much force because the industry is saturated with green options (Figure 4.1). The same is true of BPA-free water bottles (Figure 4.2).

 

Figure 4.1

 

Figure 4.2

 

However, in the case of our Research Partner, not many of its competitors could make the “GreenGuard Gold Certification” claim (Figure 4.3). This added exclusivity — not to mention that Gold status implied they achieved the highest level of certification. Uniqueness drives value up, as long as the benefit in question is actually in demand.

Figure 4.3

 

Key Observation #4. The evidence was believable. The evidence provided in the copy lent instant credibility to any of the claims.

After the initial wave of green marketing techniques and practices took the industry by storm, there was a very justified backlash against those simply trying to cash in on the trend. Lawsuits were filed against marketers exaggerating their green-ness, including the likes of SC Johnson, Fiji Water, Hyundai and others. As a result, consumers became wary of green claims and now must be persuaded by believable data.

In the winning design above, we did this in three ways:

  1. Verification: “100% Certified by GreenGuard Gold”
  2. Specification: “Our mattresses get reviewed quarterly to maintain this seal of approval. Last certification: January 4th, 2014.”
  3. Quantification: “Low emission levels for over 360 volatile organic compounds.”

The ability to prove that your green practices or eco-friendly products are truly as earth-friendly — and tangibly beneficial — as you claim is a crucial component in creating a green marketing angle that produces a significant increase in conversion.

How to approach your green marketing challenges

We have seen that green marketing can work. Still, this is not a recommendation to throw green marketing language into everything you put out. Green marketing is not a cure-all.

However, given the right circumstances, the right green positioning can certainly achieve lifts, and we want you to be able to capitalize on that. Therefore, we have created this checklist to help you analyze and improve your green marketing tactics.

☐  Is your green marketing tangible?

Does the nature of the green claims actually make the end product more appealing?

☐  Is your green marketing relevant?

Does the fact that your offer is green solve an important problem in the mind of the customer?

☐  Is your green marketing unique?

Can anyone else in your vertical make similar claims? If so, how do your claims stand apart?

☐  Is your green marketing believable?

Are your claims actually true? If so, how can you quantify, verify or specify your particular claims?

Of course, this checklist is only a starting point. Testing your results is the only true way to discover if your new green techniques are truly improving conversion.

Related Resources

Learn how Research Partnerships work, and how you can join MECLABS in discovering what really works in marketing

Read this MarketingExperiments Blog post to learn how to craft the right research question

Sometimes we only have intangible benefits to market. In this interview, Tim Kachuriak, Founder and Chief Innovation & Optimization Officer, NextAfter, explains how to get your customers to say, “heck yes”

One way to be relevant is to better understand your customers through data-driven marketing

Discover three techniques for standing out in a competitive market, including focusing on your “only” factor

Read on for nine elements that help make your marketing claims more believable


Green Marketing: The psychological impact of an eco-conscious marketing campaign

The following research was first published in the MECLABS Quarterly Research Digest, July 2014.

Almost every industry has seen a shift toward “green technology” or “eco-friendly materials.” While this is certainly a positive step for the earth, it can rightly be questioned whether the marketing that touts this particular aspect of the business is really effective.

Marketing offices across the globe face some very real questions:

  • Does highlighting your green practices actually cause more people to buy from you?
  • Does it have any impact at all?
  • Does it, much to our shock and dismay, temper conversion?

When we find an issue like this, we are inclined to run a test rather than trust our marketing intuition.

Experiment: Does green marketing impact conversion?

The Research Partner for Test Protocol (TP) 11009 is a furniture company wanting to increase sales of its eco-friendly mattresses. Our key tracking metric was simple: purchases. Our research question was this: Which landing page would create more mattress sales, A or B?

As you can see in Figure 1.1, the pages were identical save for one key aspect: Version B included an extra section that Version A left out. In this section, we went into more detail about the green aspects of the mattress. It should be noted, however, that both pages included the “GreenGuard Gold Certification Seal,” so it is not as if Version A is devoid of the green marketing angle. Version B simply spelled it out more clearly.

Figure 1.1

Did the change make a difference? Yes, Version B outperformed Version A by 46%. Remember, this lift is in purchases, not simply clickthrough.

 

 

We have established that green marketing can be effective. But in what cases? How can we put that knowledge to good use and navigate the waters of green marketing with a repeatable methodology?

Four ways to create effective green marketing campaigns

In the test above, green marketing made a clear and significant difference. We made four observations as to why this particular green marketing strategy succeeded. You can use them as guides toward your own green marketing success.

Key Observation #1. The value was tangible. The value created by the copy was directly connected to the customer experience.

In the case of the GreenGuard Certified mattress, the value of being green was not solely based on its being eco-friendly. It also was customer-friendly. The green nature of the manufacturing process directly affected and increased the quality of the product. The copy stated that the mattress “meets the world’s most rigorous, third-party chemical emissions standards with strict low emission levels for over 360 volatile organic compounds.” Not only is it good for the earth, but it is also good for your toddler and your grandmother.

This tangible benefit to the customer experience is not always present in green marketing. In Figure 2.1, you see three examples of green marketing that fail to leverage a tangible benefit to the customer:

Figure 2.1

  1. When a hotel encourages you to reuse your towels to “save water,” it does nothing to improve the value of your experience with them. If anything, it may come off as an attempt to guilt the guest into reducing the hotel’s water bill.
  2. GE’s “Ecomagination” campaign is devoid of a tangible benefit to the customer. How does GE being green make my microwave better for me? The campaign doesn’t offer an answer.
  3. Worse still, “100% recycled toilet tissue” not only fails to offer a tangible benefit to the customer, it also implies that the customer might not receive the same quality of experience they would with a non-green option.

For green marketing to operate optimally, you must be able to point out a tangible benefit to the customer, in addition to the earth-friendly nature of the product.

Key Observation #2. The issue was relevant. The issue addressed by the copy dealt with a key concern already present in the mind of the prospect.

For people in the market for a new mattress, especially those with young children, sensitive skin or allergies, there are well-founded concerns regarding the chemicals and other materials that go into the production of the mattress. This concern already exists in the mind of the customer. It does not need to be raised or hyped by the marketer. Again, not all green marketing campaigns address relevant concerns.

Figure 3.1

  1. People are more concerned with safety, comfort and affordability when traveling. Whether the airline is green or not is not generally a concern.
  2. When choosing a sunscreen, most people don’t go in with aspirations of choosing a green option. Their top concern is sun protection, and biodegradable sunscreen doesn’t appear to meet that need as well as another option can.
  3. Again, “biodegradable” is not a common concern brought to the table by people buying pens.

All of these, while potentially noble causes, do not directly connect to a relevant problem the customer experiences. On the other hand, the GreenGuard Certified mattress immediately addressed a pressing concern held by the customer. It is “perfect for those with skin sensitivity or allergies.”

Key Observation #3. The claim was unique. The claim of exclusivity in the copy intensified the “only” factor of the product itself.

Just like any other benefit, green marketing benefits gain or lose value based on how many others can make the claim. If a web hosting platform touts itself as green or eco-friendly, the claim doesn’t hold as much force because the industry is saturated with green options (Figure 4.1). The same is true of BPA-free water bottles (Figure 4.2).

Figure 4.1

Figure 4.2

However, in the case of our Research Partner, not many of its competitors could make the “GreenGuard Gold Certification” claim (Figure 4.3). This added exclusivity, and the Gold status implied the company had achieved the highest level of certification. Uniqueness drives value up, as long as the benefit in question is actually in demand.

Figure 4.3

Key Observation #4. The evidence was believable. The evidence provided in the copy lent instant credibility to any of the claims.

After the initial wave of green marketing techniques and practices took the industry by storm, there was a very justified backlash against those simply trying to cash in on the trend. Lawsuits were filed against marketers exaggerating their green-ness, including the likes of SC Johnson, Fiji Water and Hyundai. As a result, consumers became wary of green claims and now must be persuaded by believable data.

In the winning design above, we did this in three ways:

  1. Verification: “100% Certified by GreenGuard Gold”
  2. Specification: “Our mattresses get reviewed quarterly to maintain this seal of approval. Last certification: January 4th, 2014.”
  3. Quantification: “Low emission levels for over 360 volatile organic compounds.”

The ability to prove that your green practices or eco-friendly products are truly as earth-friendly — and tangibly beneficial — as you claim is a crucial component in creating a green marketing angle that produces a significant increase in conversion.

How to approach your green marketing challenges

We have seen that green marketing can work. Still, this is not a recommendation to throw green marketing language into everything you put out. Green marketing is not a cure-all.

However, given the right circumstances, the right green positioning can certainly achieve lifts, and we want you to be able to capitalize on that. Therefore, we have created this checklist to help you analyze and improve your green marketing tactics.

☐  Is your green marketing tangible?

Does the nature of the green claims actually make the end product more appealing?

☐  Is your green marketing relevant?

Does the fact that your offer is green solve an important problem in the mind of the customer?

☐  Is your green marketing unique?

Can anyone else in your vertical make similar claims? If so, how do your claims stand apart?

☐  Is your green marketing believable?

Are your claims actually true? If so, how can you quantify, verify or specify your particular claims?

Of course, this checklist is only a starting point. Testing is the only reliable way to discover whether your new green techniques are actually improving conversion.

Related Resources

Learn how Research Partnerships work, and how you can join MECLABS in discovering what really works in marketing

Read this MarketingExperiments Blog post to learn how to craft the right research question

Sometimes we only have intangible benefits to market. In this interview, Tim Kachuriak, Founder and Chief Innovation & Optimization Officer, Next After, explains how to get your customers to say, “heck yes”

One way to be relevant is to better understand your customers through data-driven marketing

Discover three techniques for standing out in a competitive market, including focusing on your “only” factor

Read on for nine elements that help make your marketing claims more believable

The post Green Marketing: The psychological impact of an eco-conscious marketing campaign appeared first on MarketingExperiments.

Landing Page Optimization: Free worksheet to help you balance segmentation and resources

All things being equal, the more segmented and targeted your landing page is, the higher your conversion rate will be. Everyone in marketing knows that.

However, there is another part of the equation that is rarely talked about: the more segmented and targeted your landing page is, the more resources (time, focus, development, agency hours, etc.) it will likely take.

Sure, some tools will automate this process by displaying, say, a relevant product recommendation. Others will reduce, but not eliminate, the extra work by pulling in anything from a dynamic keyword change to entirely different content blocks.

But for most companies today, getting more segmented with their landing pages is going to take time or money that could be spent on something else.

So how do you find the balance? When is it worth launching a new landing page to serve a new motivation, and when can you make your current landing pages serve multiple motivations?

We’ve created a free worksheet to help you make that decision and (if necessary) get budget approval on that decision from a business leader or client.

Click Here to Download Your FREE Landing Page Segmentation Worksheet Instantly

(no form to fill out, just click to get your instant download of this PDF-based tool)

This quick-and-easy tool helps you decide when you need a new landing page to target a more specific audience. Here is a quick breakdown of some of the fields you will find in the worksheet, which has fillable form fields to make recording all the info easy for you.

Step 1: What do you know about the customers?

Who are your ideal customers? It’s important to know which customers your product can best serve so you can make the right promise with your marketing.

Possible sources of data to answer this question include transactional data, social media reviews, customer interviews, customer service interactions, and A/B testing. The most popular way to learn about customers is with an internal metric analysis, which is used by 69% of companies.

You’ll want to know demographics like age(s), gender(s), education, income(s), location(s) and other factors that are important to your product.

You’ll also want to know psychographics like what they move toward (their goals), what they move away from (their pains) and what value(s) they derive from your product purchase.

You also want to know which of these customers the page needs to serve. Is it someone who has never visited before and is unaware of the category’s value? A repeat purchaser? And so on. Knowing their previous relationship to the landing page, your company and your products is important to creating high-converting landing pages.

Step 2: Based on what you know, what can you hypothesize about the customers?

What are the motivations of visitors? Visitor motivation has the greatest impact on conversion, according to the MECLABS Institute Conversion Sequence Heuristic. You can get indications of what motivations these visitors might have from sources like inbound traffic sources, previous pages viewed, A/B testing results, site search keywords, PPC keywords and customer service questions, not to mention the info you’ve already completed about demographics, psychographics and the like.

You want to hypothesize what different motivations visitors might have, and why they have those motivations (keep asking why until you get to the core motivation; this can be very informative).

For example, I have a Nissan LEAF. I had multiple motivations for buying a LEAF. Motivation A was to get a zero-emission car. Motivation B was to save money on gas, maintenance, etc.

Drilling down into Motivation A, why did I want a zero-emission car? Because I didn’t want to pollute. Why? Because I didn’t want to increase local air pollution or add to climate change. Why? Because my kids breathe the local air and will be impacted by climate change.

Getting down to the core motivation might create messaging that taps deeper into your customers’ wants and needs than simply mentioning the features of the product.

Which brings up the next question: What must the landing page do to serve these motivations? You can use the previous info, previous customers, analytics, previous purchases and intuition to answer that question.

Essentially, you want to be able to fill in the blanks: The landing page must do ________________ so customers can ______________. Use as many as apply to the motivations you are trying to meet. Is there a natural grouping? Are they very different?

Using the earlier car example, the landing page must do a good job of tapping into customers’ desire for a better, cleaner world so customers can see the deeper environmental impact of driving a zero-emissions vehicle.

Step 3: Based on customer motivations, does it make business sense to create a new landing page?

This is where the rubber meets the road (car analogies notwithstanding). All marketers are pro segmentation. But you can’t do everything.

On the flip side, marketers can underinvest in their landing pages, overinvest in driving traffic, and ultimately leak money by having too few, unsegmented landing pages that try to do too much for too many different motivations, thus doing none of them well.

Does it make business sense to make a new, more segmented landing page? Three more landing pages?  Dozens of dynamically generated content boxes or headlines targeting different motivations for a specific landing page?

Now that you have a sense of the different motivations you’re trying to serve, you should ask what distinct customer sets these customers represent, and what percent of profits each generates. If it helps to identify them, assign a name to customer sets that have similar motivations. Whether it’s something like Aspirational Suburbanites or Laid-back Lindas, some element of personification can help you feel closer to the customer. You should combine your transactional and analytics data with the previously completed info to arrive at the customer sets and percent of profit generated by each.

This is the value side of the equation.

For the cost side of the equation, you need to ask how many resources it will take to create a new landing page. Based on your work with web or design agencies, outside consultants and internal development teams, it helps to put a cost to the work, even if it’s internal salaried employee time that you won’t technically be billed for. That will help you understand whether there is an ROI for the work. Costs you want to consider include your marketing team, copy, design, development, conversion optimization and A/B testing.
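To make that comparison concrete, here is a minimal Python sketch of the value-versus-cost math this worksheet walks you through. Every figure below (profit, segment share, expected lift, line-item costs) is a hypothetical placeholder; the point is the structure of the calculation, not the numbers.

```python
# Value side: how much extra profit could a more segmented page produce?
annual_profit = 500_000            # profit flowing through the landing page
segment_share = 0.30               # share of profit from this motivation segment
expected_lift = 0.15               # assumed conversion lift from a targeted page

# Cost side: put a number on the work, even unbilled internal time.
build_costs = {
    "copy": 2_000,
    "design": 3_500,
    "development": 5_000,
    "optimization_and_testing": 2_500,
}

expected_gain = annual_profit * segment_share * expected_lift
total_cost = sum(build_costs.values())
roi = (expected_gain - total_cost) / total_cost

print(f"Expected gain: ${expected_gain:,.0f}")
print(f"Total cost:    ${total_cost:,.0f}")
print(f"First-year ROI: {roi:.0%}")   # positive suggests the page may pay off
```

If the expected gain does not clearly clear the total cost, that is a signal to make your existing page serve multiple motivations instead.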

Decision: Do I need a new landing page?

With this info, you can decide if you need a new landing page. Does the landing page you already have, or the one you are currently developing, closely enough match the motivations of the profitable core of customers? Can it be edited to match those motivations? Or is a new landing page needed to more closely serve the motivations of a profitable subgroup of customers?

Seeing the amount of business you can get — and the cost it will take to get you there — can help you get past the simple idea that segmentation is good or that your current landing page is good enough for all customers. You can move on with a deeper understanding of whether or not your business should invest in more segmented landing pages to better tap into the motivations of a uniquely motivated (and profitable) set of customers.

Use this worksheet to make the decision for yourself and make the case for budget to your business leaders and clients.

Click Here to Download Your FREE Landing Page Segmentation Worksheet Instantly

(no form to fill out, just click to get your instant download of this PDF-based tool)

Special thanks to MECLABS Web Designer Chelsea Schulman for designing this sharp-looking interactive worksheet.

Related Resources

Lead your team to breakthrough results with A Model of your Customer’s Mind – These 21 charts and tools have helped capture more than $500 million in (carefully measured) test wins.

B2B Marketing: Homepage segmentation effort increases time spent on site 171%

The Benefits of Combining Content Marketing and Segmentation

MECLABS Landing Page Optimization online certification course

The post Landing Page Optimization: Free worksheet to help you balance segmentation and resources appeared first on MarketingExperiments.

Get Your Free Test Discovery Tool to Help Log all the Results and Discoveries from Your Company’s Marketing Tests

Come budget time, do you have an easy way to show all the results from your testing? Not just conversion lifts, but the golden intel that senior business leaders crave — key insights into customer behavior.

To help you do that, we’ve created the free MECLABS Institute Test Discovery Tool, so you can build a custom discovery library for your organization. This simple tool is an easy way of helping your company create a repository of discoveries from its behavioral testing with customers and showing business leaders all the results of your testing efforts. Just click the link below to get yours.

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

In addition to enabling you to show comprehensive test results to business leaders, a custom test discovery library for your brand helps improve your organization’s overall performance. You probably have an amazing amount of institutional knowledge stuck in your cranium. From previous campaigns and tests, you have a good sense of what will work with your customers and what will not. You probably use this info to inform future tests and campaigns, measure what works and build your knowledge base even more.

But to create a truly successful organization, you have to get that wisdom out of your head and make sure everyone in your marketing department and at your agencies has access to that valuable intel. Plus, you want the ability to learn from everyone in your organization as well.

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

This tool was created to help a MECLABS Research Partner keep track of all the lessons learned from its tests.

“The goal of building this summary spreadsheet was to create a functional and precise approach to document a comprehensive summary of results. The template allows marketers to form a holistic understanding of their test outcomes in an easily digestible format, which is helpful when sharing and building upon future testing strategy within your organization. The fields within the template are key components that all testing summaries should possess to clearly understand what the test was measuring and impacting, and the validity of the results,” said Delaney Dempsey, Data Scientist, MECLABS Institute.

“Basically, the combination of these fields provides a clear understanding of what worked and what did not work. Overall, the biggest takeaway for marketers is that having an effective approach to documenting your results is an important element in creation of your customer theory and impactful marketing strategies. Ultimately, past test results are the root of our testing discovery about our customers,” she explained.

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

Here is a quick overview for filling out the fields in this tool (we’ve also included this info in the tool) …

How to use this tool to organize your company’s customer discoveries from real-world behavioral tests

For a deeper exploration of testing, and to learn where to test, what to test and how to turn basic testing data into customer wisdom, you can take the MECLABS Institute Online Testing on-demand certification course.

Test Dashboard: This provides an overview of your tests. The info automatically pulls from the information you input for each individual test on the other sheets in this Excel document. You may decide to color code each test stream (say blue for email, green for landing pages, etc.) to more easily read the dashboard. (For instructions on adding more rows to the Test Dashboard, and thus more test worksheets to the Excel tool, scroll down to the “Adding More Tests” section.)

Your Test Name Here: Create a name for each test you run. (To add more tabs to run more tests, scroll down to the “Adding More Tests” section.)

Test Stream: Group tests in a way that makes the most sense for your organization. Some examples might be the main site, microsite, landing pages, homepage, email, specific email lists, PPC ads, social media ads and so on.

Test Location: Where in your test stream did this specific test occur? For example, if the Test Stream was your main site, the Test Location may have been on product pages, a shopping page or on the homepage. If one of your testing streams is Landing Pages, the test location may have been a Facebook landing page for a specific product.

Test Tracking Number: To organize your tests, it can help to assign each test a unique tracking number. For example, every test MECLABS Institute conducts for a company has a Test Protocol Number.

Timeframe Run: Enter the dates the test ran and the number of days it ran. MECLABS recommends you run your tests for at least a week, even if it reaches a statistically significant sample size, to help reduce the chances of a validity threat known as History Effect.

Hypothesis: The reason to run a test is to prove or disprove a hypothesis.

Do you know how you can best serve your customer to improve results? What knowledge gaps do you have about your customer? What internal debates do you have about the customer? What have you debated with your agency or vendor partner? Settle those debates and fill those knowledge gaps by crafting a hypothesis and running a test to measure real-world customer behavior.

Here is the approach MECLABS uses to formulate a hypothesis, with an example filled in …

# of Treatments: This is the number of versions you are testing. For example, if you had Landing Page A and Landing Page B, that would be two treatments. The more treatments you test in one experiment, the more samples you need to avoid a Sampling Distortion Effect validity threat, which can occur when you do not collect a significant number of observations.
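As a rough guide to how many samples that means in practice, here is a minimal Python sketch of the standard two-proportion sample-size formula, computed per test arm. The baseline conversion rate, minimum detectable lift and arm count are hypothetical placeholders; substitute your own planning numbers.

```python
from math import ceil, sqrt
from statistics import NormalDist

# Hypothetical planning numbers: baseline rate and the smallest
# relative lift you want to detect reliably.
baseline = 0.03                     # 3% conversion rate on the control
min_lift = 0.15                     # detect at least a 15% relative lift
p2 = baseline * (1 + min_lift)

alpha, power = 0.05, 0.80           # 95% confidence, 80% power
z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_power = NormalDist().inv_cdf(power)

# Standard two-proportion sample-size formula, per arm
p_bar = (baseline + p2) / 2
n_per_arm = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
              + z_power * sqrt(baseline * (1 - baseline) + p2 * (1 - p2))) ** 2
             / (p2 - baseline) ** 2)

arms = 3                            # e.g., a control plus two treatments
print(f"~{ceil(n_per_arm):,} visitors per arm, ~{ceil(n_per_arm) * arms:,} total")
```

The required total grows with every treatment you add, which is why testing many treatments on limited traffic invites a Sampling Distortion Effect.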

Valid/Not Valid: A valid test measures what it claims to measure. Valid tests are well-founded and correspond accurately to the real world. Results of a valid test can be trusted to be accurate and to represent real-world conditions. Invalid tests fail to measure what they claim to measure and cannot be trusted as being representative of real-world conditions.

Conclusive/Inconclusive: A Conclusive Test is a valid test that has reached the desired Level of Confidence (95% is the most commonly used standard). An Inconclusive Test is a valid test that failed to reach the desired Level of Confidence for the primary KPI (95% is the most commonly used standard). Inconclusive tests, while not the marketer’s goal, are not innately bad. They offer insights into the cognitive psychology of the customer. They help marketers discover which mental levers do not have a significant impact on the decision process.

KPIs — MAIN, SECONDARY, TERTIARY

Name: KPIs are key performance indicators. They are the yardstick for measuring your test. The main KPI is what ultimately determines how well your test performed, but secondary and tertiary KPIs can be insightful as well. For example, the main KPI for a product page test might be the add-to-cart rate. That is the main action you are trying to influence with your test treatment(s). A secondary KPI might be a change in revenue. Perhaps you get fewer orders, but at a higher value per order, and thus more revenue. A tertiary KPI might be checkout rate, tracking how many people complete the action all the way through the funnel. There may be later steps in the funnel that are affecting that checkout rate beyond what you’re testing, which is why it is not the main KPI of the test but still important to understand. (Please note, every test does not necessarily have to have a main, secondary and tertiary KPI, but every test should at least have a main KPI.)

Key Discoveries: This is the main benefit of running tests — to make new discoveries about customer behavior. This Test Discovery Library gives you a central, easily accessible place to share those discoveries with the entire company. For example, you could upload this document to an internal SharePoint or intranet, or even email it around every time a test is complete.

The hypothesis will heavily inform the key discoveries section, but you may also learn something you weren’t expecting, especially from secondary KPIs.

What did the test results tell you about the perceived credibility of your product and brand? The level of brand exposure customers have previously had? Customers’ propensity to buy or become a lead? The difference in the behavior of new and returning visitors to your website? The preference for different communication mechanisms (e.g., live chat vs. video chat)? Behavior on different devices (e.g., desktop vs. mobile)? These are just examples; the list could go on forever … and you likely have some that are unique to your organization.

Experience Implemented? This is pretty straightforward. Has the experience that was tested been implemented as the new landing page, home page, etc., after the test closed?

Date of implementation: If the experience has been implemented, when was it implemented? Recording this information can help you go back and make sure overall performance correlated with your expectations from the test results.
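If you also want your discovery library in a machine-readable form, the fields above map naturally onto a small record type. Here is a minimal Python sketch; the class name, field names and the sample values (including the dates) are illustrative assumptions, not part of the Excel tool.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class TestRecord:
    name: str
    stream: str                      # e.g., "email", "landing pages"
    location: str                    # e.g., "product page", "homepage"
    tracking_number: str             # e.g., a Test Protocol Number
    start: date
    end: date
    hypothesis: str
    num_treatments: int
    valid: bool
    conclusive: bool                 # reached the desired Level of Confidence?
    kpis: dict = field(default_factory=dict)      # {"main": ..., "secondary": ...}
    key_discoveries: list = field(default_factory=list)
    implemented: bool = False
    implementation_date: Optional[date] = None

    @property
    def days_run(self) -> int:
        return (self.end - self.start).days

# Illustrative entry based on the mattress test described earlier;
# the dates are hypothetical.
record = TestRecord(
    name="Green messaging on mattress page",
    stream="landing pages",
    location="product page",
    tracking_number="TP 11009",
    start=date(2014, 1, 6),
    end=date(2014, 1, 20),
    hypothesis="Spelling out the green benefit will increase purchases",
    num_treatments=2,
    valid=True,
    conclusive=True,
    kpis={"main": "purchase rate"},
    key_discoveries=["Tangible, relevant green value lifted purchases 46%"],
)
print(record.name, "-", record.days_run, "days run")
```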

ADDING MORE TESTS TO THE TOOL

The Test Dashboard tab dynamically pulls in all information from the subsequent test worksheets, so you do not need to manually enter any data here except for the test sequence number in Column A. If you want to create a new test tab and the corresponding row in the “Test Dashboard,” follow these instructions (a scripted alternative is sketched after the steps):

    • Right-click on the bottom tab titled “Template – Your Test Name Here.” Choose “Move or Copy.” From the list of sheets, choose “Template – Your Test Name Here.” Check the box “Create a Copy” and click OK. Right-click on your new “Template – Your Test Name Here (2)” tab and rename it “Your Test Name Here (7).”
    • Now, you’ll need to add a new row to your “Test Dashboard” tab. Copy the last row. For example, select row 8 on the “Test Dashboard” tab, copy/paste those contents into row 9. You will need to make the following edits to reference your new tab, “Your Test Name Here (7).” This can be done in the following way:
      • Manually enter the test as “7” in cell A9.
      • The remaining cells dynamically pull the data in. However, since you copied and pasted, they are still referencing the test above. To update this, select row 9 again. On the Home tab, in the Editing group, select “Find & Select” (located on the far right) > “Replace,” or use CTRL+F > Replace.
      • On the Replace tab of the dialog box, enter Find What: “Your Test Name Here (6)” and Replace with: “Your Test Name Here (7).”
      • Click “Replace All.”
      • All cells in the row should now properly reference your new tab, “Your Test Name Here (7).”
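If you maintain the workbook with a script instead of by hand, the same duplication can be automated. Below is a minimal sketch using the openpyxl library; the file name and sheet titles are assumptions based on the steps above, and the copied dashboard formulas would still need the same “(6)” to “(7)” replacement.

```python
from openpyxl import load_workbook

# Hypothetical file name; sheet titles follow the manual steps above.
wb = load_workbook("test_discovery_tool.xlsx")

# Equivalent of "Move or Copy" with "Create a Copy" checked
template = wb["Template - Your Test Name Here"]
new_sheet = wb.copy_worksheet(template)
new_sheet.title = "Your Test Name Here (7)"

# New dashboard row: only the sequence number is entered manually.
# Formulas copied into row 9 still reference the old tab, so they need
# the same "Your Test Name Here (6)" -> "(7)" find-and-replace.
dashboard = wb["Test Dashboard"]
dashboard["A9"] = 7

wb.save("test_discovery_tool.xlsx")
```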

Click Here to Download Your FREE Test Discovery Tool Instantly

(no form to fill out, just click to get your instant download of this Excel-based tool)

Special thanks to Research Manager Alissa Shaw, Data Scientist Delaney Dempsey, Associate Director of Design Lauren Leonard, Senior Director of Research Partnerships Austin McCraw, and Copy Editor Linda Johnson for helping to create the Test Discovery Library tool.

If you have any questions, you can email us at info@MECLABS.com. And here are some more resources to help with your testing …

Lead your team to breakthrough results with A Model of your Customer’s Mind: These 21 charts and tools have helped capture more than $500 million in (carefully measured) test wins

Test Planning Scenario Tool – This simple tool helps you visualize factors that affect the ROI implications of test sequencing

Customer Theory: How we learned from a previous test to drive a 40% increase in CTR

The post Get Your Free Test Discovery Tool to Help Log all the Results and Discoveries from Your Company’s Marketing Tests appeared first on MarketingExperiments.

Value Proposition: How to find the best expression of your value

The value proposition “why” — why should customers choose your product — can be answered in 100 different ways. But how do you determine the most effective answer?

Often, it is determined in a conference room with a rigorous debate amongst leaders and experts. But experts do not have the answers to this question — only your customers do. So, we turned to the customer to answer this question by conducting an experiment with a global news distributor.

EXPERIMENT

In the five-minute video below, Flint McGlaughlin explains how determining the best expression of value generated a 22% increase in conversions, and an important learning.

Let’s take a closer look at the experiment featured in this video …

THE CONTROL

This global news distributor came to the research team at MECLABS Institute, the parent research organization of MarketingExperiments, with the goal of determining which element of its value proposition was most appealing to its customers. So, the team developed three different articulations of the core offer, using the homepage to test which one would have the most impact on conversions.

TREATMENTS

Treatment 1 tested the hypothesis that the group’s authority was the most appealing element of its offer, using phrases like “For almost 60 years,” “inventing the industry” and “most authoritative source of news.” Each of these points serves to foster a conclusion in the mind of the customer about the organization’s authority.

Treatment 2 tested a different hypothesis: that the group’s comprehensive network was more appealing than any other element. This hypothesis was supported with phrases like “over 200,000 media outlets,” “hundreds of thousands of journalists,” “170 different countries” and “most comprehensive media network in the world.”

Finally, Treatment 3 argued that the group’s superior customer service was the most important element to its customers. Like the other treatments, this version used key supporting phrases like “exceptional customer service,” “work personally … one-on-one,” “200,000 errors caught each year” and “available 24 hours a day, 365 days a year.”

RESULTS

In the end, the treatment focused on the organization’s authority outperformed all other treatments, producing 26% more conversions. While the conversion lift itself was impactful for this organization, the approach to achieving it is what provided the most valuable learning.

The team not only determined the best articulation of their value proposition, they also learned that clearly displaying the right value proposition articulation can maximize the force of your offer. And in order to uncover what is right for YOUR customers, marketers must engage in a mental dialogue. People don’t want to be talked at; they want to be communicated with. The marketer asks questions with their message, and the customer answers with their behavioral data.

RELATED RESOURCES

Learn how the MECLABS methodology can transform your business results

6 Good (and 2 Bad) B2B and B2C Value Proposition Examples

Value Force: How to win on value proposition and not just price

Form Optimization: The importance of communicating value before making the “ask”

The post Value Proposition: How to find the best expression of your value appeared first on MarketingExperiments.