Author: Daniel Burstein

15 Resources to Help You Use a Repeatable Process for Conversion Rate Optimization

If only marketing were more like tires.

I recently discovered that the tires on my Nissan LEAF were recalled. Not the specific tires on my car, just that tire model. And it turns out, only tires of that model manufactured between February 5th and February 18th at a specific plant were included in the recall.

That is impressive. In general, tires are manufactured with such repeatable high quality that defects can be pinpointed to just a 13-day span among years and years of tire production.

Marketing is not nearly as consistent.

One way to improve the consistency of your marketing is with a repeatable methodology. And if you're a repeat reader of MarketingExperiments, I'm sure you're familiar with the MECLABS Institute Conversion Sequence heuristic, which can bring structure, clarity and a repeatable framework to any marketing conversion goal you have (MECLABS is the parent research organization of MarketingExperiments).

This is more than just a tool you can use on landing pages. In fact, people around MECLABS have discussed using it to get their children to eat healthy. I’ve used it on college recruiting trips to help students understand elements to consider when choosing a first job.

Since its introduction more than a decade ago, we’ve written and talked about the heuristic a lot on MarketingExperiments. But others have as well. So let’s take a look at some advice from around the web suggesting ways to increase conversion — whatever your conversion goal may be — with this repeatable methodology:

How to create winning ad copy using a scientific approach by Microsoft's Purna Virji on Search Engine Watch

“The folks at MECLABS came up with a conversion formula that can be a framework for ad copy creation.”

Six ways to improve value and trust for your brand’s website by Tamar Weinberg on ClickZ

“Having a quality value proposition is vital for a website. Researchers at MarketingExperiments concluded that value proposition is key to your conversion rate. Using its ‘conversion heuristic,’ they found that value proposition was second in importance, just behind a consumer’s motivation when visiting your website.”

3 free AdWords testing tools to adopt today by AdHawk’s Todd Saunders on Search Engine Land

“Each text ad should convey enough information to your audience before you pay for a click. What information is ‘enough?’ Try out this formula from MECLABS Institute.”

5-Step Guide to Optimizing Landing Pages by Magdalena Georgieva on HubSpot

“While we keep advising marketers to test with their specific audiences, there are actually a few best practices you should take into consideration. In fact, the folks at MECLABS came up with a formula to create top-performing landing pages.”

6 Ways to Use Clarity to Improve Your Conversion Rate by Shanelle Mullin on ConversionXL

“Similarly, Marketing Experiments created this conversion formula, which puts a focus [on] clarity as well …”

Why You Need to Know Heuristics for Conversion Optimization by Jeremy Smith of Jeremy Said

“One of the most popular conversion optimization heuristics is an equation. MarketingExperiments calls it a sequence. You could call it a shortcut. It summarizes the main factors in the conversion process.”

Conversion Optimization Overview – Applying a Conversion Heuristic to SMB Marketing by Marketing 360

“The conversion heuristic developed by MECLABS Institute is interesting. By definition, a heuristic is a problem-solving approach which concedes that an optimal, logical, and certainly exact solution isn’t possible. Heuristics use guesstimates; measurements are often rule of thumb.”

Conversion Rate Optimization: Three Strategies by Nathan Hill of NextAfter

“This heuristic, created by MECLABS, assigns relative weight to the variables at play in the conversion decision.”

How conversion heuristics apply to email marketing content by Shireen Qudosi of Benchmark Email

“The best way to understand the formula though isn’t by the ‘C’ for conversion — it’s at the opposite end; 2a is where the formula starts and the ‘a’ stands for anxiety.”

Real Estate Lead Generation by Travis Thom

“We used this formula to create our newest line of high converting Real Estate single property sites.”

An Introduction to Referral Marketing Landing Page Optimization by Jeff Epstein of Ambassador Software

“Each landing page should be targeted to a specific segment of your customer base, meaning there’s no exact science to a perfect landing page optimization. But our statistician friends over at MECLABS have come pretty close. They’ve developed a formula for creating an optimized landing page for any marketing campaign.”

Anatomy of a Conversion Optimization Formula by Diego Praderi of Tavano Team

“If you’re not a mathematician, don’t freak out, as this is not a problem you solve in the traditional sense. It’s a heuristic problem, meaning it’s a more concrete way to look at an abstract concept, such as the way we make decisions.”

Landing Page Optimization Conversion Index by Kim Mateus on Mequoda

“As with all marketing functions, landing page optimization is a constant work in progress. We don’t learn until we test and test again and sometimes it’s useful to have a mathematical formula assisting an otherwise creative process.”

A “formula” for landing page optimization by Dave Chaffey on Smart Insights

“To think through the fundamentals of what makes a successful landing page I think this formula developed by Flint McGlaughlin and team at Marketing Experiments is great. We use it in the latest update to our Guide to Landing Page Optimisation to set the scene.  We really like the way it simplifies the whole interplay between what the landing page needs to achieve for the business and what the visitor is seeking.”

Optimizing Landing Pages to Match Customer Motivation by Linda Bustos on GetElastic

“Today I want to look at motivation from a different angle. I want you to choose a landing page that is top priority for you to optimize. For example, your most profitable product with the highest abandonment rate. I want to get you thinking about which customer motivations are most likely to match your business, your products, your typical customer and your landing page presentation.”

Related Resources

And of course, we’ve written about the Conversion Sequence heuristic as well …

Marketing Management: Can you create a marketing factory?

Mobile Ad Campaign Optimization: 6 tactics from a high-performing marketer to increase conversion

How to Consistently Increase Conversion

Heuristic Cheat Sheet: 10 methods for improving your marketing

And we even have an entire course that teaches the Conversion Sequence heuristic …

Landing Page Optimization on-demand certification course

The post 15 Resources to Help You Use a Repeatable Process for Conversion Rate Optimization appeared first on MarketingExperiments.

CRO Cheat Sheet: Customer thinking guide for conversion rate optimization

All conversion optimization begins with the customer. Why are some customers leaving your website? Why are others buying? Why are they coming to your website in the first place?

A repeatable methodology to help you focus on the customer is the MECLABS Institute Conversion Index heuristic. Longtime readers of MarketingExperiments are already familiar with the Conversion Index.

But today we’re releasing a new tool to help you get the most value from it – a two-page PDF you can print out and hang in your office. It’s a simple guide for how to use the Conversion Index, with a deeper focus on the most powerful element — motivation. Use it to give you ideas for implementing the Conversion Index in your long-term strategy as well as your day-to-day marketing.

Here is an instant download of the free PDF. Read below for a deeper explanation to help you put this information into action.

 

Click Here to Download Your FREE CRO Cheat Sheet Instantly

(no form to fill out, just click to get your instant download)

 

Make sure to think about motivation

The Conversion Index is C = 4m + 3v + 2(i - f) - 2a.

The “m” stands for “motivation of the user/customer.” It has the highest coefficient — 4 — because motivation has the highest impact on the probability of conversion. Simply put, the better you can tap into a customer’s motivation with your conversion rate optimization, the more likely you are to increase the conversion rate.
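
The heuristic is a thought tool, not a literal equation you solve. Still, a rough sketch can make the relative weights concrete. In the hypothetical sketch below, a team rates each factor on a 0-10 scale during a page review; the scores, and the scale itself, are illustrative assumptions rather than part of the heuristic:

```python
# Illustrative only: the Conversion Index is a thinking tool, not a literal equation.
# The 0-10 ratings below are hypothetical scores a team might assign during a page review.

def conversion_index(m, v, i, f, a):
    """C = 4m + 3v + 2(i - f) - 2a, with each factor rated on a 0-10 scale."""
    return 4 * m + 3 * v + 2 * (i - f) - 2 * a

# Hypothetical review of a landing page before and after reducing friction and anxiety
before = conversion_index(m=6, v=5, i=3, f=7, a=6)  # weak page
after = conversion_index(m=6, v=7, i=4, f=3, a=2)   # same motivation, clearer value, less friction
print(before, after)  # the relative change matters, not the absolute number
```

Notice that a one-point gain on motivation moves the index twice as much as a one-point drop in anxiety, which is the whole point of the coefficients.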

Here are some questions to ask to help you optimize for customer motivation:

Who are you optimizing for? You should seek to create a model of your customer’s mind. The more you focus on the customer (versus your product, company, offer, etc.) the more successful you will be. A CEO will need a different landing page than a junior employee. A millennial has different wants and needs than a senior citizen. A risk-averse or cost-conscious buyer will make different decisions than a reward-seeking or risk-taking buyer even when presented with the same information.

Where is your customer in the thought sequence? Is this the first time they are interacting with your brand? Or have they had a long relationship with your brand and are ready to buy? Understanding where your customers are in their thought sequence when they get to your landing page — and as they make their way through your landing page — can affect everything on your landing page from the headline to the call-to-action.

Where is the traffic coming from? MarketingSherpa research has shown that organic search has the highest conversion rate, likely because customers are actively looking for something specific when coming from that source. Understanding where your traffic is coming from is another way to understand where they are in the thought sequence. This knowledge can help you provide relevant information to serve their needs and, ultimately, increase conversion.

What conclusions do they need to make? Every product, service, and conversion action has a prospect conclusion funnel. Whether you have mapped it out in detail or not, there are a set of conclusions your customers need to reach before taking any action.

What is the level of urgency? Adding urgency can increase conversion rates. But only if it taps into the customer’s natural motivation. Understand where they might have urgency around the conversion action and use that to optimize your messaging.

What are key pain points and values? What values can you tap into? What pain can your solution help customers overcome? This is key information to ensure your landing page copy and design squarely tie into customer motivations. MarketingSherpa discovered that 23% of marketers consider key pain points an important lead generation form field. This information helps them better meet their prospects' needs.

What characteristics of your prospects do you know? Many marketers tend to focus on demographics and firmographics. But you can't use that information in a vacuum. For example, you can use prospect characteristics to determine who your best customers will be and treat those prospects differently from other prospects.

Once you have the above information, here are some activities to help better understand customers’ motivations and put that understanding into action with your marketing.

Empathize with the customer – Change the way you look at your prospect. They are not simply a target or a lead. Prospects are people just like you. It might seem silly to read that last sentence. Of course, prospects are people. But when we’re relying on databases and technology and driving so hard to meet our goals, it can be easy to overlook the need to empathize with potential customers.

Find someone in that customer type – When you try to create something for the many, it can water down its power. Try to find someone specific in the customer type and write directly to that person. That will help hone your copy. And if possible, talk to them. Get on a call. Meet in person. Try to understand them better. It’s all too easy to assume other people are like us, but they’re not. We are often not the customer, and our goals, fears, vocabulary, patience, drivers and many other characteristics can be very different from the ideal customer type for that product.

Role play with a group – This is a frequent tactic used in everything from boxing to debates to football. Assign roles based on the motivations you’ve discovered about the customer and see how they react to different messaging and offers.

Use personas from Market Intelligence – After analyzing the market, create personas that represent customer motivations. This is a popular marketing tactic. Some marketers like to create specific names for different personas or tie their personas to celebrity examples. Other marketers create personas around different industries or interests. However you create personas, the same general principle applies — writing to a specific customer/customer type will help hone your copy to their motivations.

Now that you have a firm understanding of your customers’ motivations, conduct an analysis of your current and soon-to-launch marketing. Identify gaps between the landing page (and other marketing communications) and the customer’s motivation. These are your opportunities for conversion rate optimization.

Always start with the Conversion Index … but which element?

While motivation is the most impactful element that affects conversion, it doesn’t mean you should necessarily start your CRO right out of the gate by trying to optimize for motivation. There is lower-hanging fruit.

Start with friction and anxiety first, because they are the easiest to see if you put yourself in the customer’s shoes. What can you remove, add or change to reduce these negative elements that hinder conversion?

When you’ve removed and/or changed page elements to fully minimize friction and eliminate anxiety, then you move on to value proposition (and incentive) second. What can you add, remove or change to optimize these positive elements that help increase the probability of conversion? Here is a simple worksheet you can fill in to keep track of the CRO changes you would like to make.

“If you ignore motivation, it is like you are multiplying by 0 — you are undermining all your other CRO work. If you simply understand and know motivation, it is like you are multiplying by 1 — you aren’t hurting your other CRO work, but you aren’t really helping either.” — Daniel Burstein

Third, leveraging your knowledge of and maximizing for visitor motivation can multiply your business results. This is one reason understanding motivation can be so powerful. You are essentially multiplying with motivation.

“If you ignore motivation, it is like you are multiplying by 0 — you are undermining all your other CRO work. If you simply understand and know motivation, it is like you are multiplying by 1 — you aren’t hurting your other CRO work, but you aren’t really helping either.”

But if you leverage and maximize what you've discovered about customer motivation using the previously discussed tactics, it is like you are doubling the impact of all your previous CRO work. Then, you are creating a fluid customer experience that provides value specifically for the reasons your customers want to buy. Not your reasons, but their reasons.

And that is the most powerful marketing — serving your customers’ motivations.

 

Click Here to Download Your FREE CRO Cheat Sheet Instantly

(no form to fill out, just click to get your instant download)

 

Related resources

MECLABS Institute Research Services – Get better business results from deeper customer understanding

Customer Service Can Be a Treasure Trove of Ideas For CRO

Most Popular MarketingExperiments Articles of 2018

The post CRO Cheat Sheet: Customer thinking guide for conversion rate optimization appeared first on MarketingExperiments.

Conversion Optimization: Eight considerations to take into account when A/B testing in mobile

I’m writing this article on a laptop computer at my desk. And in your marketing department or agency, you likely do most of your work on a computer as well.

This can cause a serious disconnect with your customers as you design A/B tests.

Because more than half (52.4% according to Statista) of global internet traffic comes from a mobile device.

So, I interviewed Rebecca Strally, Director of Optimization and Design, and Todd Barrow, Director of Application Development, for tips on what considerations you should make for mobile devices when you’re planning and rolling out your tests. Rebecca and Todd are my colleagues here at MECLABS Institute (parent research organization of MarketingExperiments).

Consideration #1: Amount of mobile traffic and conversions

Just because half of global traffic is from mobile devices doesn’t mean half of your site’s traffic is from mobile devices. It could be considerably less. Or more.

Not to mention, traffic is far from the only consideration. “You might get only 30% of traffic from mobile but 60% of conversions, for example. Don’t just look at traffic. Understand the true impact of mobile on your KPIs,” Rebecca said.

Consideration #2: Mobile first when designing responsive

Even if mobile is a minority of your traffic and/or conversions, Rebecca recommends you think mobile first. For two reasons.

First, many companies measure KPIs (key performance indicators) in the aggregate, so underperformance on mobile could torpedo your whole test if you’re not careful. Not because the hypothesis didn’t work, but because you didn’t translate it well for mobile.

Second, it’s easier to go from simpler to more complex with your treatments. And mobile’s smaller form factor necessitates simplicity.
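
To make that first point concrete, here is a quick sketch with made-up numbers showing how a treatment that wins on desktop can look like a flat result when the KPI is read only in aggregate:

```python
# Hypothetical numbers: a treatment that wins on desktop but underperforms on mobile
# can look like "no difference" if you only read the aggregate KPI.
segments = {
    "desktop": {"control": (10_000, 400), "treatment": (10_000, 480)},  # (visitors, conversions)
    "mobile":  {"control": (10_000, 300), "treatment": (10_000, 240)},
}

for device, arms in segments.items():
    for arm, (visits, conversions) in arms.items():
        print(f"{device:8s} {arm:10s} {conversions / visits:.1%}")

for arm in ("control", "treatment"):
    visits = sum(segments[d][arm][0] for d in segments)
    conversions = sum(segments[d][arm][1] for d in segments)
    print(f"aggregate {arm:10s} {conversions / visits:.1%}")  # 3.5% vs. 3.6% -- the desktop win is hidden
```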

“Desktop is wide and shallow. Mobile is tall and thin. For some treatments, that can really affect how value is communicated.”  — Rebecca Strally

“Desktop is wide and shallow. Mobile is tall and thin. For some treatments, that can really affect how value is communicated,” she said.

Rebecca gave an example of a test that was planned on desktop first for a travel website. There were three boxes with value claims, and a wizard below it. On desktop, visitors could quickly see and use the wizard. The boxes offered supporting value.

But on mobile, the responsive design stacked the boxes, shifting the wizard far down the page. "We had to go back to the drawing board. We didn't have to change the hypothesis, but we had to change how it was executed on mobile," Rebecca said.

Consideration #3: Unique impacts of mobile on what you’re testing

A smartphone isn’t just a smaller computer. It’s an entirely different device that offers different functionality. So, it’s important to consider how that functionality might affect conversions and to keep mobile-specific functionality in mind when designing tests that will be experienced by customers on both platforms — desktop and mobile.

Some examples include:

  • With the prevalence of digital wallets like Apple Pay and Google Pay, forms and credit card info are more likely to prefill. This could reduce friction in a mobile experience and make the checkout process quicker. So while some experiences might require more value on desktop to help keep the customer's momentum moving through the checkout process, including that value on mobile could actually slow down an otherwise friction-lite experience.
  • To speed load time and save data, customers are more likely to use ad blockers that can block popups and hosted forms. If those popups and forms contain critical information, visitors may assume your site is having a problem and not realize they are blocking this information. You may need to clearly provide text explaining the form or offer an alternative way to get the information, a step that may not be necessary on desktop.
  • Customers are touching and swiping, not typing and clicking. So information and navigation requests need to be kept simpler and lighter than on desktop.
  • Visitors can click to call. You may want to test making a phone call a more prominent call to action in mobile, while on desktop that same CTA may induce too much friction and anxiety.
  • Location services are more commonly used, providing the opportunity to better tap into customer motivation by customizing offers and information in real time and more prominently featuring brick-and-mortar related calls to action. Desktop, by contrast, sits in a static location, and the user may want more information before acting (since acting may require leaving their current location).
  • Users are accustomed to app-based experiences, so the functionality of the landing page may be more important on mobile than it is on desktop.

Consideration #4: The device may not be the only thing that’s different

“Is mobile a segment or device?” Rebecca pondered in my office.

She expanded on that thought, “Do we treat mobile like it is the same audience with the same motivations, expected actions, etc., but just on a different physical device? Or should we be treating those on mobile like a completely different segment/audience of traffic because their motivations, expected actions, etc., are different?”

She gave an example of working with a company her team was performing research services for. On this company's website, younger people were visiting on mobile while older people were visiting on desktop. "It wasn't just about a phone, it was a different collection of human beings," she said.

Consideration #5: QA to avoid validity threats

When you’re engaged in conversion optimization testing, don’t overlook the need for quality assurance (QA) testing. If a treatment doesn’t render correctly on a mobile device, it could be that the technical difficulty is causing the change in results, not the changes you made to the treatment. If you are unaware of this, it will mislead you about the effectiveness of your changes.

This is a validity threat known as instrumentation effect.

Here are some of the devices our developers use for QAing.

(side note: That isn’t a stock photo. It’s an actual picture by Senior Designer James White. When I said it looked too much like a stock image, Associate Director of Design Lauren Leonard suggested I let the readers know “we let the designers get involved, and they got super excited about it.”)

 

“If your audience are heavy users of Safari on iPhone, then check on the actual device. Don’t rely on an emulator.”   — Todd Barrow

“Know your audience. If your audience are heavy users of Safari on iPhone, then check on the actual device. Don’t rely on an emulator. It’s rare, but depending on what you’re doing, there are things that won’t show up as a problem in an emulator. Understand what your traffic uses and QA your mobile landing pages on the actual physical devices for the top 80%,” Todd advised.

Consideration #6: The customer’s mindset

Customers may go to the same exact landing page with a very different intent when they’re coming from mobile. For example, Rebecca recounted an experiment with an auto repair chain. For store location pages, desktop visitors tended to look for coupons or more info on services. But mobile visitors just wanted to make a quick call.

“Where is the customer in the thought sequence? Mobile can do better with instant gratification campaigns related to brick-and-mortar products and services,” she said.

Consideration #7: Screen sizes and devices are not the same things

Most analytics platforms give you an opportunity to monitor your metrics based on device types, like desktop, mobile and tablet. They likely also give you the opportunity to get metrics on screen resolutions (like 1366×768 or 1920×1080).

Just keep in mind, people aren't always viewing your websites at the size of their screen. You only know the size of the monitor, not the size of the browser window.

“The user could be recorded as a full-size desktop resolution, but only be viewing in a shrunken window, which may be shrunk down enough to see the tablet experience or even phone experience,” Todd said. “Bottom line is you can’t assume the screen resolutions reported in the analytics platform is actually what they were viewing the page at.”

Consideration #8: Make sure your tracking is set up correctly

Mobile can present a few unique challenges for tracking your results through your analytics and testing platforms. So make sure your tracking is set up correctly before you launch the test.

For example, if you're using a tag management tool and tagging elements through it based on CSS properties, you could have an issue if the page shifts at breakpoints that change the page structure.

“If you’re tagging a button based on its page location at the bottom right, but then it gets relocated on mobile, make sure you’re accounting for that,” Todd advised.

Also, understand how the data is being communicated. “Because Google Tag Manager and Google Optimize are asynchronous, you can get mismatched data if you don’t follow the best practices,” Todd said.

“If you see in your data that the control has twice as many hits as the treatment, there is a high probability you’ve implemented something in a way that didn’t account for the way asynchronous tags work.”                  —Todd Barrow

Todd provided a hard-coded page view as an example. “Something to watch for when doing redirect testing … a tracking pixel could fire before the page loads and does the split. If you see in your data that the control has twice as many hits as the treatment, there is a high probability you’ve implemented something in a way that didn’t account for the way asynchronous tags work. This is really common,” Todd said.

“If you know that’s going to happen, you can segment the data to clean it,” he said.
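
One simple way to catch the lopsided-split situation Todd describes is a sample-ratio sanity check before you trust any results. The sketch below assumes an intended 50/50 split and uses a plain normal approximation with hypothetical hit counts; it is a quick gut check, not a substitute for fixing the tagging:

```python
import math

def split_ratio_z(control_hits: int, treatment_hits: int, expected_control_share: float = 0.5) -> float:
    """Z-score for how far the observed traffic split is from the intended split.
    A |z| far above ~3 suggests a tracking or implementation problem, not random noise."""
    total = control_hits + treatment_hits
    observed_share = control_hits / total
    standard_error = math.sqrt(expected_control_share * (1 - expected_control_share) / total)
    return (observed_share - expected_control_share) / standard_error

print(round(split_ratio_z(10_240, 5_130), 1))  # hypothetical counts; a huge |z| flags a broken split
```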

Related Resources

Free Mobile Conversion Micro Class from MECLABS Institute

Mobile Marketing: What a 34% increase in conversion rate can teach you about optimizing for video

Mobile Marketing: Optimizing the evolving landscape of mobile email marketing

Mobile Conversion Optimization Research Services: Get better mobile results from deeper customer understanding

The post Conversion Optimization: Eight considerations to take into account when A/B testing in mobile appeared first on MarketingExperiments.

Mobile A/B Testing: Quality assurance checklist

Real-world behavioral tests are an effective way to better understand your customers and optimize your conversion rates. But for this testing to be effective, you must make sure it is accurately measuring customer behavior.

One reason these A/B split tests fail to give a correct representation of customer behavior is because of validity threats. This series of checklists is designed to help you overcome Instrumentation Effect. It is based on actual processes used by MECLABS Institute’s designers, developers and analysts when conducting our research services to help companies improve marketing performance.

MECLABS defines Instrumentation Effect as “the effect on the test variable caused by a variable external to an experiment, which is associated with a change in the measurement instrument.” In other words, the results you see do not come from the change you made (say, a different headline or layout), but rather, because some of your technology has affected the results (slowed load time, miscounted analytics, etc.)

Avoiding Instrumentation Effect is even more challenging for any test that will have traffic from mobile devices (which today is almost every test). So, to help you avoid the Instrumentation Effect validity threat, we're providing the following QA checklist. This is not meant for you to follow verbatim, but to serve as a good jumping-off point to make sure your mobile tests are technically sound. For example, browsers other than the ones listed here may be more important for your site's mobile functionality. Maybe your landing page doesn't have a form, or you may use different testing tools, etc.

Of course, effective mobile tests require much more than thorough QA — you also must know what to test to improve results. If you’re looking for ideas for your tests that include mobile traffic, you can register for the free Mobile Conversion micro course from MECLABS Institute based on 25 years of conversion optimization research (with increasing emphases on mobile traffic in the last half decade or so).

There’s a lot of information here, and different people will want to save this checklist in different ways. You can scroll through the article you’re on to see the key steps of the checklist. Or use the form on this page to download a PDF of the checklist.

 

Roles Defined

The following checklists are broken out by teams serving specific roles in the overall mobile development and A/B testing process. The checklists are designed to help cross-functional teams, with the benefit being that multiple people in multiple roles bring their own viewpoint and expertise to the project and evaluate whether the mobile landing page and A/B testing are functioning properly before launch and once it is live.

For this reason, if you have people serving multiple roles (or you’re a solopreneur and do all the work yourself), these checklists may be repetitive for you.

Here is a quick look at each team’s overall function in the mobile landing page testing process, along with the unique value it brings to QA:

Dev Team – These are the people who build your software and websites, which could include both front-end development and back-end development. They use web development skills to create websites, landing pages and web applications.

For many companies, quality assurance (QA) would fall in this department as well, with the QA team completing technical and web testing. While a technical QA person is an important member of the team for ensuring you run valid mobile tests, we have included other functional areas in this QA checklist because different viewpoints from different departments will help decrease the likelihood of error. Each department has its own unique expertise and is more likely to notice specific types of errors.

Value in QA: The developers and technological people are most likely to notice any errors in the code or scripts and make sure that the code is compatible with all necessary devices.

 

Project Team – Depending on the size of the organization, this may be a dedicated project management team, a single IT or business project manager, or a passionate marketing manager keeping track of and pushing to get everything done.

It is the person or team in your organization that coordinates work and manages timelines across multiple teams, ensures project work is progressing as planned and that project objectives are being met.

Value in QA: In addition to making sure the QA doesn’t take the project off track and threaten the launch dates of the mobile landing page test, the project team are the people most likely to notice when business requirements are not being met.

 

Data Team – The data scientist(s), analyst(s) or statistician(s) helped establish the measure of success (KPI – key performance indicator) and will monitor the results for the test. They will segment and gather the data in the analytics platform and assemble the report explaining the test results after they have been analyzed and interpreted.

Value in QA: They are the people most likely to notice any tracking issues from the mobile landing page not reporting events and results correctly to the analytics platform.

 

Design Team – The designer(s) who created the comps and visual assets for the control and treatment(s). They check that what was built matches the intended design across desktop, tablet and mobile layouts.

Value in QA: They are the people most likely to notice discrepancies in layout, spacing, imagery and typography between the design comps and the pages that were actually built.

 

DEV QA CHECKLIST

Pre-launch, both initial QA and on Production where applicable

Visual Inspection and Conformity to Design of Page Details

  • Verify latest copy in place
  • Preliminary checks in a “reference browser” to verify design matches latest comp for desktop/tablet/mobile layouts
  • Use the Pixel Perfect Overlay function in Firefox Developer Tools – The purpose of this tool is to take an image that was provided by the designer and lay it over the website that was produced by the developer. The image is a transparency which you can use to point out any differences or missing elements between the design images and the webpage.
  • Displaying of images – Make sure that all images are displaying, aligned and up to spec with the design.
  • Forms, List and Input Elements (Radio Buttons, Click Boxes) – Radio buttons (Dots and Circles) and Checkboxes (Checks and Boxes) are to be tested thoroughly as they may trigger secondary actions. For example, selecting a “Pay by Mail” radio button will sometimes automatically hide the credit card form.
  • Margins and Borders – Many times, you will notice that a portion of the body or perhaps a customer review or image is surrounded by a border or maybe even the whole page. It is our duty to inspect them so that there are no breaks and that they’re prominent enough for the user to decipher each bordered section.
  • Copy accuracy – Consistency between typography, capitalization, punctuation, quotations, hyphens, dashes, etc. The copy noted in the webpage should match any documents provided pertaining to copy and text unless otherwise noted or verified by the project manager/project sponsor.
  • Font styling (Font Color, Format, Style and Size) – To ensure consistency with design, make sure to apply the basic rules of hierarchy for headers across different text modules such as titles, headers, body paragraphs and legal copies.
  • Link(s) (Color, Underline, Clickable)

Web Page Functionality: Verify all page functionality works as expected (ensure treatment changes didn’t impact page functionality)

  • Top navigation functionality – Top menu, side menu, breadcrumb, anchor(s)
  • Links and redirects are correct
  • Media – Video, images, slideshow, PDF, audio
  • Form input elements – drop down, text fields, check and radio module, fancy/modal box
  • Form validation – Error notification, client-side errors, server-side errors, action upon form completion (submission confirmation), SQL injection
  • Full Page Functionality – Search function, load time, JavaScript errors
  • W3C Validation – CSS Validator (http://jigsaw.w3.org/css-validator/), markup validator (http://validator.w3.org/)
  • Verify split functional per targeting requirements
  • Verify key conversion scenario (e.g., complete a test order, send test email from email system, etc.) – If not already clear, QA lead should verify with project team how test orders should be placed
  • Where possible, visit the page as a user would to ensure targeting parameters are working properly (e.g., use URL from the PPC ad or email, search result, etc.)

Tracking Metrics

  • Verify tracking metrics are firing in browser, and metric names match requirements – Check de-bugger to see firing as expected
  • Verify reporting within the test/analytics tool where possible – Success metrics and click tracking in Adobe Target, Google Content Experiments, Google Analytics, Optimizely, Floodlight analytics, email data collection, etc.

Back End Admin Panel

Notify Project Team and Data Team it is ready for their QA (via email preferably) – indicate what the reference browser is. After the Project Team's initial review, complete full cross-browser/cross-device checks using the "reference browser" as a guide:

Browser Functionality – Windows

  • Internet Explorer 7 (IE7)
  • IE8
  • IE9
  • IE10
  • IE11
  • Modern Firefox
  • Modern Chrome

Browser Functionality – macOS

  • Modern Safari
  • Modern Chrome
  • Modern Firefox

Mobile Functionality – Tablet

  • Android
  • Windows
  • iOS

Mobile Functionality – Mobile phone

  • Android
  • Windows
  • iOS

Post-launch, after the test is live to the public:

  • Notify Project Team & Data Team the test is live and ready for post-launch review (via email preferably)
  • Verify split is open to public
  • Verify split functional per targeting requirements
  • Where possible, visit the page as a user would to ensure targeting parameters are working properly (e.g., use URL from the PPC ad or email, search result, etc.)
  • Test invalid credit cards on a production environment
PROJECT TEAM QA CHECKLIST:

Pre-Launch and Post-Launch QA:

  • Check that copy and design are correct for control and treatments in the “reference browser”:
  • Ensure all added copy/design elements are there and correct
  • Ensure all removed copy/design elements are gone
  • Ensure all changed copy/design elements are correct
  • Ensure control experience is as intended for the test
  • Check page functionality:
  • Ensure all added/changed functionality is working as expected
  • Ensure all standard/business-as-usual (BAU) functionality is working as expected:
  • Go through the typical visitor path (even beyond the testing page/ location) and ensure everything functions as expected
  • Make sure links go where they're supposed to, fields work as expected, and data passes as expected from page to page.
  • Check across multiple browser sizes (desktop, tablet, mobile)
  • If site is responsive, scale the browser from full screen down to mobile and check to ensure all the page breaks look correct
  • Where possible, visit the page the way a typical visitor would hit the page (e.g., through PPC Ad, organic search result, specific link/button on site, through email)
DATA QA CHECKLIST:

Pre-Launch QA Checklist (complete on Staging and Production as applicable):

  • Verify all metrics listed in the experiment design are present in analytics portal
  • Verify all new tracking metrics’ names match metrics’ names from tracking document
  • Verify all metrics are present in control and treatment(s) (where applicable)
  • Verify conversion(s) are present in control and treatment(s) (where possible)
  • Verify any metrics tracked in a secondary analytics portal (where applicable)
  • Immediately communicate any issues that arise to the dev lead and project team
  • Notify dev lead and project team when Data QA is complete (e-mail preferably)

Post-Launch QA / First Data Pull:

  • Ensure all metrics for control and treatment(s) are receiving traffic
  • Ensure traffic levels are in line with the pre-test levels used for test duration estimation
  • Update Test Duration Estimation if necessary
  • Immediately communicate any issues that arise to the project team
  • Notify dev lead and project team when first data pull is complete (e-mail preferably)
DESIGN QA CHECKLIST:

Pre-Launch Review:

  • Verify intended desktop functionality (if applicable)
  • Accordions
  • Error states
  • Fixed Elements (nav, growler, etc.)
  • Form fields
  • Hover states – desktop only
  • Links
  • Modals
  • Sliders
  • Verify intended tablet functionality (if applicable)
  • Accordions
  • Error states
  • Fixed Elements (nav, growler, etc.)
  • Form fields
  • Gestures – touch device only
  • Links
  • Modals
  • Responsive navigation
  • Sliders
  • Verify intended mobile functionality (if applicable)
  • Accordions
  • Error states
  • Fixed Elements (nav, growler, etc.)
  • Form fields
  • Gestures – touch device only
  • Links
  • Modals
  • Responsive navigation
  • Sliders
  • Verify layout, spacing and flow of elements
  • Padding/Margin
  • “In-between” breakpoint layouts (as these are not visible in the comps)
  • Any “of note” screen sizes that may affect test goals (For example: small laptop 1366×768 pixels, 620px of height visibility)
  • Verify imagery accuracy, sizing and placement
  • Images (Usually slices Design provided to Dev)
  • Icons (Could be image, svg or font)
  • Verify Typography
  • Color
  • Font-size
  • Font-weight
  • Font-family
  • Line-height

Qualifying questions, if discrepancies are found:

  • Is there an extremely strict adherence to brand standards?
  • Does it impact the hierarchy of the page information?
  • Does it appear broken/less credible?
  • Immediately communicate any issues that arise to the dev lead and project team
  • Notify dev lead and project team when data QA is complete (e-mail preferably)

To download a free PDF of this checklist, simply complete the below form.


___________________________________________________________________________________

Increase Your Mobile Conversion Rates: New micro course 

Hopefully, this Mobile QA Checklist helps your team successfully launch tests that have mobile traffic. But you still may be left with the question — what should I test to increase conversion?

MECLABS Institute has created five micro classes (each under 12 minutes) based on 25 years of research to help you maximize the impact of your messages in a mobile environment.

In the complimentary Mobile Conversion Micro Course, you will learn:

  • The 4 most important elements to consider when optimizing mobile messaging
  • How a large telecom company increased subscriptions in a mobile cart by 16%
  • How the same change in desktop and mobile environments had opposing effects on conversion

Register Now for Free

The post Mobile A/B Testing: Quality assurance checklist appeared first on MarketingExperiments.

Most Popular MarketingExperiments Articles of 2018

Let’s get right into it. Here are your marketing peers’ favorite articles from 2018 …

Heuristic Cheat Sheet: 10 methods for improving your marketing

Marketing — far more so than other business disciplines — seems to be driven by gut. Or the individual star performer.

Marketing embraces a far less methodological approach than, say, accounting or manufacturing.

In this article, we provide a quick look at heuristics (aka methodology-based thought tools) created by MECLABS Institute (parent research organization of MarketingExperiments) to help marketing teams consistently deliver at a high level.

In this article, you’ll find heuristics to help you increase conversion, create effective email messaging, launch projects in the most effective order and more.

READ THE ARTICLE

 

Conversion Lifts in 10 Words or Less: Real-world experiments where minor copy changes produced major conversion lifts

Sometimes it can seem like a massive lift to really move the needle. A new technology implementation. Investing in a vast campaign to drive more interest.

But marketing, at its core, is communication. Get that right and you can drive a significant difference in your marketing results.

This 13-minute video examines five experiments where small copywriting changes had a large impact.

WATCH THE VIDEO

 

Mental Cost: Your customers pay more than just money

The monetary price of a product isn't the only cost for customers. Understanding (and optimizing for) non-monetary costs can lead to significant conversion gains.

What costs are you inadvertently thrusting on your customers? And how can you reduce them?

READ THE ARTICLE

 

Not all of the most impactful articles from 2018 were published this year. Here are some evergreen topics that were especially popular with your peers …

A/B Testing: Example of a good hypothesis

Hypotheses should be an evergreen topic for marketers engaged in A/B testing. If you're unfamiliar with hypothesis-based testing, this article offers a simple process to start shaping your thinking.

Raphael Paulin-Daigle advises in his blog article 41 Detailed A/B Testing Strategies to Skyrocket Your Testing Skills, “A trick to formulate a good hypothesis is to follow MarketingExperiment’s formula.”

Read this article to learn what a hypothesis is, and a simple method for formulating a good hypothesis.

READ THE ARTICLE

(Editor’s Note: Our hypothesis methodology has advanced further since this article was published in 2013. You can find a more advanced explanation of hypothesis methodology in The Hypothesis and the Modern-Day Marketer as well as a discussion of hypothesis-driven testing in action in Don’t Test Marketing Ideas, Test Customer Hypotheses.)

 

Interpreting Results: Absolute difference versus relative difference

“NASA lost its $125-million Mars Climate Orbiter because spacecraft engineers failed to convert from English to metric measurements when exchanging vital data before the craft was launched,” Robert Lee Holtz reported in the Los Angeles Times.

Numbers are crucial for A/B Testing and CRO as well. So make sure you understand the vital distinction between absolute difference and relative difference. Much like English and metric measurements, they measure the same thing but in a different way.

I have interviewed marketers before who bragged about a 3% conversion increase from a test, and I mentioned that while I was happy for them, it didn’t seem huge. Only then did they explain that their company’s conversion rate had been 2% and they increased it to 5%.

While that's a 3-percentage-point absolute difference, it's a 150% relative difference. The relative difference communicates the true impact of the test, and every business leader who learns of it will better understand the impact when the 150% number is used instead of the 3% number.
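
If it helps to see it spelled out, here is a minimal sketch of the same arithmetic using the 2% and 5% figures above:

```python
baseline, new = 0.02, 0.05  # the 2% -> 5% example above

absolute_difference = new - baseline               # 0.03, i.e., 3 percentage points
relative_difference = (new - baseline) / baseline  # 1.50, i.e., a 150% lift

print(f"{absolute_difference:.0%} absolute, {relative_difference:.0%} relative")
```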

READ THE ARTICLE

 

6 Good (and 2 Bad) B2B and B2C Value Proposition Examples

What does a good value proposition look like? It’s a question we get asked often, and the article that answers that question was popular among marketers.

Check out these B2B and B2C examples. We included some bad examples for balance as well.

READ THE ARTICLE

 

Customer Value: The 4 essential levels of value propositions

Some marketers think that the only value proposition that matters is the overall unique value proposition for the company. This can be disheartening because it is difficult for the average marketer to have a significant impact on that value prop (especially in a very large company).

In this article, we explore different levels of value proposition, including ones that even the more junior marketer impacts on an almost daily basis. At work, and even in life.

READ THE ARTICLE

 

Related Resources

Here is some more content that was popular with the MarketingExperiments audience this year …

Conversion Marketing Methodology

Powerful Value Propositions: How to optimize this critical marketing element – and lift your results

Research Archive

The post Most Popular MarketingExperiments Articles of 2018 appeared first on MarketingExperiments.

Landing Page Optimization: Free worksheet to help you balance segmentation and resources

All things being equal, the more segmented and targeted your landing page is, the higher your conversion rate will be. Everyone in marketing knows that.

However, the other part of the equation is rarely talked about: the more segmented and targeted your landing page is, the more resources (time, focus, development, agency hours, etc.) it will likely take.

Sure, there are some tools that will automate this process by automatically displaying, say, a relevant product recommendation. There are some that will reduce, but not eliminate, extra work by pulling in anything from a relevant dynamic keyword change to entirely different content blocks.

But for most companies today, getting more segmented with their landing pages is going to take time or money that could be spent on something else.

So how do you find the balance? When is it worth launching a new landing page to serve a new motivation, and when can you just try to make your current landing pages serve multiple motivations?

We’ve created a free worksheet to help you make that decision and (if necessary) get budget approval on that decision from a business leader or client.

 

Click Here to Download Your FREE Landing Page Segmentation Worksheet Instantly

(no form to fill out, just click to get your instant download of this PDF-based tool)

 

This quick-and-easy tool helps you decide when you need a new landing page to target a more specific audience. Here is a quick breakdown of some of the fields you will find in the worksheet, which has fillable form fields to make recording all the info easy for you.

Step 1: What do you know about the customers?

Who are your ideal customers? It’s important to know which customers your product can best serve so you can make the right promise with your marketing.

Possible sources of data to answer this question include transactional data, social media reviews, customer interviews, customer service interactions, and A/B testing. The most popular way to learn about customers is with an internal metric analysis, which is used by 69% of companies.

You’ll want to know demographics like age(s), gender(s), education, income(s), location(s) and other factors that are important to your product.

You’ll also want to know psychographics like what they move toward (their goals), what they move away from (their pains) and what value(s) they derive from your product purchase.

You also want to know who the page needs to serve from among the customers. Is it someone who has never visited before and is unaware of the category value? A repeat purchaser? And so on. Knowing their previous relationship to the landing page, your company and your products is important to creating high-converting landing pages.

Step 2: Based on what you know, what can you hypothesize about the customers?

What are the motivations of visitors? Visitor motivation has the greatest impact on conversion, according to the MECLABS Institute Conversion Sequence Heuristic. You can get indications about what motivations these visitors might have, based on sources like inbound traffic sources, previous pages viewed, A/B testing results, site search keywords, PPC keywords, customer service questions, and testing, not to mention the previous info you've already completed about demographics, psychographics and the like.

You want to hypothesize what different motivations visitors might have, and why they have that motivation (keep asking why until you get to the core motivation; this can be very informative).

For example, I have a Nissan LEAF. I had multiple motivations for buying a LEAF. Motivation A was to get a zero-emission car. Motivation B was to save money on gas, maintenance, etc.

Drilling down into Motivation A, why did I want a zero-emission car? Because I didn’t want to pollute. Why? Because I didn’t want to increase local air pollution or add to climate change. Why? Because my kids breathe the local air and will be impacted by climate change.

Getting down to the core motivation might create messaging that taps deeper into your customers’ wants and needs than simply mentioning the features of the product.

Which brings up the next question. What must the landing page do to serve these motivations? You can use the previous info, previous customers, analytics, previous purchases — and intuition to answer that question.

Essentially, you want to be able to fill in the blanks: The landing page must do ________________ so customers can ______________. Use as many as apply to the motivations you are trying to meet. Is there a natural grouping? Are they very different?

Using the car example from earlier, the landing page must do a good job tapping into customers' desire for a better, cleaner world so customers can see the deeper environmental impact of driving a zero-emission vehicle.

Step 3: Based on customer motivations, does it make business sense to create a new landing page?

This is where the rubber meets the road (car analogies notwithstanding). All marketers are pro segmentation. But you can’t do everything.

On the flip side, marketers can underinvest in their landing pages and overinvest in traffic driving and ultimately leak money by having too few, unsegmented landing pages that are trying to do too much for too many different motivations — and thus, doing none of them well.

Does it make business sense to make a new, more segmented landing page? Three more landing pages?  Dozens of dynamically generated content boxes or headlines targeting different motivations for a specific landing page?

Now that you have a sense of the different motivations you’re trying to serve, you should ask what distinct customer sets these customers represent, and what percent of profits each generates. If it helps to identify them, assign a name to customer sets that have similar motivations. Whether it’s something like Aspirational Suburbanites or Laid-back Lindas, some element of personification can help you feel closer to the customer. You should combine your transactional and analytics data with the previously completed info to arrive at the customer sets and percent of profit generated by each.

This is the value side of the equation.

For the cost side of the equation, you need to ask how many resources it takes to create a new landing page. Based on your work with web or design agencies, outside consultants and internal development teams, it helps to put a cost to the work even if it's internal salaried employee time that you won't technically be billed for. That will help you understand if there is an ROI for the work. Costs you want to consider are your marketing team, copy, design, development, conversion optimization and A/B testing.
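
If it helps, you can reduce that value-versus-cost comparison to a rough back-of-the-envelope calculation. Every figure below is a made-up placeholder; substitute your own segment profit, expected lift and build cost:

```python
# Hypothetical worksheet math: compare the expected incremental profit from a more
# segmented landing page against the cost of building it. All numbers are placeholders.
segment_annual_profit = 400_000  # profit attributed to this customer set per year
expected_relative_lift = 0.10    # your estimate of the lift a more targeted page could deliver
build_cost = 15_000              # copy, design, dev and testing time, even if it's internal

incremental_profit = segment_annual_profit * expected_relative_lift
roi = (incremental_profit - build_cost) / build_cost
print(f"Expected incremental profit: ${incremental_profit:,.0f}, ROI: {roi:.0%}")
```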

Decision: Do I need a new landing page?

With this info, you can decide if you need a new landing page. Does the landing page you already have or the one you are currently developing closely enough match the motivations of the profitable core of customers? Will the landing page, with edits, match the motivations of the profitable core of customers? Or is a new landing page needed to more closely serve the motivations of a profitable subgroup of customers?

Seeing the amount of business you can get — and the cost it will take to get you there — can help you get past the simple idea that segmentation is good or that your current landing page is good enough for all customers. You can move on with a deeper understanding of whether or not your business should invest in a more segmented landing page(s) to better tap into motivations of a uniquely motivated (and profitable) set of customers.

Use this worksheet to make the decision for yourself and make the case for budget to your business leaders and clients.

 

Click Here to Download Your FREE Landing Page Segmentation Worksheet Instantly

(no form to fill out, just click to get your instant download of this PDF-based tool)

 

Special thanks to MECLABS Web Designer Chelsea Schulman for designing this sharp-looking interactive worksheet.

Related Resources

Lead your team to breakthrough results with A Model of your Customer’s Mind – These 21 charts and tools have helped capture more than $500 million in (carefully measured) test wins.

B2B Marketing: Homepage segmentation effort increases time spent on site 171%

The Benefits of Combining Content Marketing and Segmentation

MECLABS Landing Page Optimization online certification course

The post Landing Page Optimization: Free worksheet to help you balance segmentation and resources appeared first on MarketingExperiments.

A/B Testing: Why do different sample size calculators and testing platforms produce different estimates of statistical significance?

A/B testing is a powerful way to increase conversion (e.g., 638% more leads, 78% more conversion on a product page, etc.).

Its strength lies in its predictive ability. When you implement the alternate version suggested by the test, your conversion funnel actually performs the way the test indicated that it would.

To help determine that, you want to ensure you’re running valid tests. And before you decide to implement related changes, you want to ensure your test is conclusive and not just a result of random chance. One important element of a conclusive test is that the results show a statistically significant difference between the control and the treatment.

Many platforms will include something like a “statistical significance status” with your results to help you determine this. There are also several sample size calculators available online, and different calculators may suggest you need different sample sizes for your test.

But what do those numbers really mean? We’ll explore that topic in this MarketingExperiments article.

A word of caution for marketing and advertising creatives: This article includes several paragraphs that talk about statistics in a mathy way — and even contains a mathematical equation (in case these may pose a trigger risk for you). Even so, we’ve done our best to use them only where they serve to clarify rather than complicate.

Why does statistical significance matter?

To set the stage for talking about sample size and statistical significance, it’s worth mentioning a few words about the nature and purpose of testing (aka inferential experimentation) and the nomenclature we’ll use.

We test in order to infer some important characteristics about a whole population by observing a small subset of members from the population called a “Sample.”

MECLABS metatheory dubs a test that successfully accomplishes this purpose a “Useful” test.

The Usefulness (predictiveness) of a test is affected by two key features: “Validity” and “Conclusiveness.”

Statistical significance is one factor that helps to determine if a test is useful. A useful test is one that can be trusted to accurately reflect how the “system” will perform under real-world conditions.

Having an insufficient sample size presents a validity threat known as Sample Distortion Effect. This is a danger because if you don’t get a large enough sample size, any apparent performance differences may have been due to random variation and not true insights into your customers’ behavior. This could give you false confidence that a landing page change that you tested will improve your results if you implement it, when it actually won’t.

“Seemingly unlikely things DO sometimes happen, purely ‘by coincidence’ (aka due to random variation). Statistical methods help us to distinguish between valuable insights and worthless superstitions,” said Bob Kemper, Executive Director, Infrastructure Support Services at MECLABS Institute.

“By our very nature, humans are instinctively programmed to seek out and recognize patterns: think ‘Hmm, did you notice that the last five people who ate those purplish berries down by the river died the next day?’” he said.

A conclusive test is a valid test (there are other validity threats in addition to the sample distortion effect) that has reached a desired Level of Confidence, or LoC (95% is the most commonly used standard).

In practice, at 95% LoC, the 95% confidence interval for the difference between control and treatment rates of the key performance indicator (KPI) does not include zero.

A simple way to think of this is that a conclusive test means you are 95% confident the treatment will perform at least as well as the control on the primary KPI. So the performance you’ll actually get, once it’s in production for all traffic, will be somewhere inside that confidence interval. Determining the level of confidence requires some math.
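If you want to see what that math looks like, here is a minimal sketch of a two-proportion z-test in Python. It is not the MECLABS tooling, and the function name and test numbers are made up for illustration; it simply shows how an observed difference between control and treatment translates into a level of confidence and a confidence interval for the difference.

```python
from math import sqrt
from statistics import NormalDist

def significance_summary(conversions_c, visitors_c, conversions_t, visitors_t, loc=0.95):
    """Two-proportion z-test: the level of confidence that control and treatment
    truly differ, plus a confidence interval for the difference."""
    p_c = conversions_c / visitors_c
    p_t = conversions_t / visitors_t
    diff = p_t - p_c

    # Pooled standard error for the test statistic
    p_pool = (conversions_c + conversions_t) / (visitors_c + visitors_t)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / visitors_c + 1 / visitors_t))
    z = diff / se_pool

    # Two-sided level of confidence implied by the observed data (1 minus the p-value)
    observed_loc = 2 * NormalDist().cdf(abs(z)) - 1

    # Confidence interval for the difference, at the requested level of confidence
    se_unpooled = sqrt(p_c * (1 - p_c) / visitors_c + p_t * (1 - p_t) / visitors_t)
    z_crit = NormalDist().inv_cdf(1 - (1 - loc) / 2)
    interval = (diff - z_crit * se_unpooled, diff + z_crit * se_unpooled)
    return diff, observed_loc, interval

# Hypothetical results: 500 conversions from 10,000 control visitors,
# 570 conversions from 10,000 treatment visitors
diff, loc, (low, high) = significance_summary(500, 10_000, 570, 10_000)
print(f"Observed difference: {diff:.2%}")
print(f"Level of confidence: {loc:.1%}")
print(f"95% CI for the difference: [{low:.2%}, {high:.2%}]")
```

With these made-up numbers the interval excludes zero, which is exactly the condition described above for a conclusive test at 95% LoC.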

Why do different testing platforms and related tools offer such disparate estimates of required sample size? 

One of MECLABS Institute’s Research Partners who is president of an internet company recently asked our analysts about this topic. His team found a sample size calculator tool online from a reputable company and noticed how different its estimate of minimum sample size was compared to the internal tool MECLABS analysts use when working with Research Partners (MECLABS is the parent research organization of MarketingExperiments).

The simple answer is that the two tools approach the estimation problem using different assumptions and statistical models, much the way there are several competing models for predicting the path of hurricanes and tropical storms.

Living in Jacksonville, Florida, an area that is often under hurricane threats, I can tell you there’s been much debate over which among the several competing models is most accurate (and now there’s even a newer, Next Gen model). Similarly, there is debate in the optimization testing world about which statistical models are best.

The goal of this article isn’t to take sides, just to give you a closer look at why different tools produce different estimates. It’s not that the math is “wrong” in any of them; they simply employ different approaches.

“While the underlying philosophies supporting each differ, and they approach empirical inference in subtly different ways, both can be used profitably in marketing experimentation,” said Danitza Dragovic, Digital Optimization Specialist at MECLABS Institute.

In this case, in seeking to understand the business implications of test duration and confidence in results, it was understandably confusing for our Research Partner to see different sample size calculations based upon the tool used. It wasn’t clear that a pre-determined sample size is fundamental to testing in some calculations, while other platforms ultimately determine test results irrespective of pre-determined sample sizes, using prior probabilities assigned by the platform, and provide sample size calculators simply as a planning tool.

Let’s take a closer look at each …

Classical statistics 

The MECLABS Test Protocol employs a group of statistical methods based on the “Z-test,” arising from “classical statistics” principles that adopt a Frequentist approach, which makes predictions using only data from the current experiment.

With this method, recent traffic and performance levels are used to compute a single fixed minimum sample size before launching the test.  Status checks are made to detect any potential test setup or instrumentation problems, but LoC (level of confidence) is not computed until the test has reached the pre-established minimum sample size.

While it has historically been the most commonly used approach for scientific and academic experimental research over the last century, this classical approach is now being met by theoretical and practical competition from tools that use (or incorporate) a different statistical school of thought based upon the principles of Bayesian probability theory. Though Bayesian theory is far from new (Thomas Bayes proposed its foundations more than 250 years ago), its practical application for real-time optimization research required computational speed and capacity only recently available.

Breaking Tradition: Toward optimization breakthroughs

“Among the criticisms of the traditional frequentist approach has been its counterintuitive ‘negative inference’ approach and thought process, accompanied by a correspondingly ‘backwards’ nomenclature. For instance, you don’t ‘prove your hypothesis’ (like normal people), but instead you ‘fail to reject your Null hypothesis’ — I mean, who talks (or thinks) like that?” Kemper said.

He continued, “While Bayesian probability is not without its own weird lexical contrivances (Can you say ‘posterior predictive’?), its inferential frame of reference is more consistent with the way most people naturally think, like assigning the ’probability of a hypothesis being True’ based on your past experience with such things. For a purist Frequentist, it’s impolite (indeed sacrilegious) to go into a test with a preconceived ‘favorite’ or ‘preferred answer.’ One must simply objectively conduct the test and ‘see what the data says.’ As a consequence, the statement of the findings from a typical Bayesian test — i.e., a Bayesian inference — is much more satisfying to a non-specialist in science or statistics than is an equivalent traditional/frequentist one.”

Hybrid approaches

Some platforms use a sequential likelihood ratio test that combines a Frequentist approach with a Bayesian approach. The adjective “sequential” refers to the approach’s continual recalculation of the minimum sample size for sufficiency as new data arrives, with the goal of minimizing the likelihood of a false positive arising from stopping data collection too soon.

Although an online test estimator using this method may give a rough sample size, this method was specifically designed to avoid having to rely on a predetermined sample size, or predetermined minimum effect size. Instead, the test is monitored, and the tool indicates at what point you can be confident in the results.

In many cases, this approach may result in shorter tests due to unexpectedly high effect sizes. But when tools employ proprietary methodologies, the way that minimum sample size is ultimately determined may be opaque to the marketer.

Considerations for each of these approaches

Classical “static” approaches

Classical statistical tests, such as Z-tests, are the de facto standard across a broad spectrum of industries and disciplines, including academia. They arise from the concepts of normal distribution (think bell curve) and probability theory described by mathematicians Abraham de Moivre and Carl Friedrich Gauss in the 18th and 19th centuries. (Normal distribution is also known as Gaussian distribution.) Z-tests are commonly used in medical and social science research.

They require you to estimate the minimum detectable effect-size before launching the test and then refrain from “peeking at” Level of Confidence until the corresponding minimum sample size is reached.  For example, the MECLABS Sample Size Estimation Tool used with Research Partners requires that our analysts make pre-test estimates of:

  • The projected success rate — for example, conversion rate, clickthrough rate (CTR), etc.
  • The minimum relative difference you wish to detect — how big a difference is needed to make the test worth conducting? The greater this “effect size,” the fewer samples are needed to confidently assert that there is, in fact, an actual difference between the treatments. Of course, the smaller the design’s “minimum detectable difference,” the more samples are needed to detect it.
  • The statistical significance level — this is the probability of accidentally concluding there is a difference due to sampling error when really there is no difference (aka Type I error). MECLABS recommends a five percent statistical significance level, which equates to a 95% desired Level of Confidence (LoC).
  • The arrival rate in terms of total arrivals per day — this would be your total estimated traffic level if you’re testing landing pages. “For example, if the element being tested is a page in your ecommerce lower funnel (shopping cart), then the ‘arrival rate’ would be the total number of visitors who click the ‘My Cart’ or ‘Buy Now’ button, entering the shopping cart section of the sales funnel and who will experience either the control or an experimental treatment of your test,” Kemper said.
  • The number of primary treatments — for example, this would be two if you’re running an A/B test with a control and one experimental treatment.

Typically, analysts draw upon a forensic data analysis conducted at the outset combined with test results measured throughout the Research Partnership to arrive at these inputs.

“Dynamic” approaches 

Dynamic, or “adaptive” sampling approaches, such as the sequential likelihood ratio test, are a more recent development and tend to incorporate methods beyond those recognized by classical statistics.

In part, these methods weren’t introduced sooner due to technical limitations. Because adaptive sampling approaches employ frequent computational reassessment of sample size sufficiency, and may even adjust the balance of incoming traffic among treatments, they were impractical until they could be hosted on machines with the computing capacity to keep up.

One potential benefit can be the test duration. “Under certain circumstances (for example, when actual treatment performance is very different from test-design assumptions), tests may be able to be significantly foreshortened, especially when actual treatment effects are very large,” Kemper said.

This is where prior data is so important to this approach. The model can shorten test duration specifically because it takes prior data into account. An attendant limitation is that it can be difficult to identify what prior data is used and exactly how statistical significance is calculated. This doesn’t necessarily make the math any less sound or valid; it just makes it somewhat less transparent. And the quality and applicability of the priors can be critical to the accuracy of the outcome.

As Georgi Z. Georgiev explains in Issues with Current Bayesian Approaches to A/B Testing in Conversion Rate Optimization, “An end user would be left to wonder: what prior exactly is used in the calculations? Does it concentrate probability mass around a certain point? How informative exactly is it and what weight does it have over the observed data from a particular test? How robust with regards to the data and the resulting posterior is it? Without answers to these and other questions an end user might have a hard time interpreting results.”

As with other things unique to a specific platform, it also impinges on the portability of the data, as Georgiev explains:

A practitioner who wants to do that [compare results of different tests run on different platforms] will find himself in a situation where it cannot really be done, since a test ran on one platform and ended with a given value of a statistic of interest cannot be compared to another test with the same value of a statistic of interest ran on another platform, due to the different priors involved. This makes sharing of knowledge between practitioners of such platforms significantly more difficult, if not impossible since the priors might not be known to the user.

Interpreting MECLABS (classical approach) test duration estimates 

At MECLABS, the estimated minimum required sample size for most experiments conducted with Research Partners is calculated using classical statistics. For example, the formula for computing the number of samples needed for two proportions that are evenly split (uneven splits use a different and slightly more complicated formula) is provided by:

δ = z √( 2p(1 − p) / n )

Solving for n yields:

n = 2z²p(1 − p) / δ²

Variables:

  • n: the minimum number of samples required per treatment
  • z: the Z statistic value corresponding with the desired Level of Confidence
  • p: the pooled success proportion — a value between 0 and 1 (e.g., the proportion of clicks, conversions, etc.)
  • δ: the difference of success proportions among the treatments

This formula is used for tests that have an even split among treatments.

Once “samples per treatment” (n) has been calculated, it is multiplied by the number of primary treatments being tested to estimate the minimum number of total samples required to detect the specified amount of “treatment effect” (performance lift) with at least the specified Level of Confidence, presuming the selection of test subjects is random.

The estimated test duration, typically expressed in days, is then calculated by dividing the required total sample size by the expected average traffic level, expressed as visitors per day arriving at the test.
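Here is a minimal sketch of that calculation in Python, based on the even-split formula above. The function names and inputs are made up for illustration; the actual MECLABS Sample Size Estimation Tool uses the inputs listed earlier and may make additional adjustments.

```python
from math import ceil
from statistics import NormalDist

def samples_per_treatment(p, relative_difference, loc=0.95):
    """Minimum samples per treatment for an evenly split test,
    using n = 2 z^2 p (1 - p) / delta^2 from the formula above."""
    delta = p * relative_difference                 # minimum detectable absolute difference
    z = NormalDist().inv_cdf(1 - (1 - loc) / 2)     # Z statistic for the desired LoC
    return ceil(2 * z**2 * p * (1 - p) / delta**2)

def estimated_duration_days(p, relative_difference, arrivals_per_day, treatments=2, loc=0.95):
    """Total required samples divided by expected daily traffic to the test."""
    total_samples = samples_per_treatment(p, relative_difference, loc) * treatments
    return total_samples / arrivals_per_day

# Hypothetical inputs: 5% conversion rate, 10% minimum relative lift,
# 2,000 arrivals per day split across a control and one treatment
print(f"Samples per treatment: {samples_per_treatment(0.05, 0.10):,}")
print(f"Estimated duration: {estimated_duration_days(0.05, 0.10, 2_000):.0f} days")
```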

Finding your way 

“As a marketer using experimentation to optimize your organization’s sales performance, you will find your own style and your own way to your destination,” Kemper said.

“Like travel, the path you choose depends on a variety of factors, including your skills, your priorities and your budget. Getting over the mountains, you might choose to climb, bike, drive or fly; and there are products and service providers who can assist you with each,” he advised.

Understanding sampling method and minimum required sample size will help you to choose the best path for your organization. This article is intended to provide a starting point. Take a look at the links to related articles below for further research on sample sizes in particular and testing in general.

Related Resources

17 charts and tools have helped capture more than $500 million in (carefully measured) test wins

MECLABS Institute Online Testing on-demand certification course

Marketing Optimization: How To Determine The Proper Sample Size

A/B Testing: Working With A Very Small Sample Size Is Difficult, But Not Impossible

A/B Testing: Split Tests Are Meaningless Without The Proper Sample Size

Two Factors that Affect the Validity of Your Test Estimation

Frequentist A/B test (good basic overview by Ethen Liu)

Bayesian vs Frequentist A/B Testing – What’s the Difference? (by Alex Birkett on ConversionXL)

Thinking about A/B Testing for Your Client? Read This First. (by Emīls Vēveris on Shopify)

On the scalability of statistical procedures: why the p-value bashers just don’t get it. (by Jeff Leek on SimplyStats)

Bayesian vs Frequentist Statistics (by Leonid Pekelis on Optimizely Blog)

Statistics for the Internet Age: The Story Behind Optimizely’s New Stats Engine (by Leonid Pekelis on Optimizely Blog)

Issues with Current Bayesian Approaches to A/B Testing in Conversion Rate Optimization (by Georgi Z. Georgiev on Analytics-Toolkit.com)

 


A/B Testing Prioritization: The surprising ROI impact of test order

I want everything. And I want it now.

I’m sure you do, too.

But let me tell you about my marketing department. Resources aren’t infinite. I can’t do everything right away. I need to focus myself and my team on the right things.

Unless you found a genie in a bottle and wished for an infinite marketing budget (right after you wished for unlimited wishes, natch), I’m guessing you’re in the same boat.

When it comes to your conversion rate optimization program, it means running the most impactful tests. As Stephen Walsh said when he wrote about 19 possible A/B tests for your website on Neil Patel’s blog, “testing every random aspect of your website can often be counter-productive.”

Of course, you probably already know that. What may surprise you is this …

It’s not enough to run the right tests; you will get a higher ROI if you run them in the right order

To help you discover the optimal testing sequence for your marketing department, we’ve created the free MECLABS Institute Test Planning Scenario Tool (MECLABS is the parent research organization of MarketingExperiments).

Let’s look at a few example scenarios.

Scenario #1: Level of effort and level of impact

Tests will have different levels of effort to run. For example, it’s easier to make a simple copy change to a headline than to change a shopping cart.

This level of effort (LOE) sometimes correlates with the level of impact the test will have on your bottom line. For example, a radical redesign might be a higher LOE to launch, but it will also likely produce a higher lift than a simple, small change.

So how does the order in which you run a high-effort, high-return test and a low-effort, low-return test affect results? Again, we’re not saying choose one test over another. We’re simply talking about timing. To the test planning scenario tool …

Test 1 (Low LOE, low level of impact)

  • Business impact — 15% more revenue than the control
  • Build Time — 2 weeks

Test 2 (High LOE, high level of impact)

  • Business impact — 47% more revenue than the control
  • Build Time — 6 weeks

Let’s look at the revenue impact over a six-month period. According to the test planning tool, if the control is generating $30,000 in revenue per month, running a test where the treatment has a low LOE and a low level of impact (Test 1) first will generate $22,800 more revenue than running a test where the treatment has a high LOE and a high level of impact (Test 2) first.
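Here is a minimal sketch, in Python, of the kind of cumulative-revenue comparison the Test Planning Scenario Tool performs. It is not the tool itself: the four-week run time per test, the parallel-build/one-test-at-a-time model and the compounding of winning lifts are all assumptions made purely for illustration, so the dollar figures it prints will not match the tool’s output. It does show why the same two tests produce different six-month revenue depending on which runs first.

```python
WEEKS = 26                               # roughly six months
BASELINE_PER_WEEK = 30_000 * 12 / 52     # $30,000/month expressed per week
RUN_WEEKS = 4                            # assumed run time per test (not stated in the article)

def total_revenue(test_order):
    """test_order: list of (lift, build_weeks) tuples in the sequence the tests run.
    Assumptions: builds start in parallel at week 0, only one test runs on the
    page at a time, and each winner is implemented when its run ends (lifts compound)."""
    revenue = 0.0
    factor = 1.0    # current page performance relative to the original control
    week = 0.0      # current point on the calendar
    for lift, build_weeks in test_order:
        live_at = min(max(week, build_weeks) + RUN_WEEKS, WEEKS)
        revenue += (live_at - week) * BASELINE_PER_WEEK * factor
        week = live_at
        factor *= 1 + lift   # winning treatment rolled out to all traffic
    revenue += (WEEKS - week) * BASELINE_PER_WEEK * factor
    return revenue

test_1 = (0.15, 2)   # low LOE, low impact
test_2 = (0.47, 6)   # high LOE, high impact

print(f"Low-LOE test first:  ${total_revenue([test_1, test_2]):,.0f}")
print(f"High-LOE test first: ${total_revenue([test_2, test_1]):,.0f}")
```

Swap in your own lifts, build times, run times and baseline to rough out your own scenarios before committing to a sequence.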

Scenario #2: An even larger discrepancy in the level of impact

It can be hard to predict the exact level of business impact. So what if the business impact differential between the higher-LOE test and the lower-LOE test is even greater than in Scenario #1, and both treatments perform even better than they did in Scenario #1? How would test sequence affect results in that case?

Let’s run the numbers in the Test Planning Scenario Tool.

Test 1 (Low LOE, low level of impact)

  • Business impact — 25% more revenue than the control
  • Build Time — 2 weeks

Test 2 (High LOE, high level of impact)

  • Business impact — 125% more revenue than the control
  • Build Time — 6 weeks

According to the test planning tool, if the control is generating $30,000 in revenue per month, running Test 1 (low LOE, low level of impact) first will generate $45,000 more revenue than running Test 2 (high LOE, high level of impact) first.

Again, these are the same tests (over a six-month period), just run in a different order. And you gain $45,000 more in revenue.

“It is particularly interesting to see the benefits of running the lower LOE and lower impact test first so that its benefits could be reaped throughout the duration of the longer development schedule on the higher LOE test. The financial impact difference — landing in the tens of thousands of dollars — may be particularly shocking to some readers,” said Rebecca Strally, Director, Optimization and Design, MECLABS Institute.

Scenario #3: Fewer development resources

In the above two examples, the tests were able to be developed simultaneously. What if the tests cannot be developed simultaneously (must be developed sequentially), and each one can’t be developed until the previous test has been implemented? Perhaps this is because of your organization’s development methodology (Agile vs. Waterfall, etc.), or there is simply a limit on your development resources. (They likely have many other projects besides developing your tests.)

Let’s look at that scenario, this time with three treatments.

Test 1 (Low LOE, low level of impact)

  • Business impact — 10% more revenue than the control
  • Build Time — 2 weeks

Test 2 (High LOE, high level of impact)

  • Business impact — 360% more revenue than the control
  • Build Time — 6 weeks

Test 3 (Medium LOE, medium level of impact)

  • Business impact — 70% more revenue than the control
  • Build Time — 3 weeks

In this scenario, the two highest-performing sequences were Test 2, then Test 1, then Test 3; and Test 2, then Test 3, then Test 1. The lowest-performing sequence was Test 3, then Test 1, then Test 2. The difference was $894,000 more revenue from using one of the highest-performing test sequences versus the lowest-performing test sequence.

“If development for tests could not take place simultaneously, there would be a bigger discrepancy in overall revenue from different test sequences,” Strally said.

“Running a higher LOE test first suddenly has a much larger financial payoff. This is notable because once the largest impact has been achieved, it doesn’t matter in what order the smaller LOE and impact tests are run, the final dollar amounts are the same. Development limitations (although I’ve rarely seen them this extreme in the real world) created a situation where whichever test went first had a much longer opportunity to impact the final financial numbers. The added front time certainly helped to push running the highest LOE and impact test first to the front of the financial pack,” she added.

The Next Scenario Is Up To You: Now forecast your own most profitable test sequences

You likely don’t have the exact perfect information we provided in the scenarios. We’ve provided model scenarios above, but the real world can be trickier. After all, as Nobel Prize-winning physicist Niels Bohr said, “Prediction is very difficult, especially if it’s about the future.”

“We rarely have this level of information about the possible financial impact of a test prior to development and launch when working to optimize conversion for MECLABS Research Partners. At best, the team often only has a general guess as to the level of impact expected, and it’s rarely translated into a dollar amount,” Strally said.

That’s why we’re providing the Test Planning Scenario Tool as a free, instant download. It’s easy to run a few different scenarios in the tool based on different levels of projected results and see how the test order can affect overall revenue. You can then use the visual charts and numbers created by the tool to make the case to your team, clients and business leaders about what order you should run your company’s tests.

Don’t put your tests on autopilot

Of course, things don’t always go according to plan. This tool is just a start. To have a successful conversion optimization practice, you have to actively monitor your tests and advocate for the results because there are a number of additional items that could impact an optimal testing sequence.

“There’s also the reality of testing which is not represented in these very clean charts. For example, things like validity threats popping up midtest and causing a longer run time, treatments not being possible to implement, and Research Partners requesting changes to winning treatments after the results are in, all take place regularly and would greatly shift the timing and financial implications of any testing sequence,” Strally said.

“In reality though, the number one risk to a preplanned DOE (design of experiments) in my experience is an unplanned result. I don’t mean the control winning when we thought the treatment would outperform. I mean a test coming back a winner in the main KPI (key performance indicator) with an unexpected customer insight result, or an insignificant result coming back with odd customer behavior data. This type of result often creates a longer analysis period and the need to go back to the drawing board to develop a test that will answer a question we didn’t even know we needed to ask. We are often highly invested in getting these answers because of their long-term positive impact potential and will pause all other work — lowering financial impact — to get these questions answered to our satisfaction,” she said.

Related Resources

MECLABS Institute Online Testing on-demand certification course

Offline and Online Optimization: Cabela’s shares tactics from 51 years of offline testing, 7 years of digital testing

Landing Page Testing: Designing And Prioritizing Experiments

Email Optimization: How To Prioritize Your A/B Testing


Customer Motivation: How a craft brewery tapped into the element that most affects conversion

If you want conversion rate increases, the No. 1 factor to consider is customer motivation, according to the Conversion Sequence Heuristic from MECLABS Institute (parent research organization of MarketingExperiments).

That’s why the letter “m” has the biggest multiplier (4) in the heuristic.

When we talk about motivation, we often talk at a granular level — understanding where traffic is coming from or where customers are in the thought sequence to help your landing page optimization.

I recently came across a great example of an entire product built solely on customer motivation: A small brand went up against a giant competitor by tapping deeply into customer motivation. You may not be able to go this far with your products, but extreme examples like this are nice because they help us brainstorm possible outside-the-box ideas we can do with our own marketing.

“It begins with an ancient story”

Our story begins with the 2017 AFC Championship football game. The Jacksonville Jaguars versus the New England Patriots. David versus Goliath. If you’re unfamiliar with this part of the story, John Malkovich tells it far better than I can.

Except, when David slew Goliath, there were no referees involved to influence the outcome. In the case of the Jaguars versus the Patriots, a controversial call by the refs decided the outcome of the game. Goliath (the Pats) went on to the Super Bowl, and David (the Jags) was sent into a long offseason.

In case you’re unfamiliar with football, I’ll briefly overexplain what happened. If you’re totally uninterested in football, feel free to skip the next two paragraphs.

The most controversial call in the game came when Jaguars linebacker Myles Jack stripped the ball (took it away) from Patriots running back Dion Lewis in the fourth quarter of the game. After stripping the ball, Jack got up and started running to the end zone for a touchdown. But he stopped because the refs blew the whistle, in effect saying he was touched by Lewis, meaning he was down by contact and the play was over.

However, upon looking at the slow-motion replay, it appears that Lewis didn’t touch Jack, and therefore Jack wasn’t down. But once a play is blown dead by the refs’ whistle in the NFL, the call can’t be overturned by instant replay. If the refs had held the whistle and allowed the play to run its course, Jack likely would have scored a touchdown, the replay would have shown he was never touched (and therefore never down), and the Jaguars would have had an insurmountable lead and headed to their first Super Bowl.

Instead, Goliath won.

This botched call became a thing. A meme. It went viral. Whatever you want to call it, it created a deep and abiding motivation in a large percentage of people living in the Jacksonville area.

Which also created an opportunity.

Every marketer faces their own Goliath

Before I complete the story, let’s jump to a challenge you likely face — how to compete with a larger rival. How do you defeat your industry’s Goliath? Unless your brand dominates its market, you likely have to face a larger competitor. In ecommerce, that competitor is Amazon. In B2B, it might be IBM. In the beer industry, that company is Anheuser-Busch InBev SA/NV and its $246.13 billion in assets.

Intuition Ale Works is a Jacksonville-based craft brewery and taproom. I don’t know the value of its assets, but it is significantly less than AB InBev.

So how to compete?

You need a compelling story powered by a forceful value proposition because you’re fighting against a whole lot of money. Money that can drive logistical efficiencies that allow your bigger competitor to be profitable at a much lower cost than you can bear. Money that can buy loads of advertising and sponsorships and endorsement and expertise.

For Intuition Ale Works, part of its value proposition is beer brewed in Jacksonville. But actually, that isn’t unique. AB InBev also has a brewery in Jacksonville.

Another part of its value prop is that Intuition has a greater degree of intimacy with its customers. It is better able to tap into their motivations.

“We try to keep a close eye on the buying patterns of our customers,” Brad Lange, Chief Operating Officer, Intuition Ale Works said. “Every morning our sales team reviews updated metrics that show how our core beers are performing. (Core beers are available year-round in package and draft format throughout Jacksonville, as opposed to seasonal, specialty and limited-release beers that have shorter lifespans). We also check the previous day’s sales report in our taproom.”

He continued, “This gives us insight into how our seasonal and specialty beers have been selling. I’d say that we are obsessed with data, at least when it comes to consumer interest in our beers. Part of this interest is business related. But at a deeper level, we want to provide Intuition drinkers with beer that they are excited about. We let the sales numbers tell us what consumers like and what they don’t.”

Customers vote for their motivations with their wallets

Intuition had a new beer in the works, brewed by owner and founder Ben Davis, that needed a name. “Our brand is typically more outdoorsy and Florida-related, and the beer names are simple and straightforward. For example, Jon Boat Coastal Ale, I-10 IPA, and King Street Stout,” Lange told me.

However, they knew the whistle heard around the city had an undeniable allure to their customers. So they decided to stray from the brand in order to tap into the customer’s motivations. The customer’s motivations trumped the company-derived brand.

“As most people in Jacksonville know by now, the phrase ‘Myles Jack Wasn’t Down’ has gone viral locally. It’s become a rallying cry, of sorts. Ben mentioned it and we all thought it was great, even though it is completely off-brand in terms of how we normally name our beers,” Lange said.

And so Myles Jack Wasn’t Down! became the name of the brewery’s latest product.

Not all purchases are logical. Customers aren’t dismal scientists, coldly calculating how supply and demand affect their decisions. The purchases that tap most deeply into their motivations are based less on product features and benefits and more on an ability to express themselves in a cold, noisy and overpowering world. “I’m here. I matter. And this is what matters to me.”

Apple understood that with its legendary Think Different campaign. “I’m a misfit, I’m a rebel, I can’t buy a PC.”

Patagonia has tapped deeply into customer motivations with its environmental activism (probably less as a marketing strategy and more as a core belief). As a result, revenue and profit have quadrupled over the past 10 years, and the company now sells about $1 billion per year in outdoor clothing and gear.

It’s difficult for a customer to logically compare the features and functions of every jacket on the market and determine which will best serve their short- and long-term needs. However, it’s easy for a customer to understand that they have a deep motivation to support public lands. And they see Patagonia is fighting for public lands against Goliath (even though the refs are being unfair). So they subconsciously think, “While I might be a mere speck of dust in this universe, I’m going to stand with Patagonia and public lands and the environment by buying this jacket.”

And so it is with beer as well. While the actual product and the football play really have nothing to do with each other, the Myles Jack Wasn’t Down! beer name has had an undeniable effect. “It has sold incredibly well. We don’t try to actively market our beers. But once we announced the name, it sort of took on a life of its own. People came in right away to try it. A lot of them have been wearing Jaguars gear. It has been a pleasant surprise for sure. Myles Jack’s family actually contacted us and are planning on stopping by to try it,” Lange said.

While Lange says they don’t actively market their beers, I will disagree. Sure, in the typical business connotation they don’t. They don’t buy advertising, hold focus groups or build an official marketing plan. They don’t have a drip campaign built into their marketing automation platform.

But customer-first marketing doesn’t always look like the traditional definition of marketing at first glance. The core of customer-first marketing is understanding and serving a customer and then creating messaging so the customer perceives that your product will serve them. All that other stuff is just a means to get that message to your ideal customer. And in that sense, I think Lange and his team engage in some serious marketing.

It’s not always sunny in Jacksonville, Florida

I could have ended the story right there, on an up note. But the sun doesn’t always shine in the Sunshine State. As we’ve seen, David doesn’t always defeat Goliath. And sometimes, dark clouds form around products as well.

Part of customer intimacy and deeply understanding customer motivations is being able to say goodbye to products. Customer motivations aren’t static. They change. As your customers age. As new technology is developed. As competitors get a better fix on what customers want. As the shifting tide of trends and public opinions ebbs and flows.

For example, Intuition recently decided to retire one of its first beers.

“This was a really difficult decision because it played such a key role in the development of our brand the past seven-and-a-half years. When a beer doesn’t sell as well as it once did, it tells us that something has changed. Maybe a style isn’t that popular in the market anymore. Or we’ve developed a similar beer that just tastes better, and our customers prefer it. It’s our job to figure out why sales fell off and then to create something different that our customers will be excited about,” Lange said.

Grab your slingshot and go into battle

If your brand is facing down its own Goliath, I hope this story provided a bit of inspiration in your day. Remember, size isn’t everything.

Your slingshot is your understanding of the customer — whether you’re using data analysis or A/B testing, sales reports or in-person customer interviews.

Whichever brand understands customer motivations best, wins.

Related Resources

Five Questions to Ask to Understand Customer Motivation

Analyzing Customer Motivation to Create Campaign Incentives that Resonate

Harnessing Customer Motivation: How one company increased conversion by 65% by aligning page elements with customer desire


Conversion Rate Optimization: 7 tips to improve your ecommerce conversion rates

We recently published median conversion rates for 25 ecommerce product categories in MarketingSherpa (MarketingExperiments’ sister publication).

That answers the first question most marketers have — how are my conversion rates compared to my competitors?

But the quick second question should be — how do I improve my conversion rates?

To tackle that topic, we look back at the original research that informed those benchmark conversion rates — The MarketingSherpa E-commerce Benchmark Study. In the study, we asked for quantitative information, like average conversion rates. But we also had free response fields where marketers could enter qualitative information as well.

In this article, we’ll go through some of that qualitative information, along with resources to help you put it into practice.

 

Tip #1: Conversion rate improvements are a continual process

Here are some thoughts from respondents …

“[Our challenge is] conversion rates … we have been making improvements in the website and our conversion rate is steadily climbing.”

“Current version of website is 2 years old. In the process of a re-build to enhance PPC and organic conversions.”

“Quickly changing landscape with our competitors, mainly in look and feel of sites. We’ve hired a conversion optimization team to assist with a website overhaul.”

“Revised and tested landing pages over and over again.”

Conversion rate optimization (CRO) is the continual process of making changes, testing them, learning from them, and making further improvements based on that new knowledge.

It is continual. A website isn’t like a brochure that is fixed. The internet is constantly changing, and many of those changes can affect your site’s conversion: a Google algorithm update, a suddenly slower-loading plug-in on your site, shifting consumer preferences and whims.

Here are some resources on CRO:

Tip #2: Optimize your call-to-action

“We have seen improvements in the conversion rates by reducing clicks and improving CTAs [calls-to-action]. Instead of ‘View,’ we used ‘Buy’ and that proved to be a stronger CTA for conversions.”

The call-to-action can have a large impact on conversions since it is the point of decision for the customer. Just don’t run out, change your CTA from ‘View’ to ‘Buy’ and expect a similar conversion increase.

In fact, we’ve run many experiments where we’ve seen that asking for more commitment in the CTA can drive down conversion. But it depends on the buyer’s journey. In the example from these marketers, they might have already been far down the buyer’s journey. Or they may simply be paying for clicks in an ad and only want to pay for very qualified traffic that will purchase. However, if you’re optimizing the CTA in the first email your prospect sees, a CTA of “Buy” may be asking too much too soon.

So test and see what works for your unique customers. And don’t assume the same CTA will be most effective in different stages of the buyer’s journey. Understand where the customer is at in their thinking, and what the most logical next step is.

Here are a few resources to help you optimize your CTAs:

Tip #3: Communicate your brand’s and products’ unique value proposition

“It is difficult to increase conversion rate without using incentives like discounts and offers.”

Yes, it certainly is. That’s why so many marketers revert to using discounts and offers. It’s a way to juice short-term results.

But selling on cost reduction alone is a difficult way to run a sustainably profitable business.

The answer to overcoming this challenge is the value proposition. If your product provides a true value to potential customers, and you do a good job of clearly articulating that value in a compelling way, you are more likely to be able to sell on value and not just cost reduction — increasing conversion rates and your margins.

A few helpful resources:

Tip #4: Make it easy for the customer

“It is obvious, but worth mentioning: making the eCommerce experience as simple as humanly possible.”

A well-articulated value proposition increases the likelihood of conversion, while friction decreases the likelihood of conversion.

You may have all sorts of internal policies, technological challenges or good faith reasons why the buy process is difficult for the customer. But those internal reasons don’t matter. The more friction there is, the more likely the customer will find someplace else to purchase (like that streamlined efficiency monster Amazon).

Here are a few resources to help you reduce friction:

This doesn’t mean your website can’t convert despite friction. There are other factors at play in optimizing conversion, as shown in the MECLABS Conversion Sequence Heuristic. But reducing friction will usually increase conversion. For example, this respondent was wise enough to know that while the site was converting well, friction was hampering the conversion rate.

“Our conversion rate is surprisingly good for a website that requires users to register before checking out.”

Tip #5: Less can be more

Another way to reduce friction is reducing the steps necessary to make a purchase, as this respondent describes. This is a common problem in ecommerce purchase funnels.

“Our conversion funnel from Cart Start to Purchase confirmation was a disaster. Too many steps, too much abandonment, and not enough messaging within the funnel to help the customer navigate. We did some thorough analysis on our purchase funnel. We identified all the major drop-out points (abandonment) and identified steps that could either be eliminated, skipped or consolidated. Our purchase funnel went from 12 steps down to six. We still have some testing and optimizing to do.”

Some ideas to help you remove steps that are hindering conversion in your funnel:

Tip #6: You don’t only need a clear value prop for customers; you need that clear value prop for internal and external decision makers as well

“Most of our clients only want a website. And most all of our clients have no understanding of the importance of creating a strategic marketing plan that integrates the website as a part of a traffic and conversion process. They also don’t understand the value and benefits of analytical tools on the back end to measure results and to modify marketing campaigns and testing for better conversion.”

Some marketers are so focused on selling to an end customer, they overlook the internal marketing that is crucial to any campaign or website’s success as well. For some reason, we get frustrated and think internal business decision makers or external clients should just get it.

Even though our art is communication and conversion, when it comes to those closest to us, we overlook the essential need to present a value proposition.

Here are a few ideas to help with your internal marketing:

Tip #7: Understand the conversion you need before the conversion you want

Optimizing conversion rate shouldn’t only be about the final sale. The marketer quoted below has done a good job understanding the necessary micro-yes that ultimately leads to higher sales — opt-ins to an email newsletter.

“We have three key items we check almost daily: number of visitors, conversion rate, and average order value (AOV). Where we see that number of visitors is increasing, conversion rate and AOV is more or less stable. By far, our most successful marketing instrument is our weekly newsletter with some special offers. That boosts sales dramatically. So we are also quite keen on having customers sign up for the newsletter.”

Some resources to help you understand the micro-yes sequence:
