Author: Daniel Burstein

Mental Cost: Your customers pay more than just money

What payment options do you give customers? Credit card? Debit card? Maybe you’re on the cutting edge and allow Bitcoin?

Regardless of the final payment method for a product, you are asking your customers to pay in another way as well, one that is all too easily overlooked by brands, to the detriment of their conversion rates: mental costs.

“I have to do this much”

A mental cost is essentially your customer thinking, “I have to do this much.” It could be to purchase a product. But it could be for another conversion goal, like simply reading your content marketing.

This is above and beyond the material cost — the actual price, maintenance charges or other monetary expenses.

If you don’t understand them well, mental costs can be detrimental to conversion because they add to the cost of the decision beyond just the price of a product. And when the cost side is too heavy in the exchange sum (i.e., how the customer weighs whether an action is worth taking), you will lose customers. The image above, from the book The Marketer as Philosopher, illustrates the customer’s choice process.

“Mental costs represent the ‘soft’ elements, especially friction and anxiety. The marketer must remember cost is not just a mathematical calculation; it is especially a psychological calculation. A low material cost does not necessarily mean a low mental cost.” 

— Flint McGlaughlin, CEO and Managing Director, MECLABS Institute

Friction — psychological resistance

These mental costs exist for paid products, but they also exist for conversion actions that don’t require any payment.

For example, I was doing my taxes recently. Usually I get the income statement forms in the mail, but for one account I have, the form never arrived. I simply got an email telling me I had to log in to the online account to download the form.

This process involved far more friction than simply opening my mailbox, and it made me reconsider investing further with this company. I literally thought, “I have to do this much just to get my tax forms?”

And note, this didn’t involve any elements of the actual material cost or value like fees on the account or the return I’m getting for my investment.

Other examples of friction include:

  • Making a customer log into an account before purchasing
  • Text that is too small or reversed out of a dark background
  • Popups that are difficult to exit out of (if you have a pop-over, allow visitors to easily exit with a clear “x” close button in the upper right or by simply clicking on the background)

Reduce friction. Ask yourself, what elements of the conversion process are a pain in the butt for customers? What hoops do you make them jump through? And how can you ease the process for them?

Anxiety — psychological concern

While friction is the level of effort required on the customer’s part, anxiety is the level of concern.

To use the previous example, not only was there friction in finding the proper form, there was anxiety as well. The first account I went into didn’t have the tax form. The email I received only said the form was ready, but I had multiple accounts with the financial institution and didn’t realize the form was in another account.

This caused anxiety: first, because I couldn’t find the form. And then, after I successfully found and printed the tax form, I wondered what other forms I might be missing. Again, this causes more anxiety than simply receiving the tax form in the postal mail.

This is true for many online-only processes and communication mechanisms, from email to online billing to online accounts of all sorts. There will naturally be more friction and anxiety, so it’s necessary to ensure intuitive usability as well as trust-building measures like easy and convenient customer service through chat, phone, email or social media, however your customers prefer to communicate.

If not, the cost of doing business with your institution (the mental cost, that is) may cause you to lose customers.

Other examples of anxiety include:

  • Not clearly communicating shipping costs
  • Being unclear about when a customer will receive a product
  • An email or landing page with typos, poor grammar, or one that clearly isn’t written by a native speaker of that language
  • Asking for sensitive information (Social Security number, income, etc.) without communicating why you need it and how it will be used

You can find the right balance between mental and material costs

A product with a strong value force can command high mental and material costs from customers. Tesla attained 518,000 preorders for the Model 3 electric car before it was even built. Not only were customers willing to pay a high material cost (no haggling or discounts on these cars), they also paid a high mental cost: the friction of waiting to receive the car and the anxiety of buying a car sight unseen.

On the flip side, have you ever received a free ticket to a concert or event that you chose not to attend? In that case, the material cost was zero. However, the perceived value was so low you were unwilling to pay the mental cost of attending.

Most products or other conversion goals are between the two extremes of this spectrum. And while you want to increase the perceived value above the perceived cost to increase your conversion rate, keep in mind that you can also try to find the right balance between mental and material costs on the cost side.

For example, you might discover that some customers are willing to pay the mental cost of tracking down a coupon or promo code in order to save on material costs. Some customers are willing to pay the mental costs of picking up a product in-store to save the material costs of shipping. However, other customers prefer paying a higher material cost in order to minimize mental costs.

By testing offers and getting to know your customer personas better, you can find the right mix of mental and material costs for your customer segments.

Related Resources

Learn more about mental cost in the MECLABS Value Proposition Development online certification course.

Value Proposition: NFL’s Jaguars increase revenue with customer-centric marketing

Copywriting: How to tip the scale so customers act

The post Mental Cost: Your customers pay more than just money appeared first on MarketingExperiments.

Heuristic Cheat Sheet: 10 methods for improving your marketing

We were recently asked, “Is there a heuristic cheat sheet published that shows all of them at a glance?”

MarketingExperiments and its parent research organization, MECLABS Institute, have become well known for our heuristics.

If you’re already familiar with what a heuristic is, feel free to scroll down to see the cheat sheet.

If heuristics are new to you, first a quick explanation.

A heuristic isn’t an equation to solve; it’s more of a thought tool to help you understand a process or method.

While other areas of the business have well-defined methodologies, techniques and tools to help organizations work toward process improvement (e.g., Six Sigma, Lean manufacturing, TQM, ISO 9001), marketing has lacked a systematic method, tending to rely on individual high performers with a “golden gut” who just “get it.”

Since not every person in a marketing department or advertising agency just preternaturally “gets it,” performance can be lumpy, and it’s hard to field a marketing team that performs well across the board.

MECLABS Institute has developed a series of patented methodologies (Pat. No. 8,155,995) to help marketing, advertising and business leaders bring rigor to the way their teams think about executing marketing tactics as well as serving a customer with their marketing.

These heuristics were developed from patterning the results of our research library. The goal of these heuristics is to systematically diagnose the inefficiencies in your sales, marketing and conversion process. They are tools to help identify where to focus your energies when moving through a conversion opportunity.

Here is a quick glance at these heuristics, with links to more in-depth information.

The MECLABS Conversion Index Optimization Heuristic

This is our most well-known heuristic, so you’ve probably seen it around before. Since it is the most fundamental heuristic — after all, the main goal of marketing is to inform potential customers to get them to act — I will go into this heuristic in the most detail.

The MECLABS Conversion Index sequence seeks to identify the factors you can influence to help increase the probability of conversion (C in the above heuristic). It can help you step out of your marketing department and get in the shoes of the customer.

Motivation

Motivation (m) has the highest coefficient (4) because it is the most important deciding factor in the sales process. It consists of two components:

  • The nature of the customer’s demand for the product (why)
  • The magnitude of the customer’s demand for the product (want)

An effective strategy is to target customers or channels that have a higher motivation to buy your product.

To understand a user’s motivation and design relevant webpages to their needs, you must analyze the behavior of both online and offline traffic channels.

Value Proposition

Value Proposition (v) is the primary reason why your prospect should buy from you rather than any of your competitors. There are four elements to a powerful value proposition.

  • Appeal (“I want it”) — Three factors contribute directly to a prospect’s degree of “want”: relevance, importance and
  • Exclusivity (“I can’t get it anywhere else”) — Exclusivity is related to the number of competing options. The lower the number, the better.
  • Credibility (“I believe in it/you”) — How to intensify credibility: specificity, quantification and
  • Clarity (“I understand it/you”) — Is the message clearly articulated, and can the prospect easily find the message?

Friction

Friction (f) is the psychological resistance to a given element in the sales process. There is a minus sign before friction because friction is an element that hinders conversion. It is composed of two components:

  • Length — This might be the number of fields or the number of steps from Point A (desire to buy) to Point B (purchase).
  • Difficulty — This might be the nature of the fields, a disruptive eye path or page elements that cause visitor annoyance.

The objective is to minimize friction, not eliminate it. If you eliminate all friction, you eliminate “the sale” (for example, you cannot remove a credit card field).

Anxiety

Anxiety is psychological concern stimulated by a given element in the sales or “buy” process.

You must seek to relieve and/or correct for anxiety at three different levels:

  • Specificity — Corrective measures address the precise source of anxiety.
  • Proximity — Visitor experiences corrective measures at the same time and place as anxiety is experienced.
  • Intensity — Corrective measures are amplified to overcome irrational fears.

Incentive

Incentive is an appealing element such as a discount, a bonus or special offer introduced to stimulate a desired action.

Incentive is used to “tip the balance” of emotional forces from negative (exerted by friction elements) to positive.

 

An effective test plan tests the other elements in the Conversion Index first, then seeks to test the impact of an incentive for additional improvement.
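The weighting described above can be sketched in code. The published MECLABS Conversion Sequence is C = 4m + 3v + 2(i − f) − 2a; keep in mind it is a thought tool, not a literal equation, so the 0–10 scores below are purely hypothetical illustrations of how the coefficients weight each factor.

```python
# Sketch of the MECLABS Conversion Sequence heuristic:
#   C = 4m + 3v + 2(i - f) - 2a
# The index is a thought tool, not a literal equation; scores here are
# hypothetical 0-10 ratings used only to illustrate the weighting.

def conversion_index(m, v, i, f, a):
    """Weight the five factors: motivation counts most, anxiety subtracts."""
    return 4 * m + 3 * v + 2 * (i - f) - 2 * a

# A page with strong motivation and value but high friction and anxiety...
before = conversion_index(m=8, v=7, i=2, f=6, a=5)   # 35
# ...versus the same offer after reducing friction and anxiety.
after = conversion_index(m=8, v=7, i=2, f=2, a=1)    # 51
print(before, after)
```

Note that the same motivation and value score much higher once friction and anxiety come down, which is the point of the minus signs in the heuristic.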

To learn how to apply the Conversion Index Heuristic in your marketing, you can take the MECLABS Landing Page Optimization online certification course.

The MECLABS Perceived Value Differential Heuristic

As mentioned in the previous heuristic, the proper use of incentive can help increase the probability of conversion. This heuristic helps you identify the most effective incentive to use.

  • PVD: Perceived value differentials
  • Vp: Perceived value of incentive
  • C$n: Net delivered cost of incentive
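Reading the variable list above, the differential is presumably the perceived value of the incentive minus its net delivered cost (PVD = Vp − C$n); that reading, and all the figures below, are assumptions for illustration.

```python
# Hedged sketch of the Perceived Value Differential, read from the variable
# list above as: PVD = Vp - C$n (perceived value minus net delivered cost).
# The incentives and dollar figures below are hypothetical.

def pvd(perceived_value, net_delivered_cost):
    return perceived_value - net_delivered_cost

# A branded PDF guide may be cheap to deliver but perceived as valuable...
print(pvd(perceived_value=25.00, net_delivered_cost=0.50))   # 24.5
# ...while a $10 gift card can never be perceived as worth more than $10.
print(pvd(perceived_value=10.00, net_delivered_cost=10.00))  # 0.0
```

The best incentives sit in the first category: high perceived value, low delivered cost.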

You can learn more about the Perceived Value Differential Heuristic in the article Finding The Ideal Incentive: How We Increased Email Capture by 319%.

The MECLABS Return on Incentive Heuristic

Since incentives often have a monetary cost, the Return on Incentive Heuristic helps call out the need for determining which incentive is actually most effective. The danger with incentives is that you could use them to increase an intermediate metric but hurt your overall results. For example, you could offer free shipping and gain more sales. However, if you don’t keep an eye on the return on incentive, you might overlook the fact that you’re losing money on each sale because the shipping is so expensive.

  • ROIc: Total return on incentive
  • P$n: Net profit impact from incentive
  • C$n: Net delivered cost of incentive
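The free-shipping scenario above can be worked through with toy numbers. Reading the variable list, total return is presumably net profit impact minus net delivered cost of the incentive; that formula and every figure below are assumptions for illustration.

```python
# Hedged sketch of the Return on Incentive idea using the article's
# free-shipping example; all numbers are hypothetical.

def return_on_incentive(net_profit_impact, net_delivered_cost):
    return net_profit_impact - net_delivered_cost

extra_orders = 100          # additional sales attributed to free shipping
margin_per_order = 6.00     # profit per order, before shipping costs
shipping_per_order = 8.00   # what "free" shipping actually costs you

p_n = extra_orders * margin_per_order      # 600.0 extra profit
c_n = extra_orders * shipping_per_order    # 800.0 incentive cost
print(return_on_incentive(p_n, c_n))       # -200.0: more sales, losing money
```

The intermediate metric (sales) went up while the overall result went down, which is exactly the trap the heuristic is meant to catch.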

To learn how to apply the Return on Incentive Heuristic to your business, you can take the MECLABS Landing Page Optimization online certification course.

The MECLABS Friction Heuristic

The Friction Heuristic takes a closer look at another element of the Conversion Index Heuristic.

  • fsc: Friction
  • lt: Length
  • dt: Difficulty

To learn how to use the Friction Heuristic on your marketing, you can take the MECLABS Landing Page Optimization online certification course.

The MECLABS Net Value Force Heuristic

The Net Value Force Heuristic helps you understand which elements to adjust to increase the force of a value proposition.

  • Nf: Net force of the value proposition
  • Vf: Gross force of the value
  • Cf: Gross force of the cost
    • Mt: Material (I have to pay this much)
    • Mn: Mental (I have to do this much)
  • Ac: Acceptance (aka reception)

Learn more about using this heuristic with your business in the Value Proposition Development online certification course.

The MECLABS Optimization Sequence Heuristic

The MECLABS Optimization Sequence guides the order in which you should optimize your sales and marketing funnel. Namely, make sure you have a high-quality, valuable product before you craft a landing page for it. And make sure you have a good product and an optimized landing page before you start driving traffic to it.

Learn more about the Optimization Sequence Heuristic in the video How to approach a Minimum Viable Product.

The MECLABS Email Conversion Heuristic

The previous heuristic shows the proper priority of channel optimization. The MECLABS Email Conversion Heuristic helps you optimize your messaging specifically for an email channel to increase your effectiveness.

  • eme: Email messaging effectiveness
  • rv: Relevance to the consumer
  • of: Offer value (why)
  • i: Incentive to take action
  • f: Friction elements of the process
  • a: Anxiety elements of the process

You can learn more about the Email Conversion Heuristic in the article Email Marketing: 91% of marketers find target audience testing effective.

Email Messaging Optimization Index Heuristic

The Email Messaging Optimization Index helps you improve email effectiveness by prioritizing your email marketing optimization efforts.

  • ec: Email capture
  • op: Open rate
  • ct: Clickthrough
  • lp: Landing page

Learn more about the Email Messaging Optimization Index Heuristic in Internet Marketing for Beginners: Email marketing optimization 101.

The MECLABS Ad Messaging Index Heuristic

The Ad Messaging Index Heuristic provides a framework for optimizing a significant channel for most marketers — advertising — to create an effective ad.

  • ea: Effective ad
  • at: Ability to capture attention
  • I: Ability to turn attention into interest
  • as: Force of the “ask”

Learn more about the Ad Messaging Index in the article Banner Ad Design: The 3 key banner objectives that drove a 285% lift.

The MECLABS Online Testing Heuristic

All the heuristics help identify changes marketers can make to improve conversion. But ultimately, these changes should inform research questions and hypotheses that you then test with potential customers to discover what works best. The Online Testing Heuristic helps you understand the factors necessary for effective tests.

To learn how to apply the Online Testing Heuristic in your marketing, you can take the MECLABS Online Testing online certification course.


Conversion Optimization Testing: Validity threats from running multiple tests at the same time

A/B testing is popular among marketers and businesses because it gives you a way to determine what really works between two (or more) options.

However, to truly extract value from your testing program, it requires more than simply throwing some headlines or images into a website testing tool. There are ways you can undermine your testing tool that the tool itself can’t prevent.

It will still spit out results for you. And you’ll think they’re accurate.

These are called validity threats. In other words, they threaten the ability of your test to give you information that accurately reflects what is really happening with your customer. Instead, you’re seeing skewed data from not running the test in a scientifically sound manner.

In the MECLABS Institute Online Testing certification course, we cover validity threats like history effect, selection effect, instrumentation effect and sampling distortion effect. In this article, we’ll zoom in on one example of a selection effect that might cause a validity threat and thus misinterpretation of results — running multiple tests at the same time — which increases the likelihood of a false positive.
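One way to see why the false-positive risk grows: even if every individual test is run at a 5% significance level, the chance that at least one test in a group produces a false positive climbs quickly with the number of tests. The calculation below assumes independent tests, which simultaneous tests on the same traffic usually are not; interaction effects can make matters worse.

```python
# Family-wise error rate: probability of AT LEAST ONE false positive
# across several tests, each run at significance level alpha.
# Assumes independent tests (a simplification for illustration).

def familywise_error_rate(alpha, num_tests):
    return 1 - (1 - alpha) ** num_tests

for k in (1, 3, 5, 10):
    print(k, round(familywise_error_rate(0.05, k), 3))
# 1 -> 0.05, 3 -> 0.143, 5 -> 0.226, 10 -> 0.401
```

At ten simultaneous tests, the odds of at least one spurious "winner" are roughly 40%, not 5%.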

Interaction Effect — different variations in the tests can influence each other and thus skew the data

The goal of an experiment is to isolate a scenario that accurately reflects how the customer experiences your sales and marketing path. If you’re running two tests at the same time, the first test could influence how they experience the second test and therefore their likelihood to convert.

This is a psychological phenomenon known as priming. If we talk about the color yellow and then I ask you to mention a fruit, you’re more likely to answer banana. But if we talk about red and I ask you to mention a fruit, you’re more likely to answer apple. 

Another way interaction effect can threaten the validity is with a selection effect. In other words, the way you advertise near the beginning of the funnel impacts the type of customer and the motivations of the customer you’re bringing through your funnel.

Taylor Bartlinski, Senior Manager, Data Analytics, MECLABS Institute, provides this example:

“We run an SEO test where a treatment that uses the word ‘cheap’ has a higher clickthrough rate than the control, which uses the word ‘trustworthy.’ At the same time, we run a landing page test where the treatment also uses the word ‘cheap’ and the control uses ‘trustworthy.’  The treatments in both tests with the ‘cheap’ language work very well together to create a higher conversion rate, and the controls in each test using the ‘trustworthy’ language work together just as well.  Because of this, the landing page test is inconclusive, so we keep the control. Thus, the SEO ad with ‘cheap’ language is implemented and the landing page with ‘trustworthy’ language is kept, resulting in a lower conversion rate due to the lack of continuity in the messaging.”

Running multiple tests and hoping for little to no validity threat

The level of risk depends on the size of the change and the amount of interaction. However, that can be difficult to gauge before, and even after, the tests are run.

“Some people believe (that) unless you suspect extreme interactions and huge overlap between tests, this is going to be OK. But it is difficult to know to what degree you can suspect extreme interactions. We have seen very small changes have very big impacts on sites,” Bartlinski says.

Another example Bartlinski provides is one where there is little interaction between tests. For example, testing PPC landing pages that do not interact with organic landing pages that are part of another test — or testing separate things in mobile and desktop at the same time. “This lowers the risk, but there still may be overlap. It’s still an issue if a percentage gets into both tests; not ideal if we want to isolate findings and be fully confident in customer learnings,” Bartlinski said.

How to overcome the interaction effect when testing at the speed of business

In a perfect scientific experiment, multiple tests would not be run simultaneously. However, science often has the luxury of moving at the speed of academia. In addition, many scientific experiments are seeking to discover knowledge that can have life or death implications.

If you’re reading this article, you likely don’t have the luxury of taking as much time with your tests. You need results — and quick. You also are dealing with business risk, and not the high stakes of, for example, human life or death.

There is a way to run simultaneous tests while limiting validity threats — running multiple tests on (or leading to) the same webpage but splitting traffic so people do not see different variations at the same time.

“Running mutually exclusive tests will eliminate the above validity threats and will allow us to accurately determine which variations truly work best together,” Bartlinski said.

There is a downside, though: it will slow down testing, since an adequate sample size is needed for each test. If you don’t have a lot of traffic, it may end up taking the same amount of time as running tests one after another.
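The splitting described above can be sketched as deterministic bucketing: each visitor is routed into exactly one of the running tests, so no one sees variations from two experiments at once. The visitor IDs and test names below are hypothetical; real testing tools handle this with their own bucketing mechanisms.

```python
# Minimal sketch of mutually exclusive test assignment. A stable hash of
# the visitor ID picks one (and only one) test, so the same visitor always
# lands in the same test on every page view.

import hashlib

TESTS = ["headline_test", "cta_test", "pricing_test"]

def assign_test(visitor_id: str) -> str:
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return TESTS[int(digest, 16) % len(TESTS)]

# Deterministic: repeat visits get the same assignment.
assert assign_test("visitor-42") == assign_test("visitor-42")
print(assign_test("visitor-42") in TESTS)
```

Because each test only receives a third of the traffic in this sketch, each needs proportionally longer to reach an adequate sample size, which is the trade-off noted above.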

What’s the big idea?

Another important factor to consider is that the results from grouping the tests should lead to a new understanding of the customer — or what’s the point of running the test?

Bartlinski explains, “Grouping tests makes sense if tests measure the same goal (e.g., reservations), they’re in the same flow (e.g., same page/funnel), and you plan to run them for the same duration.”

The messaging should be parallel as well so you get a lesson. Pointing a treatment ad that focuses on cost to a treatment landing page that focuses on luxury, and a treatment ad that focuses on luxury to a landing page that focuses on cost, will not teach you much about your customer’s motivations.

If you’re running multiple tests on different parts of the funnel and aligning them, you should think of each flow as a test of a certain assumption about the customer as part of your overall hypothesis.

It is similar to a radical redesign. Much like testing multiple steps of the funnel can cause an interaction effect, testing multiple elements on a single landing page or in a single email can cause an attribution issue. Which change caused the result we see?

Bartlinski provides this example, “On the same landing page, we run a test where both the call-to-action (CTA) and the headline have been changed in the treatment. The treatment wins, but is it because of the CTA change or the headline? It is possible that the increase comes exclusively from the headline, while the new CTA is actually harming the clickthrough rate. If we tested the headline in isolation, we would be able to determine whether the combination of the new headline and old CTA actually has the best clickthrough, and we are potentially missing out on an even bigger increase.”

While running single-factorial A/B tests is the best way to isolate variables and determine with certainty which change caused a result, if you’re testing at the speed of business you don’t have that luxury. You need results and you need them now!

However, if you align several changes in a single treatment around a common theme that represents something you’re trying to learn about the customer (aka radical redesign), you can get a lift while still attaining a customer discovery. And then, in follow-up single-factorial A/B tests, narrow down which variables had the biggest impact on the customer.

Another cause of the attribution issue is running multiple tests on different parts of a landing page because you assume they don’t interact. Perhaps you run a test on two different ways to display locations on a map in the upper left corner of the page. Then a few days later, while that test is still running, you launch a second test on the same page, but in the lower right corner, on how star ratings are displayed in the results.

You could assume these two changes won’t have an effect on each other. However, the variables haven’t been isolated from the tests, and they might influence each other. Again, small changes can have big effects. The speed of your testing might necessitate testing like this; just know the risk involved in terms of skewed results.

To avoid that risk, you could run multivariate tests or mutually exclusive tests which would essentially match each combination of multiple variables together into a separate treatment. Again, the “cost” would be that it would take longer for the test to reach a statistically significant sample size since the traffic is split among more treatments.
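The sample-size cost mentioned above is easy to quantify: in a multivariate or mutually exclusive setup, every combination of variables becomes its own treatment, so traffic is split across more buckets. The element names below are hypothetical, taken from the map/star-rating example.

```python
# Why combining tests slows things down: each combination of variables
# becomes a separate treatment, and each treatment needs its own
# adequate sample before the test reaches significance.

from itertools import product

map_variants = ["map_A", "map_B"]          # upper-left-corner test
star_variants = ["stars_A", "stars_B"]     # lower-right-corner test

treatments = list(product(map_variants, star_variants))
print(len(treatments))   # 4 treatments instead of 2 per separate test
```

With two variables of two variants each, you are splitting traffic four ways; add a third two-variant element and you are splitting it eight ways.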

Test strategically

The big takeaway here is — you can’t simply trust a split testing tool to give you accurate results. And it’s not necessarily the tool’s fault. It’s yours. The tool can’t possibly know ways you are threatening the validity of your results outside that individual split test.

If you take a hypothesis-driven approach to your testing, you can test fast AND smart, getting a result that accurately reflects the real-world situation while discovering more about your customer.

You might also like:

Online Testing certification course — Learn a proven methodology for executing effective and valid experiments

Optimization Testing Tested: Validity threats beyond sample size

Validity Threats: 3 tips for online testing during a promotion (if you can’t avoid it)

B2B Email Testing: Validity threats cause Ferguson to miss out on lift from Black Friday test

Validity Threats: How we could have missed a 31% increase in conversions


A Simple Guide for the Busy Marketer: Using data from online marketing and web analytics tools

How does Adobe define bounce rate? What’s the difference between exit and bounce rate? And how can these numbers be meaningfully used to better serve customers and ultimately improve results?

Let’s take a closer look at some of the numbers you see in your digital marketing analytics platform to help answer this question.

We’ll focus on terminology used by Adobe Analytics and Google Analytics since they are the two most popular analytics platforms. For example, 74% of the Internet Retailer Top 500 use Google Analytics, 41% use Adobe Analytics and only 16% use others (IBM Digital Analytics, WebTrends, etc.).

Also, since many organizations use multiple platforms (as you can see from the above numbers), it’s helpful to understand when different platforms use different terminology to mean essentially the same thing.

Bounce Rate

What it is: Google Analytics defines bounce rate as “The percentage of single-page sessions in which there was no interaction with the page.” Adobe Analytics has a similar definition. It’s important to note that exit rate includes bounces, but it also includes instances when a visitor did interact with something (e.g., a previous page) and this was simply the last page they viewed on your site.

How to use it: This one strikes fear into the heart of many marketers. “People are just bouncing off my site? Simply bouncing? And it’s not a Tigger type of friendly bounce.”

However, many marketers should ease their anxiety and focus elsewhere. Bounce Rate is usually most helpful for a business that sells traffic, like a publisher that sells ads on its site. In that case, the goal of most pages (which tend to be content, like an article or blog post) is to get people to view more pages and thereby generate more ad revenue.

But if you’re selling a product or service, your focus shouldn’t be getting visitors to simply view another page. It should be to walk them through a thought sequence to help them make the best purchase decision. Essentially, a funnel.

A word of caution: “For funnel optimization, I tend to focus more on exit rates and clickthrough rates to other steps in the funnel,” said Rebecca Strally, Associate Director, Strategy Development, MECLABS Institute.

“It’s not that bounces aren’t salvageable, it’s that source/medium reporting has gotten spottier over the years so it’s difficult to assign a bounce to a specific off-site source and therefore we are hard-pressed to determine what motivation may be lacking on the page. I’ve seen better long-term results when we just focus on moving more people deeper into the funnel rather than worrying about how to reduce bounces specifically,” she said.

Exit Rate and Exits

What it is: The number of times a particular webpage was the last page viewed in a session. Google Analytics provides an exit rate, shown as a percentage; Adobe Analytics provides exits, shown as a total number. As mentioned above, exit rate includes bounces, but it also includes instances when a visitor did interact with something (e.g., a previous page) and this was simply the last page they viewed on your site.

Also, keep in mind that the denominators for exit rate and bounce rate are different. Since a bounce is by definition a one-page session, bounce rate is only measured against entrances to that page. Exit rate counts everyone who viewed the page, whether they entered the site on the page you’re looking at or arrived at it from a different page.
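The denominator difference can be made concrete with toy session data. Each session below is just the ordered list of pages viewed; the page names are hypothetical, and the definitions follow the article (a bounce is a one-page session, an exit is any session whose last page was the page in question).

```python
# Bounce rate vs. exit rate for one page, computed from toy session data.

sessions = [
    ["pricing"],                       # entered on /pricing, bounced
    ["home", "pricing"],               # exited on /pricing, not a bounce
    ["home", "pricing", "checkout"],   # passed through /pricing
    ["pricing", "checkout"],           # entered on /pricing, continued
    ["pricing", "home"],               # entered on /pricing, continued
]

page = "pricing"
entrances = sum(1 for s in sessions if s[0] == page)    # 3
bounces = sum(1 for s in sessions if s == [page])       # 1
views = sum(1 for s in sessions if page in s)           # 5
exits = sum(1 for s in sessions if s[-1] == page)       # 2

print(f"bounce rate: {bounces / entrances:.0%}")  # 33% of entrances
print(f"exit rate:   {exits / views:.0%}")        # 40% of all views
```

Same page, same sessions, but two different rates, because bounce rate divides by entrances while exit rate divides by all views.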

How to use it: It can help you understand the biggest leaks in your funnel. This may be the place in the buyer’s journey that is most ripe for conversion optimization work to improve your overall results. Why are people leaving before purchasing or becoming a lead? And what information can you provide to encourage more of them to continue the journey through your funnel?

A word of caution: People will naturally leave at certain stages of your funnel more frequently than others, and it doesn’t necessarily mean that step of the funnel is underperforming. For example, if the first step of your purchase funnel asks people to select the color so they can see what their future couch will look like in that color, and the second step asks for a credit card and purchase, people are far more likely to exit on the credit card step even if you’re executing it quite well due to the nature of the ask.

Visits, Sessions and Unique Pageviews

What it is: The number of visits within the time of your report. Adobe Analytics refers to this as “visits,” and Google Analytics refers to this as “sessions” (for multiple pages in a single visit) and “unique pageviews” (for an individual page, even over multiple sessions).

How to use it: This can help you see how many times people are coming to your website. For many products and services, a customer may have to visit your website and get information about your company over time (while building trust) before making a purchase.

“In order to tell how many times people are coming to the site you need to do a calculation — visits per visitor will tell you this,” advised Taylor Bartlinski, Senior Manager, Data Analytics, MECLABS Institute. “A visit per visitor value of 2 means that on average, users come to your site twice in the date range you have set. You can also compare visits per visitor on a page basis to see if users are returning to any particular page more often than others.”
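The calculation from the quote above is a simple ratio; the report numbers below are hypothetical.

```python
# Visits per visitor: how many times, on average, users came to the site
# within the report's date range.

visits = 10_000           # "visits" (Adobe) / "sessions" (Google)
unique_visitors = 5_000   # "unique visitors" (Adobe) / "users" (Google)

print(visits / unique_visitors)   # 2.0 -> the average user came twice
```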

But a word of caution: This is an engagement metric. While it can be an important step in the funnel to lead to an ultimate conversion, it likely isn’t your end goal. So you may want to focus less on getting an increased number of visits and focus more on getting visits from your ideal customer. More visits from your ideal customer are what will ultimately lead to more sales and conversion.

Page Views

What it is: The number of times a page is … wait for it … viewed. Unlike with unique pageviews or visits, this metric can include multiple views from the same person, even if they simply hit refresh. While Google Analytics and Adobe Analytics refer to this by the same name, they differ slightly on the grammar: Adobe calls it page views and Google calls it pageviews.

How to use it: It could give you a sense of how popular certain pages are on your website. If you engage in conversion optimization on the most popular pages, you can increase your chances of getting more leads and more sales.

But a word of caution: As mentioned above, people hit refresh on webpages. They go back to a page multiple times just because they’re distracted by something else. This measurement could be showing not just popularity, but simply distracted browsing and user habits.

Clickthrough Rates

What it is: Essentially, clicking on a link. This could be clicking on a homepage link to get deeper into a site, for example. Or clicking on a specific product on a product gridwall. But it can also be clicks from offsite to the website, like clicks from an online advertisement or an email. This is shown in terms of total number of clicks as well as clickthrough rate (CTR) percentage.

How to use it: This metric can help you track the effectiveness of not only parts of your website but also elements of your channel at driving prospects to the next step of the funnel.

When to de-emphasize it: If you’re paying for traffic (for example, a pay-per-click ad), your goal shouldn’t be to get the highest clickthrough rate on that particular advertisement. The goal should be to get the most clicks from customers who ultimately get through your funnel and make a purchase or take some other ultimate conversion action.

Unique Visitors and Users

What it is: The number of unduplicated visitors to your website within the time of your report. Adobe Analytics uses the term “unique visitors,” and Google Analytics uses the term “users.” One important caveat, different platforms calculate this metric in a different way (more on that below).

How to use it: This metric helps you determine how many actual people you’re reaching with your website. It can ultimately help you determine how many people take the desired action you want them to take, and help you optimize that number by running A/B and multivariate testing.

A simple word of caution: This metric may not be perfectly tracking how many people visit your website. This is probably the most difficult metric to measure of any listed in this article since customers use multiple devices and some use private browsing modes or otherwise hinder cookies so analytics tools could double- (or triple-) count some of your visitors. It’s also important to understand how unique visitors and users are counted if you’re testing on your website.

Because of this, Adobe and Google might count unique visitors and users differently. For example, if a visitor was logged in to their Gmail account and visited the same website using a desktop device and a mobile device, Google may be able to track those visits as being from the same visitor where Adobe wouldn’t. And there are likely scenarios where Adobe would be able to tell a visitor is the same on multiple devices where perhaps Google could not.

A more complex word of caution: If you really want to get into the weeds, let’s address this question — how do unique visitors differ between Google Analytics, Google Content Experiments, Adobe Analytics and Adobe Target?

Keep in mind, a unique visitor is only unique in reference to a certain timeframe. So, the way you pull data from your testing platform (like Adobe Target or Google Content Experiments) when running a test may show a different number than your analytics platform because it may be accounting for a different timeframe.

To explain this better, Rebecca Strally provided the following scenario:

  • I ran a test for the entire month of May (31 days)
  • Visitor A came to my site once in May
  • Visitor B came to my site three times on three different days in May
  • Visitor C came to my site two times in a single day in May

All of these platforms define unique visitors in the same way, so if we were to pull a monthly report from each platform, we would have three unique visitors.

Where the discrepancy comes in is when you begin pulling daily data.

If you are pulling daily data from either Adobe Analytics or Google Analytics to cross-reference to your results from Adobe Target or Google Content Experiments you will get different numbers. This is because Adobe Target and Google Content Experiments will not pull daily data; they will pull aggregate data for the entire test period.

Let’s look back at the scenario. If we pulled daily data from Adobe Analytics or Google Analytics, we would get five unique visitors (Visitor A = 1, Visitor B = 3, Visitor C = 1), but Adobe Target and Content Experiments would only be showing three unique visitors (Visitor A = 1, Visitor B = 1, Visitor C=1), because they are looking at the entire month of May.

For your tests, you should pull data aggregately so you don’t deflate the conversion rate with more unique visitors than really were in the test.

How This Data Can Be Used to Better Serve Customers and Improve Results

Keep in mind, there is one flaw with each metric we’ve discussed. It tells you about the past, not the future. That’s why the testing scenario in the previous section is so important.

You can, of course, use these numbers to better understand customers and start predicting their future actions and, more importantly, what changes you can make to affect those future actions.

You would create a hypothesis. And run an online test.

For example, if your goal for an SEO landing page is to click through to other pages with deeper info on the topic, and you run tests to improve the headlines for those topics and reduce bounce rate, those numbers and that test helped you better serve customers and improve results.

Or, if you have a five-step process to guide people through the process of choosing a nursing home, you run a test to reduce the friction necessary to move from step three to step four, and in so doing, reduce the exit rate; that is another example of using these metrics to better serve customers and improve results

From each online test, you learn more about your customers. You turn simple numbers in a spreadsheet into a Technicolor view of your customers to help serve them better, and in so doing, improve your business results.

You Might Also Like

Online Testing online certification course — learn a proven methodology for executing effective and valid experiments

Marketing Research Chart: What metrics should you track?

Five Steps To Better Metrics: How one marketer leveraged Web analytics for an annual revenue increase of $500,000

Ecommerce Chart: How a low conversion rate can be a good thing

 

 

 

The post A Simple Guide for the Busy Marketer: Using data from online marketing and web analytics tools appeared first on MarketingExperiments.

A Simple Guide for the Busy Marketer: Using data from online marketing and web analytics tools

How does Adobe define bounce rate? What’s the difference between exit and bounce rate? And how can these numbers be meaningfully used to better serve customers and ultimately improve results?

Let’s take a closer look at some of the numbers you see in your digital marketing analytics platform to help answer this question.

We’ll focus on terminology used by Adobe Analytics and Google Analytics since they are the two most popular analytics platforms. For example, 74% of the Internet Retailer Top 500 use Google Analytics, 41% use Adobe Analytics and only 16% use others (IBM Digital Analytics, WebTrends, etc.).

Also, since many organizations use multiple platforms (as you can see from the numbers above), it’s helpful to understand when different platforms use different terminology to mean essentially the same thing.

Bounce Rate

What it is: Google Analytics defines bounce rate as “The percentage of single-page sessions in which there was no interaction with the page.” Adobe Analytics has a similar definition. It’s important to note that a bounce is a special kind of exit: the visitor entered on the page, didn’t interact with it, and left without viewing anything else on your site.

How to use it: This one strikes fear into the heart of many marketers. “People are just bouncing off my site? Simply bouncing? And it’s not a Tigger type of friendly bounce.”

However, many marketers should ease their anxiety and focus elsewhere. Bounce Rate is usually most helpful for a business that sells traffic, like a publisher that sells ads on its site. In that case, the goal of most pages (which tend to be content, like an article or blog post) is to get people to view more pages and thereby generate more ad revenue.

But if you’re selling a product or service, your focus shouldn’t be getting visitors to simply view another page. It should be to walk them through a thought sequence to help them make the best purchase decision. Essentially, a funnel.

A word of caution: “For funnel optimization, I tend to focus more on exit rates and clickthrough rates to other steps in the funnel,” said Rebecca Strally, Associate Director, Strategy Development, MECLABS Institute.

“It’s not that bounces aren’t salvageable, it’s that source/medium reporting has gotten spottier over the years so it’s difficult to assign a bounce to a specific off-site source and therefore we are hard-pressed to determine what motivation may be lacking on the page. I’ve seen better long-term results when we just focus on moving more people deeper into the funnel rather than worrying about how to reduce bounces specifically,” she said.

Exit Rate and Exits

What it is: The number of a particular webpage’s views that were last in the session. Google Analytics provides an exit rate and shows a percentage, Adobe Analytics shows exits and provides a total number. As mentioned above, exit rate includes bounces, but it also includes instances when a visitor did interact with something (e.g., a previous page) and this was simply the last page they viewed on your site.

Also, keep in mind that the denominators differ between exit rate and bounce rate. Because a bounce is by definition a one-page session, bounce rate measures only against entries to that page. Exit rate’s denominator includes not only people who entered the site on the page you’re looking at, but also anyone who viewed that page after entering from a different page.
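
To make the two denominators concrete, here is a minimal Python sketch over hypothetical session data (the page paths are invented for illustration):

```python
# Hypothetical sessions: each is the ordered list of pages viewed.
# For simplicity, assume no in-page interactions, so every one-page
# session counts as a bounce.
sessions = [
    ["/pricing"],                        # entered on /pricing and bounced
    ["/home", "/pricing"],               # exited on /pricing (not a bounce)
    ["/home", "/pricing", "/cart"],      # exited on /cart
    ["/pricing", "/cart"],               # entered on /pricing, exited on /cart
    ["/home", "/pricing", "/checkout"],  # exited on /checkout
]

page = "/pricing"

entries = sum(1 for s in sessions if s[0] == page)   # sessions that started on the page
bounces = sum(1 for s in sessions if s == [page])    # one-page sessions on the page
views   = sum(s.count(page) for s in sessions)       # total views of the page
exits   = sum(1 for s in sessions if s[-1] == page)  # sessions that ended on the page

bounce_rate = bounces / entries  # denominator: entries to the page only
exit_rate   = exits / views      # denominator: all views of the page

print(f"Bounce rate: {bounce_rate:.0%}")  # 1 bounce / 2 entries = 50%
print(f"Exit rate:   {exit_rate:.0%}")    # 2 exits / 5 views = 40%
```

Note how the same page can have a high bounce rate and a lower exit rate (or vice versa) purely because the two metrics divide by different things.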

How to use it: It can help you understand the biggest leaks in your funnel. This may be the place in the buyer’s journey that is most ripe for conversion optimization work to improve your overall results. Why are people leaving before purchasing or becoming a lead? And what information can you provide to encourage more of them to continue the journey through your funnel?

A word of caution: People will naturally leave at certain stages of your funnel more frequently than others, and it doesn’t necessarily mean that step of the funnel is underperforming. For example, if the first step of your purchase funnel asks people to select the color so they can see what their future couch will look like in that color, and the second step asks for a credit card and purchase, people are far more likely to exit on the credit card step even if you’re executing it quite well due to the nature of the ask.

Visits, Sessions and Unique Pageviews

What it is: The number of visits within the time of your report. Adobe Analytics refers to this as “visits,” and Google Analytics refers to this as “sessions” (for multiple pages in a single visit) and “unique pageviews” (for an individual page, even over multiple sessions).

How to use it: This can help you see how many times people are coming to your website. For many products and services, a customer may have to visit your website and get information about your company over time (while building trust) before making a purchase.

“In order to tell how many times people are coming to the site you need to do a calculation — visits per visitor will tell you this,” advised Taylor Bartlinski, Senior Manager, Data Analytics, MECLABS Institute. “A visit per visitor value of 2 means that on average, users come to your site twice in the date range you have set. You can also compare visits per visitor on a page basis to see if users are returning to any particular page more often than others.”
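
That visits-per-visitor calculation is simply total visits divided by unique visitors. A minimal sketch with hypothetical visitor IDs:

```python
# Hypothetical visit log: one entry per visit, keyed by visitor ID.
visits = ["anna", "ben", "anna", "carla", "ben", "anna"]

visits_per_visitor = len(visits) / len(set(visits))
print(visits_per_visitor)  # 6 visits / 3 unique visitors = 2.0
```

In practice you would pull the two totals (visits and unique visitors) from your analytics platform for the same date range rather than from a raw log, but the division is the same.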

But a word of caution: This is an engagement metric. While it can be an important step in the funnel to lead to an ultimate conversion, it likely isn’t your end goal. So you may want to focus less on getting an increased number of visits and focus more on getting visits from your ideal customer. More visits from your ideal customer are what will ultimately lead to more sales and conversion.

Page Views

What it is: The number of times a page is … wait for it … viewed. Unlike with unique pageviews or visits, this metric can include multiple views from the same person, even if they simply hit refresh. While Google Analytics and Adobe Analytics refer to this by the same name, they differ slightly on the grammar: Adobe calls it page views and Google calls it pageviews.

How to use it: It could give you a sense of how popular certain pages are on your website. If you engage in conversion optimization on the most popular pages, you can increase your chances of getting more leads and more sales.

But a word of caution: As mentioned above, people hit refresh on webpages. They go back to a page multiple times just because they’re distracted by something else. This measurement could be showing not just popularity, but simply distracted browsing and user habits.

Clickthrough Rates

What it is: Essentially, clicking on a link. This could be clicking on a homepage link to get deeper into a site, for example. Or clicking on a specific product on a product gridwall. But it can also be clicks from offsite to the website, like clicks from an online advertisement or an email. This is shown in terms of total number of clicks as well as clickthrough rate (CTR) percentage.
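
The rate itself is just clicks divided by impressions (or deliveries, for email). A minimal sketch with made-up numbers:

```python
# Hypothetical ad performance numbers for illustration.
clicks = 125
impressions = 10_000  # times the ad or link was shown

ctr = clicks / impressions
print(f"CTR: {ctr:.2%}")  # 1.25%
```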

How to use it: This metric can help you track the effectiveness of not only parts of your website but also elements of your channel at driving prospects to the next step of the funnel.

When to de-emphasize it: If you’re paying for traffic (for example, a pay-per-click ad), your goal shouldn’t be to get the highest clickthrough rate on that particular advertisement. The goal should be to get the most clicks from customers who ultimately get through your funnel and make a purchase or take some other ultimate conversion action.

Unique Visitors and Users

What it is: The number of unduplicated visitors to your website within the time of your report. Adobe Analytics uses the term “unique visitors,” and Google Analytics uses the term “users.” One important caveat: different platforms calculate this metric in different ways (more on that below).

How to use it: This metric helps you determine how many actual people you’re reaching with your website. It can ultimately help you determine how many people take the desired action you want them to take, and help you optimize that number by running A/B and multivariate testing.

A simple word of caution: This metric may not perfectly track how many people visit your website. It is probably the most difficult metric to measure of any listed in this article: customers use multiple devices, and some use private browsing modes or otherwise block cookies, so analytics tools could double- (or triple-) count some of your visitors. It’s also important to understand how unique visitors and users are counted if you’re testing on your website.

Because of this, Adobe and Google might count unique visitors and users differently. For example, if a visitor was logged in to their Gmail account and visited the same website using a desktop device and a mobile device, Google may be able to track those visits as being from the same visitor where Adobe wouldn’t. And there are likely scenarios where Adobe would be able to tell a visitor is the same on multiple devices where perhaps Google could not.

A more complex word of caution: If you really want to get into the weeds, let’s address this question — how do unique visitors differ between Google Analytics, Google Content Experiments, Adobe Analytics and Adobe Target?

Keep in mind, a unique visitor is only unique in reference to a certain timeframe. So, the way you pull data from your testing platform (like Adobe Target or Google Content Experiments) when running a test may show a different number than your analytics platform because it may be accounting for a different timeframe.

To explain this better, Rebecca Strally provided the following scenario:

  • I ran a test for the entire month of May (31 days)
  • Visitor A came to my site once in May
  • Visitor B came to my site three times on three different days in May
  • Visitor C came to my site two times in a single day in May

All of these platforms define unique visitors in the same way, so if we were to pull a monthly report from each platform, we would have three unique visitors.

Where the discrepancy comes in is when you begin pulling daily data.

If you are pulling daily data from either Adobe Analytics or Google Analytics to cross-reference with your results from Adobe Target or Google Content Experiments, you will get different numbers. This is because Adobe Target and Google Content Experiments will not pull daily data; they will pull aggregate data for the entire test period.

Let’s look back at the scenario. If we pulled daily data from Adobe Analytics or Google Analytics, we would get five unique visitors (Visitor A = 1, Visitor B = 3, Visitor C = 1), but Adobe Target and Content Experiments would only be showing three unique visitors (Visitor A = 1, Visitor B = 1, Visitor C = 1), because they are looking at the entire month of May.

For your tests, you should pull data in aggregate so you don’t deflate the conversion rate by counting more unique visitors than were really in the test.
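
Strally’s scenario is easy to replicate in a few lines of Python; the visit dates below are invented to match the scenario:

```python
from collections import defaultdict

# (visitor, day-of-May) pairs matching the scenario above:
# A visited once; B visited on three different days; C visited twice in one day.
visits = [("A", 5), ("B", 2), ("B", 10), ("B", 20), ("C", 14), ("C", 14)]

# Aggregate pull (how Adobe Target / Google Content Experiments report):
# each visitor is counted once across the whole test period.
aggregate_uniques = len({visitor for visitor, _ in visits})

# Daily pull (what summing daily analytics reports gives you):
# each visitor is counted once per day they appear, then the days are summed.
per_day = defaultdict(set)
for visitor, day in visits:
    per_day[day].add(visitor)
daily_uniques_summed = sum(len(visitors) for visitors in per_day.values())

print(aggregate_uniques)     # 3
print(daily_uniques_summed)  # 5
```

The gap between 3 and 5 is exactly the discrepancy described above: Visitor B is counted three times in the daily pull but only once in the aggregate pull.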

How This Data Can Be Used to Better Serve Customers and Improve Results

Keep in mind, there is one flaw shared by every metric we’ve discussed: it tells you about the past, not the future. That’s why the testing scenario in the previous section is so important.

You can, of course, use these numbers to better understand customers and start predicting their future actions and, more importantly, what changes you can make to affect those future actions.

You would create a hypothesis. And run an online test.

For example, if your goal for an SEO landing page is to click through to other pages with deeper info on the topic, and you run tests to improve the headlines for those topics and reduce bounce rate, those numbers and that test helped you better serve customers and improve results.

Or, say you have a five-step process to guide people through choosing a nursing home. You run a test to reduce the friction required to move from step three to step four and, in so doing, reduce the exit rate. That is another example of using these metrics to better serve customers and improve results.

From each online test, you learn more about your customers. You turn simple numbers in a spreadsheet into a Technicolor view of your customers to help serve them better, and in so doing, improve your business results.

You Might Also Like

Online Testing online certification course — learn a proven methodology for executing effective and valid experiments

Marketing Research Chart: What metrics should you track?

Five Steps To Better Metrics: How one marketer leveraged Web analytics for an annual revenue increase of $500,000

Ecommerce Chart: How a low conversion rate can be a good thing


The post A Simple Guide for the Busy Marketer: Using data from online marketing and web analytics tools appeared first on MarketingExperiments.

Call Center Optimization: How a nonprofit increased donation rate 29% with call center testing

If you’ve read MarketingExperiments for any length of time, you know that most of our marketing experiments occur online because we view the web as a living laboratory.

However, if your goal is to learn more about your customers so you can practice customer-first marketing and improve business results, don’t overlook other areas of customer experimentation as well.

To wit, this article is about a MECLABS Institute Research Partner who engaged in call center testing.

Overall Research Partnership Objective

Since the Research Partner was a nonprofit, the objective of the overall partnership focused on donations. Specifically, to increase the total amount of donations (number and size) given by both current and prospective members.

While MECLABS engaged with the nonprofit in digital experimentation as well (for example, on the donation form), the telephone was a key channel for this nonprofit to garner donations.

Call Script Test: Initial Analysis

After analyzing the nonprofit’s call scripts, the MECLABS research analysts identified several opportunities for optimization. For the first test, they focused on two issues: the script failed to establish rapport with the caller, and it mentioned only the possibility of donating $20 per month, mentally creating a ceiling for the donation amount.

Based on that analysis, the team formulated a test. The team wanted to see if they could increase overall conversion rate by establishing rapport early in the call. The previous script jumped in with the assumption of a donation before connecting with the caller.

Control Versus Treatment

In digital A/B testing, traffic is split between a control and treatment. For example, 50% of traffic to a landing page is randomly selected to go to the control. And the other 50% is randomly selected to go to the treatment that includes the optimized element or elements: optimized headline, design, etc. Marketers then compare performance to see if the tested variable (e.g., the headline) had an impact on performance.
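
As an illustration of how such a random split is often implemented online (a hypothetical sketch, not the MECLABS tooling), many platforms hash a visitor ID so that assignment is effectively random across visitors yet stable for each individual visitor on repeat visits:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically split visitors roughly 50/50 by hashing their ID."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

# The same visitor always lands in the same group, so their whole
# journey is measured against a single experience.
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```

Hash-based assignment avoids storing an assignment table while still keeping each visitor’s experience consistent across sessions.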

In this case, the Research Partner had two call centers. To run this test, we provided optimized call scripts to one call center and left the other call center as the control.

We made three key changes in the treatment with the following goals in mind:

  • Establish greater rapport at the beginning of the call: The control went right into asking for a donation – “How may I assist you in giving today?” The treatment instead asked for the caller’s name and expressed gratitude for their previous giving.
  • Leverage choice framing by recommending $20/month, $40/month, or more: The control only mentioned the $20/month option. The addition of options allows potential donors to make a choice and not have only one option thrust upon them.
  • Include an additional one-time cause-related donation for both monthly givers and other appropriate calls: The control did not ask for a one-time additional donation. The ongoing donation supported the nonprofit’s overall mission; however, the one-time donation provided another opportunity for donors to give by tying specifically into a real-time pressing matter that the nonprofit’s leaders were focused on. If they declined to give more per month for financial reasons, they were not asked about the one-time donation.

To calibrate the treatment before the experimentation began, a MECLABS researcher flew to the call center site to train the callers and pretest the treatment script.

While the overall hypothesis stayed the same, after four hours of pretesting, the callers reconvened to make minor wording tweaks based on what they had learned. It was important to preserve the key components of the hypothesis; within that constraint, however, the callers could adjust the script so it sounded natural in their own voice.

The treatment was used on a large enough sample size — in this case, 19,655 calls — to detect a statistically valid difference between the control and the treatment.

Results

The treatment script increased the donation rate from 14.32% to 18.47% at a 99% Level of Confidence for a 29% relative increase in the donation rate.
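
The arithmetic checks out; here is a sketch of the lift calculation plus a standard two-proportion z-test. Note that the even split between the two call centers is an assumption for illustration; the article doesn’t state how the 19,655 calls were divided.

```python
import math

p_control, p_treatment = 0.1432, 0.1847
n_control = n_treatment = 19_655 // 2  # assumed even split (not stated in the article)

relative_lift = (p_treatment - p_control) / p_control
print(f"Relative lift: {relative_lift:.0%}")  # 29%

# Standard two-proportion z-test on the pooled conversion rate.
p_pool = (p_control * n_control + p_treatment * n_treatment) / (n_control + n_treatment)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_treatment))
z = (p_treatment - p_control) / se
print(f"z = {z:.1f}")  # well above 2.58, the two-sided threshold for 99% confidence
```

Even under conservative assumptions about the split, a 4.15-point absolute difference on samples this large clears the 99% Level of Confidence comfortably.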

Customer Insights

The benefits of experimentation go beyond the incremental increase in revenue from this specific test. By running the experiment in a rigorously scientific fashion — accounting for validity threats and formulating a hypothesis — marketers can build a robust customer theory that helps them create more effective customer-first marketing.

In this case, the “customers” were donors. After analyzing the data in this experiment, the team discovered three customer insights:

  • Building rapport on the front end of the script generated a greater openness with donors and made them more likely to consider donating.
  • Asking for a one-time additional donation was aligned with the degree of motivation for many of the callers. The script realized a 90% increase in one-time gifts.
  • There was an overlooked customer motivation: making one-time donations, not only the ongoing donations the organization sought. Part of the reason may be that the ideal donors skewed older, which made a long-term, macro-level commitment difficult and a one-time, micro-level gift much easier. (It also gave the nonprofit an opportunity to tap into not only the overall motivation of contributing to the organization’s mission but also contributing to a specific, timely issue.)

The experimentation allowed the calling team to look at their role in a new way. Many had been handling these donors’ calls for several years, even decades, and there was an initial resistance to the script. But once they saw the results, they were more eager to do future testing.

Can You Improve Call Center Performance?

Any call center script is merely a series of assumptions. Whether your organization is nonprofit or for-profit, B2B or B2C, you must ask a fundamental question — what assumptions are we making about the person on the other end of the line with our call scripts?

And the next step is — how can we learn more about that person to draft call center scripts with a customer-first marketing approach that will ultimately improve conversion?

You can follow Daniel Burstein, Senior Director, Content & Marketing, MarketingExperiments, on Twitter @DanielBurstein.

You Might Also Like

Lead Nurturing: Why good call scripts are built on storytelling

Online Ads for Inbound Calls: 5 Tactics to get customers to pick up the phone

B2B Lead Generation: 300% ROI from email and teleprospecting combo to house list

Learn more about MECLABS Research Partnerships

The post Call Center Optimization: How a nonprofit increased donation rate 29% with call center testing appeared first on MarketingExperiments.

Customer Theory: How to leverage empathy in your marketing (with free tool)

Think about every marketing message you saw yesterday. Every newspaper ad. Every email. Every sign being twirled around on the side of the street.

Did you stop to read each message? Watch every commercial? Think about the message? Decide if you should go for the call-to-action?

No, you didn’t, did you? You ignored the vast majority of the messages. A few you actually noticed and rejected. Fewer still you consumed. And maybe you acted on a handful.

And the reason is, when you saw most of those messages, you probably weren’t waiting to be sold. You were busy doing something else. Maybe something related; at best, you were looking for a solution to a problem. Or maybe something totally unrelated, in which case you didn’t even notice the message.

Now flip the script. That’s how you act as a customer, but when you’re the marketer, account executive, copywriter, art director … how do you approach each piece you create? You likely have a deep understanding of the product, the copy, even little details of the ad. Perhaps even a deep affection for the product, the landing page or the ad — after all, many marketers end up entering their work into awards shows because they’re so proud of it.

Bridging the customer-marketer divide

As a marketer, you need to do the seemingly impossible. You need to bridge this divide for your entire team. The divide between the customer and the marketer.

I found myself in this very situation recently while working on a video script for The BairFind Foundation, a nonprofit that uses sports marketing to raise awareness for missing children. MECLABS Institute has taken BairFind on, pro bono, as a Research Partner to use our conversion optimization methodology and practices, which we usually apply to business challenges, to help this nonprofit meet its own goals.

BairFind has signs in 151 Minor League ballparks across the nation, with pictures of missing children. It was recently featured in USA Today. League and team presidents were hungry for a video to play in their stadiums about the nonprofit organization, and it was my job to deliver.

So this was a quick-turnaround project, and I had little familiarity with the intended audience of the video. Ever find yourself in this situation? Here’s something that might help …

Free customer theory development tool

I took what I learned from the University of Florida/MECLABS Institute Communicating Value and Web Conversion graduate certificate program and began to build a customer theory dossier. I’ll show you how I used it in just a moment, but first — you can download a free version of it as well, and use it as a tool on your next ad, campaign or marketing initiative.

FREE CUSTOMER THEORY DEVELOPMENT DOWNLOAD

Step 0: Identify as many distinct customer profiles as necessary

Before you can even start building a customer theory, you must determine which type of customer you’re building that theory for.

Here’s why this pre-step is so important. If you’re building an ad or other marketing pieces with a strong, unique value proposition, it will speak very directly to a specific type of customer. Boom. Hit them square in the chest, so to speak.

You can’t do that if you try to be everything to everyone, if you’re blandvertising.

This is also important. While there are certain types of customers you shouldn’t try to serve because you aren’t the best solution for their needs, there are other types of customers you can serve.

Some marketing communications will speak to all those types of customers at once. But more likely, for most of your marketing campaigns, you’ll want to zero in on as unique and homogeneous a group as possible.

As an example, here are the possible customer profiles I listed for BairFind Foundation.

  1. Parents at a Minor League Baseball game
  2. Grandparents at a Minor League Baseball game
  3. Children at a Minor League Baseball game
  4. Adults with no children at a Minor League Baseball game
  5. Marketer from a retailer or other potential corporate sponsor
  6. Minor league team presidents
  7. MiLB league presidents
  8. Marketers at MiLB teams
  9. MiLB baseball players
  10. Sports and other local and national media

For the video script, I chose to focus on parents at a Minor League Baseball game. If you watch the video (embedded at the bottom of this article), you can see why that choice is important. I sought to grab their attention from the very beginning and hit them hard with something they could easily relate to.

I couldn’t have done that if I tried to write a video for all 10 of BairFind’s customer profiles. Even just adding a second customer profile would have made that harder.

This doesn’t mean that customers in those other profiles won’t be able to understand and perhaps act on the video. But it means I wrote the video with those specific people in mind.

Step 1: Create a list of preliminary customer insights

For my selected prospect profile, I began to list out some basic insights about the ideal customer — parents at a Minor League Baseball game.

I started with my own gut and intuition, and expanded using some basic internet research. This was, of course, a very small project. And a pro bono one at that. But if you have a larger, higher profile project, you might want to conduct deeper research to get these insights — social listening, focus groups, interviews, surveys, etc.

It helps that I’m somewhat in this demographic. (I am a parent, although the last time I attended a MiLB game was before I became a parent.) But this exercise is all the more important when you’re not in the target customer profile. Marketers often fall into the trap of “I’d want this” or “I’d want that.” But if you’re not the ideal customer for that product, the actual customer might want something very different.

So this tool helps you get as close as possible to a fundamental insight — not what you’d want if you were in the customers’ shoes, but what the customers in those shoes actually want themselves.

Here are the insights I came up with:

  1. Parents age 21-54
  2. Have children 0-16
  3. Limited external funds for entertainment
  4. Focused on having fun at the ballpark, not really thinking about other issues at that time
  5. Family oriented
  6. Diverse level of education
  7. Diverse ethnicities
  8. Don’t have much additional spare time to help community
  9. More likely than the general population to have smartphones
  10. Community minded

Step 2: Categorize these preliminary insights

Next, categorize these preliminary insights into attributes, context, desires and fears. As you do this, it will likely inspire you and your team to come up with new insights you hadn’t considered before.

The context is an important reminder. For example, you may view a print ad in isolation, nicely mounted on a presentation board. However, the customer will view the ad in a newspaper among many competing articles and ads vying for their attention. And beyond what’s in the newspaper, they may be reading in a crowded coffee shop or subway, or perhaps at home with children trying to get their attention.

In this case, we viewed the video in a studio on a nice, hi-def, superwide Apple monitor with superb audio speakers. However, the customer may be viewing it on a washed-out screen in a noisy stadium between innings.

In addition to the context, it’s important to understand your ideal customers’ desires and fears. We all move toward pleasure and away from pain. What are they trying to achieve? What are they trying to avoid?

You’ll note in my example below that not everything I included directly relates to the BairFind Foundation, missing children or the call-to-action. It’s very easy for us as marketers to focus only on what we want customers to do, or on the tiny sliver of their life that relates to our product or ask.

However, real human beings aren’t two-dimensional. And their experiences in life are much broader and deeper than just those that relate to your product.

And at the end of the day, all those perceptions ultimately affect how they regard your message. After all, as the Talmud says, “We do not see things as they are. We see things as we are.”

Attributes (Demographic Characteristics)

  1. Ages 21-54
  2. Diverse education level
  3. Diverse ethnicity
  4. Moderate household income (though 29% have a household income of $100K+)
  5. 78% own home

Context

  1. Family of four can see ballgame for $62
  2. Some fans attend just a few games per year; some are season ticket holders
  3. Between innings, they are distracted
  4. Receive many promos throughout the game
  5. Children will be going back to school soon
  6. Likely watching on a washed-out screen in a noisy stadium

Common Desires (Moves Toward)

  1. Experience budget-friendly entertainment
  2. Create happy memories together
  3. Be a part of the community
  4. Be the hero to their kids
  5. Be a good parent
  6. Be an upstanding member of the local community
  7. Relax with family
  8. Escape pressures of life
  9. See a future big leaguer
  10. See the local team win
  11. Have a story to tell their friends the next day
  12. Watch the mascot do something funny

Common Fears (Moves From)

  1. Something bad will happen to my children
  2. I could lose my job and not have enough money to support my family
  3. The home team will lose
  4. Will my kids throw a temper tantrum if I don’t buy them cotton candy at the game?
  5. Crowds and traffic leaving the game
  6. Violence will come to my country/my town/this baseball game
  7. Will this game get rained out?
  8. If I text a donation, will I be continually sent text messages?
  9. What if I think I know the missing kid, tell the cops, but I’m wrong?
  10. Will my kids need a nap at the game?
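If you keep these worksheets alongside other project assets, the four categories above (plus the unanswered questions from the next step) map naturally onto a simple data structure. Here is a minimal sketch in Python; the class, field names and sample entries are my own illustration, not part of the MECLABS tool:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerTheory:
    """One worksheet per customer profile (e.g., parents at a MiLB game)."""
    profile: str
    attributes: list[str] = field(default_factory=list)   # demographic characteristics
    context: list[str] = field(default_factory=list)      # where/how they encounter the message
    desires: list[str] = field(default_factory=list)      # moves toward
    fears: list[str] = field(default_factory=list)        # moves from
    unanswered: list[str] = field(default_factory=list)   # questions for future research

# Sample entries taken from the lists in this article
parents = CustomerTheory(
    profile="Parents at a Minor League Baseball game",
    attributes=["Ages 21-54", "Diverse education level"],
    context=["Distracted between innings", "Noisy stadium, washed-out screen"],
    desires=["Create happy memories together", "Be a good parent"],
    fears=["Something bad will happen to my children"],
    unanswered=["Is $2 the right amount to ask them to donate?"],
)

print(f"{parents.profile}: {len(parents.desires)} desires, {len(parents.fears)} fears")
# prints: Parents at a Minor League Baseball game: 2 desires, 1 fears
```

Keeping one such record per profile makes it easy to compare profiles side by side and to revisit the unanswered questions on the next project.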

Step 3: List unanswered questions about the prospect

Generate a list of the most important unanswered questions about the customer’s identity and behavior.

Unanswered Questions about the Prospect (Parents at a Minor League Baseball game)

  1. Will they be too distracted to pay attention to a video between innings?
  2. Will the donate message make them more or less likely to look at the sign?
  3. Do they understand how to text to donate?
  4. Is $2 the right amount to ask them to donate?
  5. Is a video the right way to ask them to donate?
  6. Would they refer a friend to donate?

These first three steps are part of the MECLABS Seven-Step Customer Theory Development Framework taught in the University of Florida graduate program. The full framework also includes conducting experiments to answer some of these questions and build a robust customer theory grounded in observed customer behavior.

In the case of this project — a simple video for a nonprofit — we were unable to go through all seven steps and conduct experimentation. However, I still find this step helpful because it instills humility as part of the process. As much as you have certain assumptions about the customer, it forces you to admit there’s still a lot you don’t know.

It also doesn’t hurt to look back at these questions when you’re working on the next project, see what the results of the previous project were, and continue to build a base of knowledge about the customer.

Getting everyone on the same page

In addition to helping the creators of the advertisement (copywriters, art directors, video producers, etc.) get into the mind of the customer, this tool helps everyone working on the project — from an account coordinator to the vice president of marketing, on both the agency side and the client side — get on the same page about which customers will (and won’t) be addressed and what is important to those customers.

This can help reduce rework and lay the groundwork for successful creative pitches to clients.

That’s what happened in this case. After I filled out the Customer Theory tool, I sent it over to Dennis Bair, Founder, The BairFind Foundation, and asked for his perspective on the ideal customer before writing the first word of the script.

Once I incorporated his insights, I wrote a script and sent it over to Dennis. He loved it, providing only minor feedback. Here’s the result:

It’s just an example of how successful copywriting is about so much more than just great writing. So much fantastic writing never sees the light of day because it never gets the green light.

Successful copywriting requires customer intimacy, but it requires client intimacy as well. Get on the same page with everyone you must collaborate with, and have the client share their key insights about the customer before you begin the creative process.

And the same is true in reverse if you’re on the brand side. Be proactive and make sure your internal or agency creatives have the same understanding of the customer as you do. As Sun Tzu said, “Every battle is won or lost before it’s ever fought.”

If you’d like that free tool to use with your own clients, agencies and marketing projects, here it is again …

FREE CUSTOMER THEORY DEVELOPMENT DOWNLOAD

You can follow Daniel Burstein, Senior Director, Content, MarketingExperiments, on Twitter @DanielBurstein.

You might also like

Customer Theory: How We Learned from a Previous Test to Drive a 40% Increase in CTR

The Marketer as Philosopher book

Customer Theory: What do you blame when prospects do not buy?


The post Customer Theory: How to leverage empathy in your marketing (with free tool) appeared first on MarketingExperiments.

6 Good (and 2 Bad) B2B and B2C Value Proposition Examples

Email subscriber Jennifer recently wrote to us saying, “I’m a big fan of MECLABS and your value proposition work. I’d love to see a story with specific examples of five great value propositions.”

Well, Jennifer, let’s dive right in. The first example of what a value proposition should look like is from the University of Florida/MECLABS Institute Communicating Value and Web Conversion graduate certificate program …

PR Newswire

Here is an example of a value proposition argument (sometimes these are referred to as the short-form value proposition statement) from the program’s MMC 5435 Messaging Strategy & the Centrality of the Value Proposition course. It starts with the word “because” in order to answer the question, “If I am your ideal customer, why should I buy from you instead of any of your competitors?”

Because PR Newswire has the most established[1] and largest[2] news distribution network in the industry, enabling you to more reliably reach your target audience[3].

    1. Industry leader for 59 years. Established relationships with major news sources such as Yahoo!, MarketWatch and The New York Times.
    2. Distribution to over 200,000 media points and 8,000+ websites, dedicated journalist website with 30,000 active members per month, 150 mobile apps that carry PRN content (broadest in the industry).
    3. PR Newswire provides flexible and cost-effective distribution options to help you reach niche markets across the U.S.

When this short-form statement was applied to a landing page in an experiment, it resulted in a 22% increase in clickthrough. You can see the winning treatment below and how it naturally flowed from the value proposition statement.

B2B database marketing solution

Here is another great value proposition example from the UF/MECLABS program; this one is from the MMC 5436 Messaging Methodologies and the Practice of Conversion Optimization course.

This is for a company that provides database marketing solutions for small to medium-sized businesses.

Because we have the most comprehensive[1] and accurate[2] lead database.

  1. Includes access to over 210 million U.S. consumers, 14 million U.S. businesses, and 13 million executives.
  2. We have a team of 600 researchers that verify the data daily and make over 26 million verification calls a year, 80,000 calls a day.

That value proposition was applied to the following webpage:

When the webpage was tested as part of an experiment, it generated a 201% increase in lead capture rate.
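Both short-form statements above share a repeatable shape: a "Because ..." sentence whose claims carry numeric markers, each backed by a numbered evidential. Purely as an illustration, that shape could be templated like this (the function and its sample data are my own sketch, loosely adapted from the database example above, not a MECLABS tool):

```python
def value_prop_statement(claims):
    """Render a short-form 'Because ...' value proposition statement.

    claims: list of (claim_phrase, evidential) pairs. Each claim phrase gets a
    numeric marker, and each evidential is listed under the matching number.
    """
    marked = " and ".join(f"{phrase}[{i}]" for i, (phrase, _) in enumerate(claims, 1))
    statement = f"Because {marked}."
    evidentials = [f"  {i}. {ev}" for i, (_, ev) in enumerate(claims, 1)]
    return "\n".join([statement, *evidentials])

# Hypothetical claims in the spirit of the B2B database example
demo = value_prop_statement([
    ("we have the most comprehensive lead database",
     "Access to over 210 million U.S. consumers and 14 million U.S. businesses"),
    ("our data is verified daily",
     "A team of 600 researchers makes over 26 million verification calls a year"),
])
print(demo)
```

The useful discipline here is not the code but the constraint it encodes: every claim in the statement must point to a concrete, numbered evidential, which is what separates a forceful value proposition from blandvertising.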

Realtor Kristan Cloud-Malin

I can’t share every example from the UF/MECLABS program, of course. So here are a few publicly available examples.

“A value proposition argument or statement rarely makes a direct appearance on a page or in an advertisement. Typically, it’s more likely that you’ll find a single highly exclusive evidential expressed in marketing collateral,” Gregory Hamilton, Director of Education, MECLABS Institute, and Associate Professor, University of Florida, told me.

Here’s an example from a Realtor in Jacksonville who used nice evidentials to provide credibility to the way she expressed her value proposition.

Delta Airlines

Another example Greg gave me was Delta Airlines, which does a nice job expressing an “only” factor in an otherwise commoditized industry.

Even when Delta has to convey a more generic message — essentially, “we’re big” — it uses evidentials and specificity to convey a more forceful value proposition.

The Honest Company

This next example comes courtesy of Gaby Paez, Associate Director of Research, MECLABS Institute, who came across it while conducting a summary competitive analysis for a MECLABS Research Partner.

The company has a strong, unique value proposition, using a good mix of health and safety, and also unique designs.

And the Honestly Free Guarantee promises customers very safe products.

Here’s how The Honest Company expresses different elements of its value proposition:

  • Exclusivity: It’s the manufacturer (not just a reseller) and offers the unique Honestly Free Guarantee
  • Credibility: Founders’ video, giving back timeline, awards and certifications
  • Clarity: Video with co-founders (human touch) and unique and clear section on product pages for third-party awards and certifications

“They are masters to me. Every customer touch point has been carefully designed to provide or remind value,” Gaby said.

The Honest Company is also an example of why it’s important to ensure there is true value behind your marketing proposition. When The Wall Street Journal reported that its laundry detergent contained an ingredient the company had pledged to avoid, it faced backlash from customers and eventually agreed to drop the disputed ingredient.

So, keep the above in mind as an example of good presentation strategies to communicate or support value. But Gaby pointed out you must also remember that if your company is not truthful, things will backfire for it sooner or later.

Apple iPod launch

For over nine minutes, Steve Jobs takes the audience step-by-step through a unique value proposition when launching the iPod.

First, he discusses the appeal of music.

He leverages exclusivity by showing charts that communicate how other music options don’t have this feature set.

His credibility comes not only from his position and previous successes, but also from physically showing the product and leveraging Apple’s strong primary value proposition of design.

And he walks through each feature methodically (“three major breakthroughs”), not just listing a few bullet points, to ensure clarity of communication.

An effective value proposition is a unique value proposition

One key element of all of these value propositions is that they have an “only” factor.

So, here’s an example of value propositions that are not unique. Can you tell the difference between HP and Epson?

Epson says, “Where there’s business, there’s Epson.”

HP says, “HP: everywhere you do business.”

And then it goes on to say, “HP provides the products, services and solutions that help you simplify IT. Because your business is everywhere you are.”

This is an example of what I like to call blandvertising. A copywriter put those words together, and they sound vaguely businesslike and professional, but they also just kind of wash over you. What do they really mean?

I don’t blame the writer; I blame the marketer. If you’re working with a freelance writer or agency, you need to make sure they are empowered with a clear and forceful value proposition. Or else, the writing you get back will be well-formulated and sound professional but also be fairly meaningless to prospective customers.

So, what could Epson and HP do differently? Well, I’m guessing the resulting conclusion they’re trying to get across is, “We’re big. And we can solve a lot of your problems. All over the world.”

If that’s the intended goal, specificity could really help them get across a powerful value proposition. For a nice example of how to convey this value proposition in a forceful way, just scroll up and see Delta.

You can follow Daniel Burstein, Senior Director, Content, MarketingExperiments, on Twitter @DanielBurstein.

You might also like …

Do You Have The Right Value Proposition?

Value Proposition Development Online Certification Course Level 1

Value Proposition: 3 Worksheets To Help You Craft, Express And Create Derivative Value Props

Download the free 30 Minute Marketer: Value Proposition

Learn more about improving your value proposition and the MECLABS Conversion Sequence Heuristic here

 

The post 6 Good (and 2 Bad) B2B and B2C Value Proposition Examples appeared first on MarketingExperiments.

The behind-the-scenes story of how we optimized outdoor advertising that was featured in a USA Today article

You might have read about The BairFind Foundation in USA Today Sports Weekly magazine recently: Minor league ballpark signs raise awareness on missing kids. We helped optimize the signs discussed and pictured in that article, and I wanted to share our thinking behind those changes so you can get ideas for applying the Conversion Sequence Heuristic to your own outdoor advertising and other non-digital optimization.

Photo credit: Edwine Pierre Louis  

More eyes looking equals more children found

BairFind is a Jacksonville-based nonprofit founded by former minor league pitcher Dennis Bair, who has been hard at work helping to find missing children for almost two decades. “It is sports marketing — we are leveraging the power of sports,” is how Dennis described BairFind in the article. “Minor league baseball has been our proving ground. We’re doing something the families cannot do for themselves, which is to get pictures of their missing children out there to millions.”

For the past year or so, MECLABS has taken on BairFind as a Research Partner, although a Research Partner with a unique pro bono relationship due to the foundation’s small size and nonprofit nature.

Before

Here is an image of the original BairFind sign. Before you scroll down and see our analysis, you might want to look at the sign yourself and, using your own conversion optimization skills, consider how you might optimize the sign.

In fairness to the all-volunteer team at The BairFind Foundation, the signs were working pretty well — 183 children were located in the 2016 baseball season.

However, as with any conversion optimization scenario, the opportunity always exists to improve the messaging to increase conversion. Given a sufficient amount of traffic, your messaging will convert at some level. And in BairFind’s case, that traffic is substantial — 40 million baseball fans.

But this is where it helps to sometimes get an external opinion. Because you live, eat and breathe your product every day, every week, every month, it can be difficult to see your messaging with fresh eyes.

When Dennis first discussed the sign with us in our boardroom, there were a lot of things that were unclear to us, and we had a lot of questions simply because we were outsiders to the world of finding missing children.

In addition, Adam Lapp, Senior Director, Services Operations, MECLABS Institute, led an analysis of the sign using our methodology and MECLABS Conversion Sequence Heuristic. Here are a few key optimization opportunities he identified:

  • No clarity around the images of children — What are they? Are they missing? What should I do with them?
  • “Join The Search” headline may cause confusion-related friction — What search? How do I join?
  • BairFind.org is small and difficult to see
  • Viewers may lack motivation because they feel like they can’t make a big difference
  • No clear call-to-action to go anywhere (if you don’t see the small bairfind.org)
  • Official minor league baseball charity message might be too small to understand
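For readers unfamiliar with the heuristic behind these bullets: MECLABS publishes its Conversion Sequence heuristic in the following form (I am transcribing it from MECLABS materials; it is not spelled out in this article, and the coefficients signal relative weight for thinking, not literal arithmetic):

```latex
C = 4m + 3v + 2(i - f) - 2a
```

where C is the probability of conversion, m is the user's motivation, v is the clarity of the value proposition, i is the incentive to take action, f is friction and a is anxiety. Each of the sign's issues above maps onto one of these terms: unclear images and headline add friction, the "can I really make a difference?" doubt lowers motivation, and the text-to-donate worry raises anxiety.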

After

Here is a look at the sign after we optimized it.

A headline was added above the images — “Have you seen these missing kids?” — to provide needed clarity around what the pictures are. The headline solves the previous difficulty/confusion-related friction, and a bright, yellow color was used to attract attention.

“We increased the size of the children’s images because they are truly the most important part of the posters. We squeezed every extra pixel we could out of the available space to increase their prominence and visibility,” said James White, Senior Brand Designer, MECLABS Institute.

The headline was changed to “Help Bring Kids Home” to add clarity to what search the sign is asking them to join. A subhead was added — “by looking, you just helped” — to reinforce their activity.

Messaging was added to the bottom of the sign to solve the issue of no clear call-to-action — “Text FIND to 91999 to donate   BairFind.org.” Also, a bright color brings attention to it.

“The new copy at the top and bottom were very important new pieces which immediately conveyed the purpose of the signs and let the viewer know how they can help. We selected a vibrant ochre yellow taken directly from the central image to contrast well against the black background and draw the eye quickly to these important additions,” James said.

Results

“This year, we had big time improvements to the visual aspects of our signs, thanks to MECLABS.  MECLABS took our sign, which was very basic, and optimized the photos and really made the sign attractive to look at and made the children’s photos pop. They also helped us to clarify our message.  And the feedback from all of the GMs [general managers] of the Minor League Baseball teams, the fans and the league presidents has been 100%: ‘Wow!  Holy moly, these signs are sharp!’ Thanks to MECLABS, they are even more impressed with us and even prouder to feature our signs in their ballparks,” Dennis told me.

During the 2016 baseball season, 183 children who were featured on the signs were found.

During the 2017 baseball season so far, 334 children featured on the signs have been found — and there’s still a month left in the season.

Now, unlike a landing page A/B split tested in a controlled environment to ensure there are no validity threats, we can’t be sure that the optimization changes helped find more children. There were other changes as well. For example, BairFind’s network of minor league stadiums featuring the signs grew from 139 to 151.

And more children were featured on the signs. Instead of featuring the same four children on each side of the sign, four children were featured on one side and a different four on the other. (However, if you’ve ever tried to increase clickthrough in an email or conversions on a website simply by featuring more links or CTAs, you know you might get some increase, but it is incremental and certainly not proportional to the number of links added.)

Applying conversion optimization methodology beyond digital marketing

Overall, the point of this article is not to show an example of a valid experiment, like we normally do, but rather to give you some ideas for using core conversion optimization methodologies beyond the landing page.

If you’ve spent any time following MarketingExperiments and the MECLABS Institute, I’m sure you’ve seen how the Conversion Sequence Heuristic can be used to optimize landing pages, emails and PPC ads.

However, the Conversion Sequence isn’t really optimizing any of those digital marketing channels. It’s really being used to optimize a brand’s interaction with a customer’s thought sequence. And those digital channels are merely the avenue to facilitate (and test) that communication.

For example, I’ve used this heuristic when making recruiting trips to universities to show students how the Conversion Sequence can help them pick the best job when entering the workforce. I’ve used it to discuss how PR professionals can optimize their own work. And as shown in this article, we’ve used it to optimize physical, outdoor advertising signs. I’ve even heard talk around the labs of people using the Conversion Heuristic to help optimize their relationship with their kids or spouse. What have you used the Conversion Sequence to optimize? Message me on Twitter @DanielBurstein and let me know.

You can follow Daniel Burstein, Senior Director, Content, MarketingExperiments and MECLABS Institute, on Twitter @DanielBurstein.

You might also like

Data Analysis 101: How A Nonprofit Used Data To Secure A Critical Business Decision And Help Find 125 Missing Children

Beyond Landing Pages: Conversion rate optimization strategies

Landing Page Optimization: An Overview Of How One Site Increased Leads By 155%

Participate in a research project and drive conversion increases

Learn more about applying conversion optimization methodology and the MECLABS Conversion Sequence Heuristic here

The post The behind-the-scenes story of how we optimized outdoor advertising that was featured in a USA Today article appeared first on MarketingExperiments.

4 Lessons About B2B Inbound Marketing from a Sunday Morning in the Coffee Shop

I was in Starbucks the other day, and in walks an older gentleman. I couldn’t help but notice that people kept focusing on him and chatting him up — in line, while waiting for a drink, etc.

I could overhear the conversations a bit, so I asked someone sitting near me, “Was that guy in the NFL or something?” He responded, “Yeah, that’s Rocky Rochester. He was defensive tackle for the New York Jets in Super Bowl III.”

He happens to sit by me, and we strike up a conversation. He notices I’m wearing a Hofstra shirt, and he says, “Hey, we used to practice there.” Then, when I notice his Super Bowl ring on his finger and mention it, he does something that simply shocks me.

He just hands it to me. So, I’m sitting there, holding a ring from Super Bowl III. The Super Bowl of Super Bowls. Broadway Joe. The Guarantee.

I share this story because inbound marketing was on the top of my mind in that coffee shop on Sunday morning — the team at our sister company, MarketingSherpa, was putting the finishing touches on the Quick Guide to Inbound Marketing for B2B  — and I realized this story was the perfect analogy for effective inbound marketing. Often, we get so focused on data and metrics, technology and automation that we overlook everyday human interactions like this.

However, normal human interactions are what we should be trying to emulate with our marketing, especially inbound marketing.

Lesson #1: B2B inbound marketing gets you recognized

The first lesson speaks to the power of inbound. Whatever you’re selling — marketing automation tools, hospital diagnostic equipment, construction software — your buyers have a list in their head. It’s the consideration list.

I need to buy a B2B product. I can’t consider every possible company. Who’s going to make that short list?

When you create an engaging inbound B2B program and build an audience, you’re like Rocky Rochester. No longer are you just another guy in a Starbucks. You’re someone everyone wants to talk to. And hear from.

And the value of that has a ripple effect through your marketing. When prospects are at a trade show scanning booths, name recognition makes them much more likely to engage. When they get a phone call or email from someone representing your company, they’re more likely to give it a small opening. And, when they’re making that all-powerful consideration or RFP list, you’re more likely to be on it.

Lesson #2: Have a good story to tell

Recognition isn’t enough. Prospects must have the desire to actually want to engage with that brand.

Sure, it helps to have the biggest brand in the world in your industry. However, if customers know they will only be sold to when they engage with you, they’re much less likely to seek out your content or subscribe to your newsletter.

The reason everyone was engaging Rochester in that coffee shop is that they knew he would have good stories to tell.

On the flip side, if everyone had recognized him as, say, a vacuum cleaner or insurance salesman, they likely would have had that moment of recognition as well. However, they also likely would have gone out of their way to avoid him, not engage him.

Lesson #3: Effective B2B inbound marketing is relevant

When we were talking, Rochester noticed my Hofstra shirt, and he mentioned how the Jets would practice at Hofstra.

It’s a minor detail. And it happens naturally in a human conversation.

But all of your inbound marketing should, as closely as possible, replicate these human interactions and seek to provide relevant, helpful content to your audience.

Do you give your audience different email newsletters to subscribe to based on their interests? Do you de-dupe email sends when you know someone has already taken advantage of the offer — for example, removing people who have already registered for a webinar from the invite?

What can you do to make your B2B inbound program more relevant to customers?

Lesson #4: Surprise and delight your audience

Once they know who you are, are interested in your story, and find it relevant, these are still busy people with a million different concerns. Even if they’re reading your blog post, they’re probably skimming it and only half reading it. And how likely are they to share it with their social network?

To stick out from the clutter, you really need to delight them.

When I noticed Rochester’s ring, I didn’t expect him to hand it to me. It was so far above and beyond my expectations that I didn’t even think to take a picture of the ring on my finger until the moment was well over, and I had left the Starbucks. D’oh!

How can you surprise and delight your prospects? How can you go above and beyond? Here’s a great example from the Quick Guide to Inbound Marketing for B2B with New Relic, a software analytics company.

The company had a photo booth at an event and turned the photos of visitors — along with their answer to the phrase “Data helps me ___” — into virtual picture billboards it shared on social media. A great inbound strategy — customers hearing from customers.

But, the New Relic team didn’t stop there. They decided to surprise and delight. They turned the virtual billboards into tiny physical billboards that they then mailed to the customers. What do you think happened when they received those billboards in the mail?

They were surprised and delighted, so they shared that story with their peers on social media. Just like I’m sharing my minor brush with Super Bowl history with you.

“It’s really important to connect on that personal level, because no matter how big the companies that you’re selling to may be, they’re still people. And any time you can find a way to engage that’s a little unexpected and fun, that makes a huge difference,” said Baxter Denney, VP of Growth Marketing at New Relic.

You can follow Daniel Burstein, Senior Director of Editorial Content, MarketingSherpa, on Twitter @DanielBurstein.

You might also like

B2B Inbound Marketing: Top tactics for social media, SEO, PPC and optimization

Inbound Marketing: How a B2B company used a content marketing strategy to improve customer experience

B2B External Communications: How IBM conveys the value of complex products, spotlights innovative employees and entrusts employees with social media

Inbound Marketing for B2B: 10 tips to attract and engage your audience in a helpful (not salesy) way