Can you both A/B test and hyper-personalize your shopper experiences?

by Nick Budincich | Apr 12, 2022


No matter the optimization strategy – for ads, landing pages, emails, and so on – A/B or multivariate testing is employed just about everywhere. And of course… it should be!

Stating the obvious

The logic behind simultaneously testing A and B versions is that the audience is split: one audience segment gets the A version, another segment gets the B version, and a winner is determined. Got it.

This helps you learn what works best and subsequently send more traffic to the best-converting version.

The same logic holds for multivariate testing: different combinations of testable attributes are exposed (or delivered) to segments of the total audience. The goal? Finding a high performer and doubling down.
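To make that concrete, here's a minimal Python sketch of one common way to implement the split (the variant names and attributes are hypothetical): a stable hash of the shopper ID assigns each shopper to the same bucket on every visit, for either a 50/50 A/B split or a multivariate grid of combinations.

```python
import hashlib
from itertools import product

def bucket(shopper_id: str, salt: str, n_buckets: int) -> int:
    """Deterministically map a shopper to one of n_buckets (stable across visits)."""
    digest = hashlib.sha256(f"{salt}:{shopper_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def ab_variant(shopper_id: str) -> str:
    """Classic 50/50 A/B split."""
    return "A" if bucket(shopper_id, "hero-test", 2) == 0 else "B"

# Multivariate: every combination of testable attributes is its own variant.
HEADLINES = ["Free shipping", "20% off today"]    # hypothetical attribute values
BUTTON_COLORS = ["green", "orange"]
COMBOS = list(product(HEADLINES, BUTTON_COLORS))  # 2 x 2 = 4 combinations

def mv_variant(shopper_id: str):
    return COMBOS[bucket(shopper_id, "mv-test", len(COMBOS))]

print(ab_variant("shopper-123"))   # e.g. "B"
print(mv_variant("shopper-123"))   # e.g. ("20% off today", "green")
```

The hash-based assignment is a design choice, not the only option; a simple random draw also works, but the hash keeps a returning shopper in a consistent bucket for the life of the test.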

I’m stating the obvious… but simply to highlight that this optimization methodology works through tests between defined segments: relatively static groups (pre-defined, or randomly generated 50/50) that let you treat your next audience to an optimized experience, predicated on the experiment’s lesson.

And I’m highlighting it in order to mention that other optimization strategies operate in different ways.

The desire to “personalize” an experience or campaign is currently getting attention as the leading optimization best practice. If we want to think about combining the “testing” methodology with a “personalized” one, we first need to define personalization and explore if and how it can truly interact with A/B testing.

Two ways to define personalization 

(1) The first way to define optimizing-through-personalization: using an identified characteristic of your customer audience to target a group of shoppers with a custom-tailored promotion, version, or experience. In short, we are creating an audience where each member shares similarities and giving them a customized message.

However, if we’re working under this definition, how will we A/B test? There are two methods:

  • Take the specific segment (e.g., gender = male, source = Snapchat, region = southeast USA, …) and split it into A and B groups. The group is defined by attributes and thus already usable for personalization; by splitting it, you can test sub-versions of the personalized promotion and get the most out of the group (see the sketch after this list).
  • The second method is less compelling: use this specific segment itself as the A version, and define another segment as the B. The two groups are different by intentional design – so the test doesn’t really do any “deeper” personalization. Instead, it validates and confirms the assumptions you had about that audience’s affinities.
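Here's that first method as a minimal sketch, with made-up shopper records and a stand-in send_promo function: filter down to the attribute-defined segment, then randomly split that segment to test two sub-versions of the personalized promotion.

```python
import random

def send_promo(group, version):
    for shopper in group:
        print(f"send {version} to {shopper['id']}")  # stand-in for your email/ad platform

# Hypothetical shopper records with known attributes.
shoppers = [
    {"id": "s1", "gender": "male", "source": "snapchat", "region": "southeast-usa"},
    {"id": "s2", "gender": "male", "source": "snapchat", "region": "southeast-usa"},
    {"id": "s3", "gender": "female", "source": "email", "region": "northeast-usa"},
]

# Step 1: personalization: the segment is defined by shared attributes.
segment = [s for s in shoppers
           if s["gender"] == "male"
           and s["source"] == "snapchat"
           and s["region"] == "southeast-usa"]

# Step 2: A/B test *within* the segment: two sub-versions of the same
# personalized promotion, split randomly 50/50.
random.shuffle(segment)
half = len(segment) // 2
send_promo(segment[:half], "promo_v1")
send_promo(segment[half:], "promo_v2")
```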

(2) The next way to define personalization: tailoring that promotion (or version, or content) to one specific person, based on any set of characteristics known about that shopper.

In this case, instead of using a similar-looking audience cohort, we narrow that “grouping” down until we can optimize at the most granular level possible: one individual shopper.

For example: if a past customer bought a cute set of wine glasses six weeks ago, you can tailor specifically to them – “Hey {Name}, a few weeks back you bought some {wine glasses} – keep the party going with our new selection of {e.g., cocktail tumblers}!”
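Mechanically, that message is just a template filled from the shopper's profile. A minimal sketch, assuming a hypothetical profile assembled from purchase history plus a recommendation model's output:

```python
from string import Template

# Hypothetical profile built from purchase history and a product
# recommendation model (field names are made up for illustration).
profile = {
    "name": "Alex",
    "last_purchase": "wine glasses",
    "recommended": "cocktail tumblers",
}

template = Template(
    "Hey $name, a few weeks back you bought some $last_purchase - "
    "keep the party going with our new selection of $recommended!"
)

print(template.substitute(profile))
```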

Okay, but again: how will we A/B test?

When a team personalizes website experiences or marketing campaigns dynamically for each individual, it raises the question: how do you split-test with an audience sample size of one?

There are two reasons I’m calling attention to this.

The first reason is as follows…

For sales and marketing funnels, testing is elemental – an expected, foundational practice. And it is often of the A/B or multivariate variety.

Yet if you have the capability to take all the known data about an individual person and create a custom experience just for them – landing pages, search results, product recommendations, in-cart upsell content, retention emails – does it even make sense to A/B test anymore? Is it needed? Where does it fit into your optimization-through-personalization strategy?

Personalization grows from a different type of testing

The reality is that any personalized experience you set up wasn’t simply generated out of thin air.

Rather, all of your data was used to define it! 

Historical website browsing data, previous customer transactions, email conversion rates, new web traffic source information, and more were analyzed and crunched to determine A) the shopper behaviors and B) the relationship between those behaviors and certain products, purchases, or abandonment.

How is all of that powered? 

Invariably, it’s driven by artificial intelligence (AI) and machine learning (ML). The backbone of predictive analytics is built with deep learning neural networks, collaborative filtering, and other statistical models.

Those models “train” on historical data, and then they learn continuously via real-time interactions with your personalization widgets.

In other words, the models are testing their own accuracy and validity against new and historical data to generate the one-to-one style of personalization.
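To make "collaborative filtering" a little less abstract, here's a heavily simplified toy sketch of the idea: factor a shopper-by-product interaction matrix into latent vectors, then score unseen products for each shopper. Production engines are far more sophisticated; this only shows the shape of the technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interaction matrix: rows = shoppers, cols = products, 1 = bought/clicked.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)

k = 2                                            # latent-factor dimension
P = rng.normal(scale=0.1, size=(R.shape[0], k))  # shopper factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))  # product factors

# "Training" on historical data: gradient descent on reconstruction error.
for _ in range(500):
    err = R - P @ Q.T
    P += 0.05 * (err @ Q)
    Q += 0.05 * (err.T @ P)

scores = P @ Q.T                   # predicted affinity for every shopper/product pair
print(scores[0].argsort()[::-1])   # product ranking for shopper 0, best first
```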

So while testing may seem like it isn’t necessary or involved – especially when thinking in A/B or multivariate terms – the testing is simply done in a different manner.

In fact, if a prediction made by one of these ML-powered personalization engines doesn’t produce the expected result, it is treated as a mini test whose outcome is fed back in – tweaking and iterating to improve the next prediction.

(The “learning” in machine learning is another way of saying test, evaluate, tweak, re-test, repeat. It’s almost like an organism reacting to its environment.)
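One common pattern for that test-evaluate-tweak loop is a multi-armed bandit – though any given personalization engine may implement its feedback loop differently. A minimal epsilon-greedy sketch, with hypothetical promo variants:

```python
import random

variants = ["promo_a", "promo_b", "promo_c"]   # hypothetical options
shown = {v: 0 for v in variants}               # times each variant was served
converted = {v: 0 for v in variants}           # times it led to a conversion

def choose(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-known variant; occasionally explore."""
    if random.random() < epsilon or not any(shown.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: converted[v] / shown[v] if shown[v] else 0.0)

def record(variant: str, did_convert: bool) -> None:
    """Every prediction is a mini test; its outcome updates the estimates."""
    shown[variant] += 1
    converted[variant] += did_convert

# One cycle of test, evaluate, tweak, re-test:
v = choose()
record(v, did_convert=False)  # the real outcome comes from the live shopper
```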

What is the second reason?

The second reason is to emphasize that A/B testing can still be used, even when tailoring the web experience for each individual shopper.

There are various ways that teams can execute tests like this – but for our purposes here I’ll share two.

First: 

Start by custom-tailoring certain elements of your web experience with machine-learning-powered personalization, but leave other website elements available for other optimization tests.

For example: while the hero section, recommended products, and color scheme may dynamically adapt to each individual based on their data profile, the landing page itself, the social proof, and promo banners can be used for A/B testing. Those tests can be based on some higher-level segmentation, randomization, or whatever strategy the team selects.

So in this scenario, you can optimize super granularly – personalizing the purchase path for each prospective buyer – while also layering in controlled creative split tests for whatever promotions or tactics your team brainstorms.
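One way to keep the two strategies from colliding is to declare up front which page elements belong to the personalization engine and which stay in the split-testing pool. A minimal sketch (all element names and helpers here are hypothetical):

```python
# Elements the ML engine personalizes per shopper vs. elements reserved
# for classic A/B tests (all names are hypothetical).
PERSONALIZED = {"hero_section", "recommended_products", "color_scheme"}
AB_TESTED = {"landing_page_layout", "social_proof", "promo_banner"}

assert PERSONALIZED.isdisjoint(AB_TESTED)  # never let the strategies overlap

def ml_engine_variant(element: str, shopper_id: str) -> str:
    return f"{element}:personalized-for-{shopper_id}"  # stand-in for the real engine

def ab_test_variant(element: str, shopper_id: str) -> str:
    group = "A" if hash((element, shopper_id)) % 2 == 0 else "B"
    return f"{element}:{group}"                        # stand-in for a real bucketer

def render_element(element: str, shopper_id: str) -> str:
    if element in PERSONALIZED:
        return ml_engine_variant(element, shopper_id)
    if element in AB_TESTED:
        return ab_test_variant(element, shopper_id)
    return f"{element}:default"

print(render_element("hero_section", "shopper-123"))
print(render_element("promo_banner", "shopper-123"))
```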

Second:

This second one is a bit more interesting: it aims to A/B test the efficacy of the individual-personalization methodology itself.

You can split your website traffic 50/50, where half of the shoppers receive a control version of the website experience – the “A” – and the other half get a hyper-personalized experience based on correlations and predictions from your 1st party data – the “B”.
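A minimal sketch of that holdout, reusing the stable-hash idea from earlier: half of the traffic is pinned to the control experience, half to the personalized one, and conversion rates are tallied per arm.

```python
import hashlib

def experience_for(shopper_id: str) -> str:
    """Stable 50/50 split: control (the A) vs. ML-personalized (the B)."""
    h = int(hashlib.sha256(f"holdout:{shopper_id}".encode()).hexdigest(), 16)
    return "control" if h % 2 == 0 else "personalized"

results = {"control": {"visits": 0, "conversions": 0},
           "personalized": {"visits": 0, "conversions": 0}}

def log_visit(shopper_id: str, converted: bool) -> None:
    arm = experience_for(shopper_id)
    results[arm]["visits"] += 1
    results[arm]["conversions"] += converted

# Toy traffic; in production these events come from your analytics pipeline.
log_visit("shopper-1", converted=True)
log_visit("shopper-2", converted=False)

for arm, r in results.items():
    rate = r["conversions"] / r["visits"] if r["visits"] else 0.0
    print(f"{arm}: {rate:.2%} conversion over {r['visits']} visits")
```

Before declaring a winner, you'd want enough traffic in each arm and a significance check (a two-proportion z-test is a common choice).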

What would this tell you? 

In short: does personalizing at the individual-shopper level prove to be a higher-converting method? (And does the global consensus on personalization actually not hold up for your business?? It’s possible!)

But sincerely, this is a highly recommended practice – because the reality is that not every business will improve with a machine-learning-powered personalization strategy.

Whether it’s because a business doesn’t have enough overall data volume, the nature of the business itself, or some other complexity: sometimes human analysis is just as effective (if not more accurate) at producing high-performing tests or personalization decisions.

At Jarvis ML, we prefer to test against a control to ensure that our customers are getting meaningful results from machine learning personalization intelligence. It doesn’t make sense not to test it. If you’re investing in this technology, you should feel confident that it delivers a positive, high-ROI impact!

Tying it all together

So, to step back: the purpose of A/B testing is to experiment, learn, and then optimize – ultimately to perform better on the metrics that are core to your business, typically conversions and revenue.

The point of personalization is largely the same – improving conversions and revenue – while simultaneously giving customers a better experience that incentivizes loyalty.

As we’ve seen, “testing” is still happening either way; it just occurs in a different manner.

We can also now see how A/B testing can indeed happen at the same time as this hyper-personalization tactic. A mix of all of these learnings can and should be applied when optimizing the shopper experience on your website and other digital touchpoints.

We believe this thought exercise is worth walking through: it clarifies the goals of testing, the reality of working with two optimization methods at once, and ultimately what’s possible as a marketing team evolves.

Continue learning about personalization for your anonymous shopper audiences by checking out this guide: How to Convert your Anonymous Shoppers using 1st Party Web Analytics Data.


Nick Budincich
Nick's objective in life is to create good, happy, fulfilling experiences and memories for himself and everyone he interacts with.
