Message Testing Issue Diagnosis

Multivariate Message Testing not working the way you expect? Check out this article to diagnose issues.

Written by Scott Gutelius


Motiva's Multivariate Message Testing is an extraordinarily powerful tool for content optimization, ensuring that your audience segments receive content that resonates with them.

Plus, you can go well beyond simple A/B testing. Instead of just two variations, you can create three, four, five, or even more distinct variations of your messaging to really get a sense of what your audience wants, as well as increase your own productivity.

Instead of needing to create multiple A/B tests to find an eventual winner, you can simply put all the variations in one experiment and let your audience's response tell you what works.

If some variations don't resonate, that's no problem. Underperforming variations automatically stop sending, ensuring that the highest percentage of your audience receives your strongest messaging.

New to email Message Testing? We also have an article covering Best Practices, which will give you an overview of how to get the most out of your Message Testing steps.

Having problems?

Sometimes an email Message Testing step doesn't behave the way you expect. It happens. Below is a list of issues that can pop up when using Message Testing, along with our recommended actions.

As always, if you're seeing something not listed here, feel free to reach out to the team, and we'll be happy to investigate!

Issue: The Message Testing Step Failed to Start

The individual email report for the Motiva Message Testing step displays a notice that the step failed to initiate:

The clue here is that second line: "Optimizer does not have enough contacts to run."

For a Message Testing step to initialize and launch successfully, enough contacts must flow into the step for the results to be statistically reliable. Below that threshold, a single errant open or click carries too much "weight" in the results and can lead you to make incorrect assumptions about your content and your audience, which in turn will damage your email marketing strategy.

We've found that 150 contacts per day per variation is the minimum, and more is always better. Don't worry, though. We won't make you memorize that equation. Instead, just keep an eye on the Minimum and Recommended numbers of contacts (highlighted in red below) during the configuration of the Message Testing step:
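If you'd like a quick sanity check before you open the configuration screen, the math is simply 150 contacts per day multiplied by the number of variations. Here's a minimal sketch of that calculation (the 150-per-variation figure comes from this article; everything else, including the function name and the example variation counts, is just for illustration):

```python
# Back-of-the-envelope check for Message Testing volume, based on the
# guideline above: at least 150 contacts per day per variation.
# The step's configuration screen remains the authoritative source for
# the Minimum and Recommended numbers.

MIN_PER_DAY_PER_VARIATION = 150  # from the guideline in this article

def minimum_daily_contacts(num_variations: int) -> int:
    """Smallest daily audience for a test with this many variations."""
    return MIN_PER_DAY_PER_VARIATION * num_variations

for n in (2, 3, 5):
    print(f"{n} variations -> at least {minimum_daily_contacts(n)} contacts/day")
# 2 variations -> at least 300 contacts/day
# 3 variations -> at least 450 contacts/day
# 5 variations -> at least 750 contacts/day
```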

Here's a closer look at the message:

The closer you can get to the Best Results number, the better your data will be. And if you go over? That's excellent!

Issue: Message Testing did not find a winner

The test did not identify a winner despite sending to the recommended number of contacts.

You have the recommended number of contacts, you have multiple variations to test, but the experiment doesn't yield a clear winner. What's happening?

In cases like this, the cause usually comes down to a lack of distinction between your variations. In other words, the similarities between the variations outweighed any differences between them, so the engagement metrics ended up being close enough that no clear winner emerged.
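To make that concrete, here's a toy illustration. This is not Motiva's actual statistical model (the article doesn't describe it); it simply uses a textbook normal-approximation confidence interval to show why near-identical open rates can't be separated:

```python
# Toy illustration (not Motiva's algorithm): when variations have
# nearly identical open rates, their uncertainty bands overlap, so no
# variation can be declared a confident winner.
import math

def open_rate_interval(opens: int, sends: int, z: float = 1.96):
    """Observed open rate with a ~95% normal-approximation interval."""
    rate = opens / sends
    margin = z * math.sqrt(rate * (1 - rate) / sends)
    return rate, rate - margin, rate + margin

# Hypothetical results: three variations, 450 sends each, with open
# rates only fractions of a percentage point apart.
for name, opens in (("A", 90), ("B", 91), ("C", 92)):
    rate, lo, hi = open_rate_interval(opens, 450)
    print(f"{name}: {rate:.2%} (95% CI {lo:.2%} to {hi:.2%})")
# A: 20.00% (95% CI 16.30% to 23.70%)
# B: 20.22% (95% CI 16.51% to 23.93%)
# C: 20.44% (95% CI 16.72% to 24.17%)
# The intervals overlap almost completely: the data can't tell the
# variations apart, so no clear winner emerges.
```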

There are two paths to take in these situations:

  1. Create variations that are more distinct from one another. Take some risks and see if they create opportunities, or uncover new segments to target. Be Bold! Remember, if a variation severely underperforms, Motiva will automatically stop sending it, which minimizes your risk. We'll ensure that the highest percentage of your audience receives your strongest messaging.

  2. Just because the test didn't meet the confidence threshold doesn't necessarily mean it was a failure. If you look at the results, you may see an indication of which message was strongest, meaning that, given enough time and contacts, it might have emerged as the winner.

Here's what we mean. In the image at the beginning of this section, the three variations appear to have finished with nearly identical results. But if you look at the complete report, you'll get a sense of which one is strongest:

Although the three estimated open rates were within a quarter of a percentage point of each other, the third variation (depicted in the graph by the turquoise line) looks strongest overall, with a confidence percentage of 47.6%.

Be careful when lowering the confidence threshold in these situations, especially if the number of contacts is close to the minimum. The smaller the volume of contacts, the greater the impact each contact has on the overall results. So if there's a momentary spike in engagement from a small sample size, the Message Testing step may prematurely interpret that spike as the clear winner (which is why more contacts make for more actionable results).
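To put rough numbers on that "weight": the smaller the audience, the more a single open or click moves the measured rate. A quick back-of-the-envelope calculation (the sample sizes here are hypothetical):

```python
# How much one additional open shifts the measured open rate at
# different audience sizes (sizes are hypothetical, for illustration).
for sends in (150, 450, 1500, 5000):
    shift = 1 / sends  # one contact's share of the measured rate
    print(f"{sends:>5} sends: one open moves the rate by {shift:.2%}")
#   150 sends: one open moves the rate by 0.67%
#   450 sends: one open moves the rate by 0.22%
#  1500 sends: one open moves the rate by 0.07%
#  5000 sends: one open moves the rate by 0.02%
```

At 150 sends per variation, a handful of early opens can swing the measured rate by several percentage points, which is exactly the kind of noise that can masquerade as a winner.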

Issue: No Improvement in Open Rate

The test identified a winner, but it wasn't higher than the baseline.

Yup, this happens, too, and it's usually related to the previous issue of not finding a winner: The message variations you are testing aren't all that different from each other, or from the types of messages you've been sending previously.

If you're generally happy with your baseline results, then small improvements may be enough. But if you really want to see improvement, then taking a few risks might work in your favor. Try different tones in the subject line or body copy, or different messaging in the pre-header or CTA.

If your metrics continue to be flat, then take a look at our Message Testing Best Practices article for more ideas of what you can test.

We hope this helps you troubleshoot some of the issues you might come across in your Multivariate Message Testing with Motiva. And, of course, if you have an issue not shown here, feel free to reach out to our support team, and we'll be happy to take a look.
