Experimentally Designed Variants

Learn about the insights gained from experimentally designed Variants.


A Variant is a marketing message that Persado generates using its AI algorithm and proprietary marketing language database. The Variants we create for Experiments are uniquely designed to obtain maximum insights about which elements of your message are resonating most with your audience. Persado uses experimental design to create these Variants and then deploys them in two phases: Exploration and Broadcast. While Predictive Content also employs Variants, they are not experimentally designed.

This article dives deeper into how these Variants are created and deployed, and reviews the types of insights you can expect when running an Experiment.

Insights Gained by Phase

1. Exploration Phase

In the Exploration phase, we use experimental design to create Variants (typically 16) and deploy them, plus your control, to a test audience. This allows Persado’s platform to “explore” and discover what works and what doesn’t.

There are two key messages we typically identify by the end of the Exploration phase:

  • The Best Observed message: the Variant actually observed to perform best in the Exploration phase

  • The Best Predicted message (or messages): the message Persado’s statistical model predicts will perform best. It combines the winning elements from across the tested Variants and is only possible through our experimental design and AI.

Why 16 Variants?

To maintain the integrity of our experimental design, the smallest number of Variants we can test is constrained by the number of values in the elements being tested: the design needs to cover the combinations of values across those elements. So, if your promotional email has at least two 4-value elements (for example, 4 variations of your subject line and 4 variations of your offer description), then the smallest possible design is 4 x 4 = 16.
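As a rough, illustrative sketch (not Persado’s actual tooling), the Python snippet below simply enumerates the 4 x 4 = 16 combinations that arise when two elements each have 4 candidate values. The element names and values are hypothetical placeholders.

```python
from itertools import product

# Hypothetical candidate values for two 4-value elements; in a real
# Experiment these come from Persado's platform, not hand-written lists.
subject_lines = ["SL-1", "SL-2", "SL-3", "SL-4"]
offer_descriptions = ["Offer-1", "Offer-2", "Offer-3", "Offer-4"]

# Full factorial of the two elements: 4 x 4 = 16 Variants.
variants = [
    {"subject_line": sl, "offer_description": od}
    for sl, od in product(subject_lines, offer_descriptions)
]
print(len(variants))  # 16
```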

Over years of working with customers, Persado has found that 16 Variants yield useful insights while remaining efficient to produce and approve. This is not a hard limit, however; we can use a different experimental design and test more or fewer Variants (e.g., 8) depending on your objectives.

Analyzing Elements of a Variant

When we use experimental design, our technology can track which elements of each Variant have the most impact, something that simply can’t be accomplished with A/B testing. We break each Variant down into 6 key elements so we can understand, during testing, which elements contribute most to your KPIs.

We define each element as follows:

  • Narrative: Language that combines emotional and descriptive elements; the story that the message conveys

  • Emotional: Words and phrases that have an emotional impact

  • Descriptive: The different ways of describing the offer or product

  • Functional (CTA): The language that directly impels or prompts a specific action, response, or process

  • Formatting: The stylistic or structural elements of the message (like symbols)

  • Positioning: The placement of all of the different components of the message

We then tag these elements during testing for tracking purposes. You’ll see these tags in your reports for both Exploration and Broadcast.
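For illustration only, a tagged Variant could be represented as a simple record like the sketch below; the field names and tag values are hypothetical and do not reflect Persado’s internal schema.

```python
from dataclasses import dataclass

@dataclass
class TaggedVariant:
    """One Variant broken down into the six tracked elements (illustrative)."""
    variant_id: int
    narrative: str       # the story the message conveys
    emotional: str       # e.g., "excitement"
    descriptive: str     # e.g., "free two-day shipping"
    functional_cta: str  # e.g., "Shop now"
    formatting: str      # e.g., "emoji in subject line"
    positioning: str     # e.g., "offer before CTA"

variant_3 = TaggedVariant(
    variant_id=3,
    narrative="limited-time story",
    emotional="excitement",
    descriptive="free two-day shipping",
    functional_cta="Shop now",
    formatting="emoji in subject line",
    positioning="offer before CTA",
)
```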

Persado’s Experiment Reports give you a full analysis of the outcomes of each individual Experiment or Predictive Content deployment, and each report is unique to that Experiment’s specific structure. After reading your Experiment Report, you’ll have insight into which Variants performed best, how each message measured up against your KPIs, and what did and didn’t resonate with your audience.

Insights Gained

In Exploration, we learn which emotions and elements of your message resonated most with your audience and make a data-driven decision on which creative combinations should be deployed in the Broadcast phase. We track impressions, clicks, and any other KPIs you’ve determined with your Persado team.

Once Exploration is complete, you can see the Best Predicted message and the breakdown of elements that make it up. To get a more in-depth look at how this message was created from the best-performing elements of each Variant, click “Show me how this was generated.”


Exploration also provides heatmaps that show which language elements are contributing most to performance, something Predictive Content can’t do.
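To give a feel for the idea behind these heatmaps and the Best Predicted message, here is a deliberately simplified sketch (it is not Persado’s statistical model): it averages observed conversion rates over the Variants that used each element value, then keeps the best-scoring value of each element, which is roughly how a predicted-best combination can be assembled. All tag values and numbers are hypothetical.

```python
from collections import defaultdict

# Hypothetical Exploration results: each Variant's element tags and conversion rate.
results = [
    {"emotional": "excitement", "functional_cta": "Shop now", "rate": 0.031},
    {"emotional": "excitement", "functional_cta": "Claim it", "rate": 0.027},
    {"emotional": "gratitude",  "functional_cta": "Shop now", "rate": 0.024},
    {"emotional": "gratitude",  "functional_cta": "Claim it", "rate": 0.021},
]

def main_effect(results, element):
    """Average conversion rate for each value of one element (a simple main effect)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in results:
        totals[row[element]] += row["rate"]
        counts[row[element]] += 1
    return {value: totals[value] / counts[value] for value in totals}

# Heatmap-style scores per element, keeping the best-scoring value of each.
best_combination = {}
for element in ("emotional", "functional_cta"):
    scores = main_effect(results, element)
    best_combination[element] = max(scores, key=scores.get)

print(best_combination)  # {'emotional': 'excitement', 'functional_cta': 'Shop now'}
```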


You can see how every deployed Variant performed, along with the control. In the example shown, Variants 16, 3, and 15 resulted in the highest conversions, and you can also view the emotions and element tags they use. All of these metrics are available for viewing in Portal.


You can also view the aggregated KPIs tracked across all Variants deployed during the Exploration phase.

NOTE: Certain Exploration views only populate once statistical significance is reached for the metric. You can toggle between the available metrics on the Experiment Report by clicking the tabs above the report.
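As one hedged illustration of what reaching significance can mean for a conversion metric (Persado’s actual methodology may differ), the sketch below runs a standard two-sided, two-proportion z-test comparing a Variant against the control; the counts are made up.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided two-proportion z-test; returns the z statistic and p-value."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: a Variant vs. the control during Exploration.
z, p = two_proportion_z_test(conversions_a=310, n_a=10_000,
                             conversions_b=250, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05
```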

2. Broadcast Phase

The second phase of an Experiment is the Broadcast phase, or the “earning” phase of the Experiment. In this phase, we decide which winning content to deploy based on our findings in Exploration. We may deploy a Best Observed Variant, a Best Predicted Variant, or both to maximize impact for the remainder of the audience. On average, Broadcast consists of 1 to 3 Variants in addition to the control.

Hot Tip

Persado recommends against using your Best Observed message as your control in the Broadcast phase. Not only does this produce redundant learning, since the message has already been explored in market, but you also won’t be able to see the true uplift of your ultimate champion message over your original control.

Insights Gained

Broadcast is structured as a head-to-head test for simplicity, so it doesn’t produce the same insights as Exploration, which uses experimental design. You will, however, still see your winning message and how every Variant tested performed against your KPIs.
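One common way to express the outcome of a head-to-head comparison like Broadcast is relative uplift over the control, sketched below with made-up numbers (not real results):

```python
# Hypothetical Broadcast conversion rates.
control_rate = 0.025   # original control
champion_rate = 0.031  # winning message carried forward from Exploration

relative_uplift = (champion_rate - control_rate) / control_rate
print(f"Relative uplift: {relative_uplift:.0%}")  # Relative uplift: 24%
```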
