A new tool is available to help you make data-backed decisions around Predictive Content: Performance Prediction Scores!
Performance Prediction Scores are calculated using Persado’s Knowledge Base and proprietary LLMs, which aggregate data across multiple brands, industries, campaigns, and languages. These scores tell you how a message is predicted to perform relative to the full set of messages in your industry in our Knowledge Base. Scores are presented on a scale from 0 to 100 (or a 4-level scale for IT/GR accounts: average, good, great, excellent), with 0 being the lowest and 100 the highest score.
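As a rough illustration of the two presentations, the sketch below maps a numeric score onto the 4-level scale. The cut-off values are assumptions made for this example only; the article does not state how Persado buckets numeric scores into levels.

```python
def score_to_level(score: float) -> str:
    """Map a 0-100 Performance Prediction Score onto the 4-level scale.

    The thresholds below are illustrative assumptions, not Persado's
    actual bucketing rules.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "excellent"
    if score >= 75:
        return "great"
    if score >= 50:
        return "good"
    return "average"
```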
You can use scores to:
Guide your revisions to Variants by seeing whether our AI anticipates a positive or negative impact on performance
Learn which words and phrases are predicted to drive performance
Compare how your control is predicted to perform against Persado Variants
If you’re interested in seeing Performance Prediction scores alongside your Variants for Predictive Content, please contact your account representative to turn this feature on.
Performance Prediction Scores in Action
View Scores: When Predictive Content Variants become available for your review, you’ll see scores in colored circles (or colored pie charts for IT/FR accounts) next to each Variant, indicating their Performance Prediction score.
Inspect Scores: Hover over a Performance Prediction score to get more information. In this example, the first Variant is predicted to perform better than 69% of messages in the financial services industry, and uses a top emotion for the account (encouragement).
Revise Variants as Usual: As always with Predictive Content, you can make edits to Variants. Performance Prediction Scores preview how these changes affect a Variant's anticipated performance.
Performance Prediction scores will appear alongside any potential alternative phrases.
You’ll also see a flag if the alternative is not predicted to perform as well as the original.
Once you select an alternative, the Performance Prediction score for the Variant will automatically update to reflect your choice.
If you make manual edits to a Variant, the Performance Prediction score will automatically update once you save, so you can see if your edits have an impact!
Deploy Variants as usual once approved!
Best Practices
Here are some best practices to keep in mind when considering Performance Prediction scores:
A small difference between scores isn’t something to fixate on. As a benchmark, aim for Variants to have a Performance Prediction score at least 10 points higher than the control's.
Use Performance Prediction scores to compare the anticipated performance of Variants within a single campaign, rather than across different campaigns, where the product or offer itself could affect the score. Similarly, avoid comparing scores across different time periods (even for a very similar campaign, offer, or promo), since the models are constantly evolving to learn and adjust their calculations.
Comparing performance scores is not an accurate way to estimate anticipated uplift. (Scores are generated by a regression model that estimates the probability that a specific phrase will perform better than any other phrase.)
These scores are not guarantees; the best way to ensure performance is to test and learn what works for your audience. Any content you choose will be validated again in market with metrics like clicks and orders, which feed back into our Knowledge Base over time. This is why it’s so important to upload your results to Persado Portal!
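The 10-point benchmark above can be sketched as a simple check. The function name, signature, and default margin are illustrative; only the 10-point rule of thumb comes from the best practices.

```python
def meets_benchmark(variant_score: float, control_score: float,
                    margin: float = 10.0) -> bool:
    """Return True when a Variant clears the suggested benchmark of
    scoring at least `margin` points above the control.

    The default margin reflects the 10-point rule of thumb from the
    best practices; everything else here is an illustrative sketch.
    """
    return variant_score - control_score >= margin
```

For example, a Variant scoring 72 against a control of 58 clears the benchmark, while one scoring 65 does not.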
FAQs
Will Performance Prediction scores appear for past campaigns?
No; Performance Prediction scores are not retroactive. They will appear on all Predictive Content campaigns after your Persado representative turns on the feature.
What data is used to calculate Performance Prediction scores?
We use engagement data (e.g., clicks) from your particular industry to calculate these scores. Our Knowledge Base contains over 10 years of engagement data (200B interactions annually) attributed to tagged words and phrases. We measure how individual words and phrases impact performance, which enables us to predict how well any given message, composed of various words and phrases, will perform in market.
One caveat to keep in mind: the more a given word or phrase has been tested, the more certain we can be in its performance score. If your brand uses language Persado has never tested before, it might take some time to build up high-confidence scores.
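The word-and-phrase mechanism described above can be illustrated with a deliberately simplified toy sketch. The phrase table, contribution values, neutral prior, and averaging rule are all invented for this example; Persado's actual model is proprietary and works differently.

```python
# Purely illustrative toy model: assume each known phrase has a learned
# contribution, and a message's predicted score is the average of the
# contributions of the phrases it contains. All names and numbers here
# are invented; this is not Persado's proprietary model.
PHRASE_CONTRIBUTIONS = {
    "limited time": 78.0,
    "don't miss out": 64.0,
    "just for you": 71.0,
}
DEFAULT_CONTRIBUTION = 50.0  # untested phrases get a neutral prior


def predict_message_score(phrases: list[str]) -> float:
    """Average the (toy) contributions of the message's phrases."""
    if not phrases:
        return DEFAULT_CONTRIBUTION
    total = sum(PHRASE_CONTRIBUTIONS.get(p, DEFAULT_CONTRIBUTION)
                for p in phrases)
    return total / len(phrases)
```

Note how a never-before-tested phrase falls back to the neutral prior, mirroring the caveat above that unfamiliar language starts with lower-confidence scores.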
Is it okay to launch a Variant that doesn't have a "green" performance score?
Yes, even if a score is yellow (for example), you're still welcome to deploy that content in market, especially if the score is better than your control's score.
I see an AI Alternative with a higher Performance Prediction score than the Variant Persado provided. Why wasn’t it originally suggested?
When our Brand Content Strategists (BCS) choose content for your brand, they take more than the performance score into account. They craft a cohesive message across multiple touchpoints, keeping the language natural and non-repetitive while ensuring the strongest touchpoints all have strong language. They also consider other recent content for your brand, avoiding the same words or phrases as a recent send, and they aim to generate new insights for your brand by testing into different emotions and descriptive language, to keep the language fresh and keep learning. Finally, they always want to be sure the content is on brand (at times, AI Alternatives can stray from your brand voice in the name of performance). Balancing all of this against the performance score means the content with the highest possible Performance Prediction score isn't always the optimal choice for a given situation.