0:05
Let’s get started on module 2, “How to Evaluate Creative Performance.” In the last module, we talked about how there’s no single metric that correlates directly with ROAS. Unlike First Media, the example we used of a brand that focused exclusively on share rate when evaluating their ad creative, for us
0:25
that singular metric does not exist in e-commerce. So as we ran into this problem inside of our agency and with our own brands, we began to look back at historical hierarchy-of-effects models. The phrase “hierarchy of effects” just refers to the idea that there are systems or models for a series of behaviors
0:46
that have to happen in sequence to lead to a desired outcome. One of the most legendary versions of these is something called AIDA, or “Attention, Interest, Desire, Action,” which basically states that these behaviors happen in order: first, you must capture someone’s attention; then you must earn their interest; then trigger their desire before you can
1:07
get them to complete an action. It represents a model for evaluating a series of behaviors that you want a customer to take before they make a purchase, and it has been used in advertising for over 100 years. It’s a classic hierarchy-of-effects model. So we started evaluating it and asking: is there a way we could give it a modern application using the data available to us
1:29
inside of Facebook Business Manager? And if we could combine those metrics, could we develop a set of metrics to use as a signal for what makes an ad likely to sell? In other words, could we go back to our car example and, rather than simply saying that an ad is broken, look at the component parts of an ad
1:49
and begin to figure out what is needed at each stage of this model in order to get to a sale? So what we came up with was a set of metrics that relates to each stage of the model. For attention, we created the metric of three-second views divided by impressions. We’ve all heard the phrase that we need to create thumb-stopping ads.
2:10
We know that on any given day users are scrolling almost a mile on their phones. If you go and visit One Mile Scroll, you’ll get a sense of how much content a user is consuming on any given day. So for us as advertisers, we have to answer the question: what percentage of users are we getting to actually stop and consume content?
2:32
We look at this metric, three-second views divided by impressions, expressed as a percent, to answer the question: for what percent of users are we actually capturing attention? This is really critical because you get billed on an impression basis within Facebook’s ad environment. So what you’re going to see is that if this number is really
2:52
low, say 10% of all impressions result in a 3-second video view, that means you are paying for a bunch of impressions that aren’t actually resulting in consumption of your content. So this metric becomes critically important. Second, interest is expressed in average watch time: how long are they staying with our material?
3:13
So if we’ve stopped them and we have their attention, now how interested are they in the content that we’re sharing? How many seconds are they staying with our content? And then third, desire we express in outbound CTR: if we have a call to action such as “Shop Now,” which simply means click through to our website,
3:33
are we getting people to express a desire to participate in that action? We sometimes say here at CTC that our job as an advertiser is to sell the click. At the end of the day, there are a lot of things that go into whether or not someone makes a purchase: the speed of your website, the quality of the information, the price of the product, your return
3:54
rates, all sorts of pieces that go into the sale. As an advertiser, once someone has clicked on the ad, our job is done. So we use outbound CTR as a signal for an expression of desire.
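To make those three definitions concrete, here is a minimal sketch of the calculations for a single ad. The numbers and field names are hypothetical placeholders, not real account data, and deriving average watch time from total watch seconds is just one plausible way to approximate the figure the platform reports.

```python
# Minimal sketch of the three AIDA-style metrics for one ad.
# All numbers and field names are hypothetical placeholders.
ad = {
    "impressions": 120_000,
    "three_second_video_views": 20_400,
    "total_watch_time_seconds": 51_400,  # summed across all video plays
    "outbound_clicks": 996,
}

# Attention: what share of paid impressions became a 3-second view?
attention = ad["three_second_video_views"] / ad["impressions"]               # ~17%

# Interest: roughly how long are viewers staying with the content?
interest = ad["total_watch_time_seconds"] / ad["three_second_video_views"]   # ~2.52 s

# Desire: what share of impressions clicked through to the website (outbound CTR)?
desire = ad["outbound_clicks"] / ad["impressions"]                           # ~0.83%

print(f"attention {attention:.1%}, interest {interest:.2f}s, desire {desire:.2%}")
```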
4:15
Ultimately, the action that we want is still a purchase, and it’s still measured by return on ad spend. But the question was: how often are we capturing people’s attention, how much interest are they expressing by spending time with us, and are they expressing a desire to learn more about our product by clicking through to our website? We wanted to understand if there was a relationship between that set of metrics and the ultimate purchase. So to do this, here’s what we did. This gets a little bit into the weeds on the data,
4:36
so stay with me. I want you to look at this slide and focus on what you see in the red box. We use a tool called Supermetrics. Supermetrics acts as a pipe between the Facebook API and Google Sheets: it simply allows us to port information and data from inside Business Manager into a Google Sheet
4:57
where we can build formulas to assess specific metrics that aren’t available inside of Facebook Ads Manager. So what we will often do is take our Facebook account and port in all of our ad-level data with these five metrics: 3-second video views, impressions, video average watch
5:18
time, outbound CTR, and ROAS. You’ll see those in columns G, H, and I, and then we’ll build column F, 3-second views to impressions, by simply dividing columns D and E.
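If you prefer to do this step outside of Google Sheets, here is a rough pandas equivalent of that column-F calculation. The file name and column labels are assumptions about how an ad-level export might be organized, not actual Supermetrics field names.

```python
# Rough pandas equivalent of the column-F calculation described above.
# The file name and column labels are assumed, not actual Supermetrics fields.
import pandas as pd

ads = pd.read_csv("ad_level_export.csv")  # hypothetical ad-level export

# Column F in the sheet: 3-second video views divided by impressions.
ads["three_sec_views_to_impressions"] = (
    ads["three_second_video_views"] / ads["impressions"]
)

print(ads[["ad_name", "spend", "three_sec_views_to_impressions",
           "video_avg_watch_time", "outbound_ctr", "roas"]].head())
```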
5:39
Now, when we pull this data, here is an important note on how we’re isolating it to make it as accurate as possible. All the data we reference for this study was pulled by aggregating all of our prospecting spend exclusively, agency wide, across all of our clients in 2019. The other thing we do is pull all of the sales out of the data, so that what we’re really trying to focus on
6:00
is what is driving evergreen prospecting growth. What I mean by that are the ads that we’re running all the time that are driving the bulk of our growth spend for new customer acquisition. That’s really what we’re focusing on when thinking about what kind of creative works because at the end of the day, the bulk of your growth is going to come from the ability to prospect
6:20
effectively. So once we’ve pulled all of this data in, we create a threshold for ads that have spent over $100, because we want to eliminate small sample sizes from our data. So we pull in every ad from our ad account that’s spent more than $100 across the entire year.
6:40
Then what we do is create a baseline by figuring out a weighted average of each of those four metrics across the account. So if we look at our red box here, where you see “total account,” that bottom line represents the baseline weighted average of these four metrics. The first one you see there is, or could also
7:02
be expressed as, 17% 3-second video views to impressions. So for this ad account, what we see is that 17% of the impressions we serve result in a 3-second video view. The average watch time is 2.52 seconds. And the outbound CTR is 0.83 percent on average.
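Continuing from the DataFrame sketched earlier, the $100 spend threshold and the account baseline might look something like this. The video does not say how the average is weighted, so spend-weighting is an assumption here.

```python
# Sketch of the spend threshold and the weighted account baseline,
# continuing from the `ads` DataFrame above. Spend-weighting is an assumption;
# the video only says the baseline is a weighted average.
import numpy as np

qualified = ads[ads["spend"] > 100].copy()

metrics = ["three_sec_views_to_impressions", "video_avg_watch_time",
           "outbound_ctr", "roas"]

baseline = {m: np.average(qualified[m], weights=qualified["spend"])
            for m in metrics}

print(baseline)  # e.g. {'three_sec_views_to_impressions': 0.17, ...}
```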
7:25
So those, across all of the ads that we run, represent an average result. Then what we did is create this characteristic, or this ad measurement, that we call A-grade ads. A-grade ads are ads that outperform the baseline in every one of those metrics. So in other words, their performance
7:47
is above 17% on 3-second video views to impressions, above 2.52 seconds on video average watch time, and above 0.83 percent on outbound CTR. Ads that outperform the average in all three metrics we call A-grade ads. B-grade ads outperform the baseline in two categories, C-grade in one,
8:10
and F-grade ads don’t outperform the baseline in any category. This allows us to look at individual ads on a letter-grade basis and start to understand and compare how letter grades perform across the account.
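Continuing the same sketch, the letter-grade assignment is just a count of how many of the three AIDA metrics beat the baseline; the function and column names here are illustrative, not CTC's.

```python
# Sketch of the letter-grade logic, continuing from `qualified` and `baseline`:
# count how many of the three AIDA metrics beat the account baseline,
# then map that count to a grade.
aida_metrics = ["three_sec_views_to_impressions", "video_avg_watch_time",
                "outbound_ctr"]

def grade_ad(row, baseline):
    wins = sum(row[m] > baseline[m] for m in aida_metrics)
    return {3: "A", 2: "B", 1: "C", 0: "F"}[wins]

qualified["grade"] = qualified.apply(grade_ad, axis=1, baseline=baseline)
print(qualified["grade"].value_counts())
```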
8:32
So the question was: if we could get an ad that outperformed the account average in all three of those metrics, again, 3-second video views divided by impressions, average watch time, and outbound CTR, how would those ads perform on a ROAS basis versus the baseline average? And this is what we found. You can see the graph expressed here in the table view. The light green ads are the baseline; this data is from a client of ours called Martin Bao. The dark green
8:53
ads are the A-grade ads, and you can see the performance. Not only do they outperform the baseline, obviously by definition, in the AIDA metrics, but in ROAS as well. In fact, across all of our clients agency wide, we found that A-grade ads produce a 26% increase in ROAS on average.
9:13
So in other words, if we could look at those three indicators and see that they were outperforming the account average, the likelihood was that they were also outperforming on a sales basis. So just as we were searching for that singular metric like share rate, we have now found it in the form of what we call A-grade ads.
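As a last piece of the same sketch, here is one way you might compare average ROAS by letter grade against the account baseline. It is illustrative only; the 26% figure in the video comes from CTC's own aggregated 2019 client data, not from this code.

```python
# Sketch of the final comparison, continuing from `qualified` and `baseline`:
# average ROAS per letter grade versus the account baseline, as a % lift.
roas_by_grade = qualified.groupby("grade")["roas"].mean()
baseline_roas = baseline["roas"]

a_lift = (roas_by_grade.get("A", float("nan")) / baseline_roas - 1) * 100
print(roas_by_grade)
print(f"A-grade ROAS lift vs. baseline: {a_lift:.0f}%")
```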
9:33
This is a way that you can look at your individual ad creative performance and begin to think about which ads are working really well, based on the component parts of those three metrics. In the next module, we’re going to go through how to set up and analyze that measurement, and then how to apply it to creative iteration in a way
9:54
that you’re going to be able to take your ad creative and constantly improve it over time to generate more sales.