If you're like many digital media publishers and OTT broadcasters, you probably feel your fill rate could be doing a lot better.

And if you feel that way, chances are you're also feeling a bit lost as to why it's so low and how you can go about fixing the problem.

After all, your editorial teams are driving video views higher and higher all the time. So why is the money not following suit? What can be done?

Here at Watching That we have spent considerable time working with our ad-funded customers to learn and understand fill rates in great detail.

And from that experience we have developed a tried and tested fill rate improvement framework.

A framework that we've decided to share as this guide.

We hope you find some nuggets of insight that can be applied to your situation and give you a much needed boost.

Setting the right expectation

There is no such thing as a 100% fill rate.

Which leads to the inevitable question: what is a good fill rate?

Example of a fill rate visualised by Watching That

Answering that fully is a subject for another post but for now a good benchmark is:

  1. 60% - 70%: for inventory you've invested in selling directly (e.g. hired sales teams, regional presence, contextually relevant content, etc.)
  2. 40% and above: for inventory you farm out to third parties (e.g. international agents, exchanges, etc.)

As we progress through this guide keep these bands in mind. And obviously you need to know what your fill rate is to make this all work.

For that, we need to first define what we mean by Fill Rate.

A good definition of a Video Fill Rate

There are many definitions of video fill rate out there but we'll be using a very technically accurate one:

Fill Rate = impressions rendered / ad requests sent by the client where a play request is present.

There are two important points in that definition worth pulling out:

  • Impressions Rendered - this is an important distinction as it means we want to measure all the times the ad was actually rendered - so any failures with playing the creatives are taken into account.
  • Play Requests are Present - it's not enough just to count ad requests from the client to get a true fill rate calculation. We need to bring in viewer intent as well, to ensure there was a real chance to monetise and not just a phantom request like ad pre-fetching or bot-type traffic.

With this definition in place we will be getting a true measure of how many times a real intent to view is monetised and, more importantly, what we can do about those that fail.
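To make the calculation concrete, here's a minimal sketch in Python (the session records and field names are illustrative, not a prescribed schema):

```python
def fill_rate(sessions):
    """Fill rate per the definition above: impressions rendered divided by
    ad requests from the client where a play request was also present.
    Each session is a dict with illustrative keys."""
    eligible = [s for s in sessions if s["ad_requested"] and s["play_requested"]]
    if not eligible:
        return 0.0
    rendered = sum(1 for s in eligible if s["impression_rendered"])
    return rendered / len(eligible)

sessions = [
    {"play_requested": True,  "ad_requested": True, "impression_rendered": True},
    {"play_requested": True,  "ad_requested": True, "impression_rendered": False},
    {"play_requested": True,  "ad_requested": True, "impression_rendered": True},
    # phantom request (pre-fetch / bot): no play request, so it is excluded
    {"play_requested": False, "ad_requested": True, "impression_rendered": False},
]
print(f"{fill_rate(sessions):.1%}")  # 66.7%
```

Note how the excluded fourth session would have dragged the result down to 50% under a naive requests-received calculation, which is exactly the distortion this definition avoids.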

As a side note, the approach outlined here differs slightly for Server Side Ad Insertion setups, since there you need to tie the server and client data sets together, but the optimisation steps are generally the same.

Now that we're all on the same page, it's time to collect the right data.

Getting the data we need

Our objective is to use the data to get as complete a picture as possible so that patterns and connections can be spotted, investigated and understood for leverage.

As a minimum you'll need:

  1. Ad Requests: best captured at the client side so you can account for ad blocking; in a pinch you can get a pseudo number from many ad servers, but those are based on requests received and can carry up to a 30% margin of error;
  2. Play Requests: some commercial video platforms provide these, but again they are best captured from the client;
  3. Impressions Rendered: sometimes called the Start Ad event, this is the point in the timeline when the video creative starts playing back successfully;
  4. Ad Errors (by code): the best starting point is the VAST Error Spec published by the IAB, but you should aim higher by including platform-specific errors as well (we maintain a consolidated list here);
  5. Ad Info: things like media URLs, page URLs, ad units, key value pairs, VAST tag wrapper names and creative IDs - anything that describes the ad to be shown - are required to really zero in on cause.

Obviously the preferred solution is to get all this data in a connected, discrepancy-free way by collecting it from each viewing session at the same time and storing it as one cohesive, complete session (this is how we approach it at Watching That). But even pulling the data sets together into a spreadsheet is enough to begin shedding new light on the reasons for any drop-off in fill rate.
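If you're doing this stitching yourself, a rough sketch of combining the exports into per-session records might look like this (the file names, column names and session_id join key are assumptions about your own data, not a fixed schema; environmental data such as browser, country and device type would join the same way):

```python
import pandas as pd

# Illustrative exports - your own files, columns and join keys will differ
ad_requests   = pd.read_csv("ad_requests.csv")    # session_id, ad_tag_url, timestamp
play_requests = pd.read_csv("play_requests.csv")  # session_id, video_id, page_url
impressions   = pd.read_csv("impressions.csv")    # session_id, creative_id
ad_errors     = pd.read_csv("ad_errors.csv")      # session_id, error_code

# One row per viewing session, starting from play requests so viewer intent is present
sessions = (
    play_requests
    .merge(ad_requests, on="session_id", how="left", indicator="had_ad_request")
    .merge(impressions.assign(rendered=True), on="session_id", how="left")
    .merge(ad_errors, on="session_id", how="left")
)
sessions["had_ad_request"] = sessions["had_ad_request"] == "both"
sessions["rendered"] = sessions["rendered"].fillna(False).astype(bool)

# Fill rate per the earlier definition: rendered impressions over requests with viewer intent
eligible = sessions[sessions["had_ad_request"]].copy()
print("Fill rate:", eligible["rendered"].mean())
```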

Now that we've got all the data it’s time to get stuck in.

One more thing before we begin: Video Ad Zones

For any data analysis and investigation we need to create a lens. A way to see the data in a meaningful way, and from the right perspective.

For troubleshooting low video fill rates, this means looking at the data from the point of view of the video advertising sequence, and at each actor involved in making each ad appear and play back.

We’ve covered a lot of this in a technical post about the Video Ad Flow Map but, again for this guide, we just need to agree that there are 3 key zones in the sequence:

  1. Request Zone: this covers all the logic and steps that happen right up to the point the ad is requested
  2. Response Zone: this is all about the process and handoffs between the various actors in the supply chain that should be delivering an ad per the request
  3. Playback Zone: here we are looking at the creative playback. An ad has been delivered but something has gone wrong with rendering and playing it back to the viewer.

Zoning everything in this way really helps to focus in on potential root causes. For example:

  1. Issues arising in the Request Zone are most likely due to a coding issue. A new change to the code base perhaps? Or a newly deployed ad tag where a key value pair is not being set properly;
  2. The Response Zone typically has causes at the policy and demand level. Trying to sell the ad placement for too much, or the video content is not contextually relevant or brand safe, or the audience profile is not what the campaign is looking for.
  3. And the Playback Zone is usually an issue with the format of the video ad not being compatible with the viewer’s playback device, or the viewer being on a slow connection where the media file is too large to be delivered properly.

The trick is to map the Errors (By Code) data set that we've collected to one of these Zones.

That way we start to link a somewhat abstract number and its generalist description to actual explicit meaning and, in turn, insightful actions.

An example of how mapping errors to ad zones can really focus the remedial action

To save you the hard work of doing this mapping, our Complete Video Ad Error Guide has been updated to map individual error codes to their applicable Zone.
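To give a flavour of what that mapping looks like in practice, here is a sketch with a handful of IAB VAST error codes. The zone assignments below are indicative only; take the authoritative mapping from the error guide itself.

```python
# Indicative mapping of a few IAB VAST error codes to the three Zones.
# Use the Complete Video Ad Error Guide for the full, authoritative mapping.
ERROR_ZONES = {
    200: "Request",   # trafficking error - the player received an ad type it wasn't expecting
    100: "Response",  # XML parsing error in the VAST response
    301: "Response",  # timeout fetching a VAST URI referenced by a wrapper
    303: "Response",  # no ads returned after one or more wrappers
    401: "Playback",  # media file not found at the URI
    402: "Playback",  # timeout fetching the media file
    405: "Playback",  # problem displaying the media file
}

def zone_for(error_code):
    return ERROR_ZONES.get(error_code, "Unmapped")
```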

OK, so if you've followed along this far, here's what you now have:

  1. A combined rich dataset from all the key systems and environments involved in delivering video advertising;
  2. A measure of your failed attempts by Zones mapped from the Error Codes;
  3. An understanding of probable cause by Zone.

With probable cause linked to metrics that indicate severity and impact you can pick and choose what to troubleshoot and, more importantly, who's required from the team to do this.

For example if we are dealing with Request Zone issues then let's get our devs involved. Or, if the issue presents in the Response Zone we need to get our trafficking and yield teams involved.

The good news is the troubleshooting process is the same for each Zone - it's just who you need involved that varies.
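Continuing the earlier sketch, a quick way to see that severity and impact by Zone (so you know which team to pull in first) could be:

```python
# Failed eligible sessions, bucketed by Zone via the error-code mapping above
failed = eligible[~eligible["rendered"]].copy()
failed["zone"] = failed["error_code"].map(zone_for)

impact = failed["zone"].value_counts()
print(impact)  # the largest Zone is where to focus, and tells you who to involve
```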

OK, let's get started.

Step #1: Get to know your normal

Nothing in life is perfect and everything has its normal.

With so many factors that go into video advertising effectiveness, we really need to know what normal is and what your expectations should be.

The difference between seeing your fill rate by the hour vs by the day

The best way to do this is to record and visualise your fill rate (using the data and calculation listed above) for up to 7 days.

The emerging shape will give a statement of normal.

This is now your starting point. You know you don't want to go lower than this, and we can now set achievable goals for improvement.
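Working from the combined data set sketched earlier, an hourly view over the last week can be as simple as this (the timestamp column is an assumption; the plot call needs matplotlib):

```python
# Hourly fill rate over the last 7 days - this shape is your "normal"
eligible["timestamp"] = pd.to_datetime(eligible["timestamp"])
hourly = (
    eligible.set_index("timestamp")
    .resample("1h")["rendered"]
    .mean()  # fraction of eligible requests that rendered in each hour
)
hourly = hourly.loc[hourly.index.max() - pd.Timedelta(days=7):]
hourly.plot(title="Hourly fill rate")  # requires matplotlib
```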

Step #2: Explain the trouble in as much detail as you can

Troubleshooting is effectively trying to run cause and effect in reverse.

We need to understand the effects well enough so we can work our way back through the chain.

The challenge is that effects tend to be generalised outcomes - errors linked to arbitrary codes and numbers with generalised descriptions.

Another way to think about it: it's like trying to use Google Earth to find a specific house, but you're starting at the global view knowing only which continent it's on.

You need to keep zooming in until you reach the street-level view, which is where cause typically lives.

The way we do this at Watching That is to start with Error Codes and how they map to Zones as described above.

This is like using landmark names and descriptions in Google Earth to allow us to focus on the right country.

We then use contextual data points around browser, territory, device type and more to look for areas of clustering of these errors. This gets us to city level views.

Finally, we pull out specific sessions that live in that area and inspect their individual parameters - ad tags, page URLs, media URLs and more - to find the commonality that gives us a hyper-detailed, street-level description of the problem.
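In code, that country-to-city-to-street progression might look something like this sketch (browser, country and device_type are assumed to be part of the environmental data you collected):

```python
# City level: where do the failures cluster?
clusters = (
    failed.groupby(["error_code", "browser", "country", "device_type"])
    .size()
    .sort_values(ascending=False)
)
print(clusters.head(10))

# Street level: pull the individual sessions behind the biggest cluster
code, browser, country, device = clusters.index[0]
suspects = failed[
    (failed["error_code"] == code)
    & (failed["browser"] == browser)
    & (failed["country"] == country)
    & (failed["device_type"] == device)
]
print(suspects[["session_id", "ad_tag_url", "page_url", "creative_id"]].head())
```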

Example of data points needed to explain the origin of an error

When you're at this point you should now have about 100 offending sessions (out of millions) in a table or list that are described across parameters including:

  • Ad tags including key value pairs
  • VAST data including wrappers, ad ids and ad systems involved
  • Quality data like viewability, player size, latency timings
  • Media URL if the ad was successfully loaded
  • Key IDs from participating systems: Player ID, Video ID, Ad Server IDs, SSP IDs
  • Environmental data including browser name, device type, page urls, page tags, geo data, time of day

Step #3: Finding the cause

Look for clustering of data across metrics and dimensions

With the detailed list of offending ad sessions we developed in the previous step we are now ready to look for the cause of the trouble.

When troubleshooting, keep to the mantra: "if you see hoof prints, think horses, not zebras".

99.9% of the time, errors are caused by human activity that has led to a change in the status quo.

So first and foremost, look for any changes in the coding of the ad stack (a quick scripted check for these follows the list):

  • Are there any typos in the ad tags and key value pairs?
  • Is any data missing from the ad tags? Things like consent strings not being set properly? Or macros not being filled in?
  • Are there any security issues with tags being loaded with HTTP and not HTTPS?
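These first checks are easy to script. Here's a sketch that scans the ad tag URLs from your offending sessions for unfilled macros, empty key value pairs and insecure tags (the macro patterns shown are common conventions, not an exhaustive list):

```python
import re
from urllib.parse import urlparse, parse_qs

# Common macro conventions left unfilled, e.g. [TIMESTAMP], %%CACHEBUSTER%%, ${GDPR}
UNFILLED_MACRO = re.compile(r"\[[A-Z_]+\]|%%[A-Z_]+%%|\$\{[A-Z_]+\}")

def audit_ad_tag(url):
    issues = []
    parsed = urlparse(url)
    if parsed.scheme == "http":
        issues.append("insecure http:// tag")
    macros = UNFILLED_MACRO.findall(url)
    if macros:
        issues.append("unfilled macro(s): " + ", ".join(macros))
    params = parse_qs(parsed.query, keep_blank_values=True)
    empty = [k for k, v in params.items() if v == [""]]
    if empty:
        issues.append("empty key value pairs: " + ", ".join(empty))
    return issues

for url in suspects["ad_tag_url"].dropna().unique():
    problems = audit_ad_tag(url)
    if problems:
        print(url, "->", problems)
```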

Next look to the dynamic elements of the page:

  • On the pages listed in our offending set of ad sessions, has any new code been released recently? Player configuration changes? New versions of SDKs, etc.?
  • Are there any timing or page latency issues?
  • Is there any viewability issue?
  • Was the browser not an active tab?

Next look to the ad information:

  • Are there any new parties in the wrapper chains?
  • Are there any parties not in the wrapper chains that should be there?
  • Is there a common Creative ID or Ad ID?

Next it's time for the video content details:

  • Are there any common tags or specific video ids that are recurring in the list?
  • Have there been any changes to the player configuration?

Now the creative details (if it got this far):

  • Is the media URL properly formed? Any typos? Macros not filled in?
  • Is there a format compatibility issue?
  • Is the file being served too big for the device and network conditions of the session?

As you go along this list you're looking for commonality, clusters of data that form patterns.
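One simple way to surface that commonality is to check how concentrated the offending set is around a single value in each dimension (column names are illustrative and depend on what you collected):

```python
# Flag any dimension where one value dominates the offending sessions
dimensions = ["ad_system", "creative_id", "video_id", "browser", "device_type", "page_url"]

for dim in dimensions:
    if dim not in suspects.columns:
        continue
    shares = suspects[dim].value_counts(normalize=True)
    if not shares.empty and shares.iloc[0] > 0.8:  # 80%+ share is a strong pattern
        print(f"{dim}: '{shares.index[0]}' appears in {shares.iloc[0]:.0%} of offending sessions")
```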

At such a detailed level it will become quite clear what has changed and when.

That will be the smoking gun you're looking for.

But wait, what if nothing is obviously wrong?

So you've done all of the above and still nothing has popped out the other end.

What do you do now?

Well don't panic.

First, you might need to find more data. Have you covered everything in the list above? If you still have blind spots then try to remove those, as it's likely the cause is in there.

The more complete the data set you have at your disposal the more effective you will be at finding the cause.

Secondly, and going right back to the beginning of this guide, it very well might be that nothing is technically wrong at all.

If you've completed all the steps with a high-granularity data set and found nothing wrong, then you have probably proven everything is working as expected and you simply don't have enough buyers for the inventory you're selling.

Enticing buyers to splash out more on your inventory will be addressed in another guide, but for now you can rest assured your stack is functioning properly and you're armed with the know-how and tools for the next time trouble pops up.
