The ad industry has always been consumed with the latest trends. This should be no surprise, given that marketers and their agencies spend the better part of their days trying to create them. But nothing in advertising has generated more buzz in recent months than programmatic buying. Buying ad inventory more efficiently by applying rules to technology-enabled, automated purchases has marketers salivating. And they're putting their money where their mouths are, with some of the world's largest advertisers reportedly planning to shift as much as 75% of their digital spend to programmatic buying over the next year.
Amid all the euphoria, an important question has emerged: how to measure the effectiveness of programmatically bought brand advertising. This is fairly straightforward for direct response advertising: the same activity-based measures employed for years (e.g., click-throughs, views) still apply, and programmatic platforms and trading desks have done a fantastic job of seamlessly integrating them into a marketer's workflow. But brand advertising is more complicated, as the traditional measures of effectiveness (campaign reach, or who saw an ad; resonance, or how effectively they were engaged; and reaction, or what they did in response) require more manual involvement in establishing and interpreting the results, all of which can counteract the efficiencies that programmatic buying offers in the first place.
The temptation is strong for brand advertisers engaging in programmatic buying to let the algorithm do the work and shift to a more periodic, reactive check of effectiveness measures. After all, if the platforms' and trading desks' algorithms enable a marketer to reach the right audience going into the buying decision, how critical is post-buy measurement? To draw an analogy, if you were given the millionth self-driving car, and none of the earlier ones had crashed, would you really stare in the rearview mirror the whole time to assess how well it's working as it whisks you to your destination?
It's a beguiling thought for marketers, because programmatic buying offers efficiencies on both sides of the equation: you increase the efficiency of your media buying, and you don't have to invest as much in confirming or disconfirming its efficacy after the fact. But it's flawed thinking, and it can jeopardize the success of a marketer's campaign. Sound, independent measurement focused on core brand objectives matters in a programmatic buying environment every bit as much as in a non-programmatic one.
This is true for several reasons.
First, even if one agrees, for the reasons advanced above, that advanced audience identification makes the reach element of post-buy measurement somewhat duplicative, programmatic buying does nothing to confirm the "resonance" and "reaction" aspects of a campaign. This is where the difference between brand and direct response advertising really bites. Do those exposed to the ad remember it? Even if they remember the ad itself, do they remember the message? Is it influencing them to think about the advertised product or brand in a different way? Are they ultimately more likely to purchase? It may well be that they fit the profile of someone in the market for the product, but did they buy it once exposed? Ignoring questions of resonance and reaction is like sending out a (virtual) message in a bottle. Perhaps you are casting it off in the direction of your intended recipient, but you have no way of knowing what effect your message is actually having.
Second, it's far from clear that the claims made about the high efficacy of programmatic buying in achieving reach objectives are uniformly reliable. Because of its early promise, programmatic buying, like many hot advertising trends before it, has attracted a tidal wave of players, from technology companies to agencies to ad networks to publishers looking for an edge. This is an unavoidable consequence of the fact that programmatic models are built not just to direct ads algorithmically, but to do so at scale, and scaling inevitably sacrifices some degree of accuracy. Not surprisingly, then, analysis of Online Campaign Ratings results (OCR is a post-transaction, in-flight measure of the campaign audience by demographic group) reveals that some programmatically bought inventory yields desired-audience delivery percentages below industry norms. In this case, measurement becomes the yardstick that determines whether the model is accurate enough for the price you are paying for it to be worthwhile.
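To make the notion of desired-audience delivery concrete, here is a minimal sketch of how such a percentage is computed from impression-level data. The demographic labels, the sample campaign, and the metric name are purely illustrative assumptions; this is not how OCR or any particular vendor actually ingests data.

```python
# Illustrative only: computing a desired-audience delivery percentage,
# the kind of post-transaction metric an in-flight measure reports.
# The impression records and demographic labels below are hypothetical.

def on_target_percentage(impressions, target_demo):
    """Share of impressions (in %) that reached the desired demographic group."""
    if not impressions:
        return 0.0
    hits = sum(1 for demo in impressions if demo == target_demo)
    return 100.0 * hits / len(impressions)

# Each impression is tagged with the demographic group that actually saw it.
campaign = ["F25-34", "M35-44", "F25-34", "F25-34", "M18-24"]
pct = on_target_percentage(campaign, "F25-34")
print(f"Desired-audience delivery: {pct:.0f}%")  # 3 of 5 impressions on target
```

A buyer would compare this figure against an industry norm to decide whether the premium paid for algorithmic targeting is earning its keep.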
Third, there is often a complex chain of companies involved in aggregating ad inventory and audience data, which makes it difficult for marketers and others to identify weak links in order to improve programmatic buying performance. We have seen many platforms and trading desks use OCR to run diagnostics assessing their different providers and vendors, and then take action to optimize performance as a result. It turns out that measurement is a key enabler of improving programmatic buying, not something made irrelevant by it.
Fourth, the automated nature of programmatic buying makes transparency and accountability critical. In the cult movie classic "Office Space," a band of slackers "earns" a small fortune by writing a piece of code that digitally siphons fractions of a penny from each transaction processed by Initech, something only possible in the world of computers. There are many rumors of this scheme having been attempted in the real world. While none has been verified, the idea does call out two important things about rules-based, automated systems: first, left unchecked, anything that is "off" in an algorithmic protocol will generally be repeated over and over, turning small problems into big ones very quickly; second, there will always be some players incentivized to game the system, and technical complexity is their ally. Analysis from Integral Ad Science, a pioneer in online advertising verification, shows that, today, fraud is more prevalent and viewability is worse at an industry level in the programmatic buying space than in traditional buying (although performance does vary significantly by provider).
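The first of those two points is simple arithmetic, and worth spelling out: an error too small to notice on any single automated transaction compounds into a serious one at programmatic volume. The per-transaction figure and daily volume below are hypothetical, chosen only to illustrate the compounding.

```python
# Illustrative arithmetic: a tiny per-transaction discrepancy, left unchecked
# in an automated system, compounds quickly. All figures are hypothetical.

per_transaction_loss = 0.01      # one cent skimmed or misattributed per buy
transactions_per_day = 500_000   # an assumed volume for an automated pipeline

daily = per_transaction_loss * transactions_per_day
yearly = daily * 365
print(f"Daily loss:  ${daily:,.2f}")
print(f"Yearly loss: ${yearly:,.2f}")
```

At a cent per transaction, the daily leak is $5,000 and the annualized leak approaches $2 million, which is why continuous, independent checks matter more, not less, once the buying is automated.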
For all these reasons, independent third-party audience measurement is as crucial for campaigns driven by programmatic buying as it is for the traditional kind.
What would measurement mean in the programmatic buying environment? Obviously, programmatic buying is an entirely different process from traditional ad buying. As such, simply importing traditional measurement processes into programmatic buying will not succeed. What is needed is measurement that fits the workflow of this new process.
In addition, however, actionable measurement in a programmatic environment must:
Be more accurate and stable than the modeling used for the programmatic buy in the first place. All measurement systems have a margin of error. For measurement to be useful to marketers, it must therefore be a more reliable source of information than the pre-transaction data supplied by the programmatic model itself.
Be fast enough to allow for action to be taken over the course of a campaign. It is not sufficient only to draw post-campaign conclusions from measurement. In-flight reporting and actionability are paramount to improving the return on investment (ROI) of a campaign. This means minimal lead time in beginning to report on campaign performance, and continuous reporting thereafter, if marketers are to take action and see the results of their interventions.
Be sufficiently granular to identify specific areas in which to take action. It's not enough to report on a campaign's performance at the broadest level. Such reporting might lead to useful performance discussions between media buyers and sellers, but it won't yield dividends for a campaign in flight. Reporting that shows "breaks" for different audiences, placements, and creatives is needed to allow marketers to pinpoint what is and is not working.
Address the fundamental objectives marketers are trying to achieve. For brand advertising, measures that help assess the marketing ROI in terms of a campaign's reach, resonance, and reaction must be central to all measurement efforts in programmatic buying, just as elsewhere.
Be easily accessed. Having a snazzy user interface that allows manual review of measurement reporting is not enough. For measurement to scale across campaigns and feed results back into individual campaigns for optimization, a reliable and robust API is imperative.
Be independent and unbiased. As noted above, the automated nature of programmatic means that gaming the system is a constant concern. No doubt the vast majority of players act above board. The only way to ensure that all do so is through independent, third-party measurement.
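Taken together, these requirements describe a feedback loop: granular, in-flight reports arrive through an API and are scanned automatically for breaks that need intervention. The sketch below illustrates that loop; the report structure, field names, and the 70% threshold are hypothetical, not any real measurement vendor's API.

```python
# A minimal sketch of in-flight, granular measurement feeding optimization.
# The report rows, field names, and 70% threshold are assumptions for
# illustration; no actual vendor's report format is being described.

def flag_underperformers(report, threshold=70.0):
    """Return the audience/placement/creative breaks whose on-target
    delivery falls below the threshold, so a buyer can act mid-flight."""
    return [row for row in report if row["on_target_pct"] < threshold]

# One row per break: the granularity the requirements above call for.
report = [
    {"audience": "F25-34", "placement": "site-a", "creative": "v1", "on_target_pct": 82.0},
    {"audience": "F25-34", "placement": "site-b", "creative": "v1", "on_target_pct": 54.0},
    {"audience": "M35-44", "placement": "site-a", "creative": "v2", "on_target_pct": 71.5},
]

for row in flag_underperformers(report):
    print(f"Shift spend away from {row['placement']} / creative {row['creative']}")
```

In practice a script like this would pull each row from the measurement provider's API on a continuous schedule rather than from a hard-coded list, which is exactly why programmatic access, speed, and granularity appear together in the list above.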
Measurement seeks to enhance marketing efforts, not impede them. Reliable measurement will enable programmatic players to assess their current inputs and models and to build better ones, thus improving the fundamental asset that lies at the heart of any programmatic strategy. Measurement will also help the industry evaluate how effective programmatic models, which are now beginning to focus on driving brand engagement as well as direct response, actually are at driving that engagement. (The possibility that programmatic buying results in lower brand engagement is currently a significant hurdle to more of the inventory associated with premium video content "going programmatic.") Ultimately, reliable measurement can help programmatic buying achieve its full potential, and so help those who pursue it build significant advantages.



