Is lack of knowledge castrating advertising research?

Just got back from APG Canada’s provocatively titled event: “Is research castrating great work?”  And provoke it did: it prompted us to write our own polemic on advertising research, a subject on which we have a unique perspective, given our collective experience as clients, agency folk AND advertising researchers! 

The question of how best to research advertising has long been debated,
not least because different stakeholders have very different
perspectives on what matters.

From our time on the client side (where we footed the bills and defended
our expenditures to those above us), we know that advertising ROI is
critical.  If we were going to invest huge sums in an ad, we had to have
some evidence that said ad was going to drive sales.  

From our time on the agency side (where we worked with art directors and copywriters to nurture insights brought to life through creative ideas) we know that great ads can be great in any number of ways: from a great story to a great production; from a compelling dramatization of a product benefit, to an irresistibly cool tone of voice; from tugging at the heartstrings, to generating guffaws.  Not all of these modes of greatness lend themselves to advertising research, particularly quantitative research, though sometimes qualitative research struggles with them too. 

After much vilification for their “black box,” pass/fail approaches, quantitative researchers have evolved to provide a more nuanced approach to creative evaluation, including more diagnostics and more emphasis on measuring an ad’s emotional impact (since we now all know consumers make their decisions with emotion first and reason second).  But quantitative research companies know that in order to satisfy senior clients, they must deploy supposedly “proven” measures like norms and persuasion scores, despite the fact that such measures have increasingly been called into question.

Qualitative researchers are preferred by agencies, because qualitative research issues less absolutist decrees about whether an ad will succeed or fail, and can provide guidance for how to strengthen an idea that needs work, rather than killing it outright.  However, qualitative researchers may be less informed than quant researchers about “how advertising really works”: even the most skillful moderator and subtle interpreter of focus-group data may not have spent much time reading up on the real drivers of advertising success, and tends to rely on a formulaic list of assessment questions: message playback, reported relevance, etc.

Here’s the thing: there is actually a vast body of work on how advertising leads consumers to choose brands, going back YEARS.  But VERY little of this knowledge actually makes its way into advertising research.   There are too many stakeholders who THINK they know what matters when evaluating advertising, but who have not really studied it, and are not open to learning about it.

So, in the hopes of opening a few minds, we’d like to share a little bit of that body of knowledge.

1.     Not enough research money is spent in the so-called “exploratory” or foundational stages of consumer research to understand what motivates consumers, how they currently use products and view brands, and what their unmet needs are.  Too often, the first time clients are hearing from their customers is when the product has been developed and the ad has been written.   Getting to know customers earlier on helps develop more relevant brand strategies and messaging.

2.     Message playback and purchase intent are not good measures of an ad’s ability to perform because consumers do not necessarily process ads consciously in the real world.  They form more general impressions of a brand through its advertising which are summoned up at the point of purchase.

3.     The most predictive measures of an ad’s in-market success are likeability, brand link and more favourable general impressions of the brand.  The most predictive measure of an ad’s failure is reported confusion (people do not like what they don’t understand).

4.     Story-based ads tend to research best.  Ads relying on executional factors such as casting, cinematography etc. tend not to test as well.  This does NOT mean they won’t make great ads!

5.     There is no one-size-fits-all approach to advertising evaluation.  Different ads work differently with different consumers. 

We humbly suggest that there is work to be done by ALL stakeholders in terms of educating themselves AND educating those around them.   Here are a few resources we recommend:

Copy Testing: Practice and Best Practice, by Tim Ambler and Scott Goldstein

World Advertising Research Center (WARC): a global database of case studies and articles on advertising best practice.

For a more informed perspective on how to research your advertising, contact us.