Advertising Research

Better Creative Development Research: A Conversation with Legendary Planner Adrian Langford

Sarah Jane Johnson

At Athena, we’ve been thinking a lot about what needs to be done to make creative development research more valuable, and more able to reflect the latest thinking about how advertising works and how people process information and make decisions.   So, we decided to pick the brains of the legendary Adrian Langford, Global Planning Director at JWT London.  Adrian has helped develop some of the most effective campaigns in the world, including Smash Martians, Terry’s Chocolate Orange, Kodak and Berocca.  And he has certainly seen his share of creative development research.  Not surprisingly, Adrian had some strong opinions about the state of creative development research today.  Here’s what he had to say…

Adrian Langford

Sarah:  At Athena we believe that having a solid understanding of how advertising works is critical to doing good creative development research.  When it comes to understanding advertising–what it can do in terms of brand behavior and what consumers do with advertising–what books would you say are the ones that you feel give the best account?

Adrian: You may have read–if you haven’t, it’s a great pleasure–Paul Feldwick’s last book, The Anatomy of Humbug.  I would go with that kind of ecumenical viewpoint that there is no single model, and a lot of the tensions between agencies and clients arise when either side is trying to work dogmatically from a single model.  There ain’t one.  Different things work in different contexts.  The important thing is that the agency, the client, and the researcher have agreed ahead of the research, what do we think the model is in this case, what’s our shared understanding of what the model is so we can be more alert to whether it’s working in the way that we feel it is?

Sarah:   What is the ideal role for qual research in the creative development process?

Adrian:  I would say the ideal is that it’s diagnostic.  And when I say “diagnostic,” what I mean is that it identifies a problem and it’s then able to point at a solution.  It’s really understanding where the problem lies, and the problem can lie in many areas. There’s strategy, there’s the creative idea, there’s execution, there’s other stuff.  It’s identifying where the problem is so you can point at the solution in a way that motivates the creative people who have to carry it out. That’s very different from saying we only scored 25% interest in the eighth decile of the execution, so we need to turn the sound up there and say something exciting.

Sarah: It might be a bit of a funny word to use for qual, but what sorts of measures or diagnostics would you think are the most important ones to have in a qual discussion guide?

Adrian:  I wouldn’t have a guide at all.  To be honest, I think it should only be an aide-mémoire for the moderator/researcher.  The catastrophic decline I’ve noticed in the way qual is carried out in the last 10-15 years, a lot of that is down to 10-page discussion guides and people using them as scripts and people not paying attention and listening to what’s being said to them.  So, I wouldn’t say it’s about having things in the guide, but I would say it’s about the researcher having in her head a clear concept and a framework for putting what they’re listening to in context.

Sarah:  When you say there’s a clear framework, are there things that you would say are standard things that every creative concept needs to address?  Because you know obviously in copy testing there are scores for breakthrough etc.  Would you say that the same sorts of things apply in qualitative, or is it a different model?

Adrian:  The framework is always strategy, creative idea, execution.  A lot has changed, but that’s still a great framework for analyzing certainly a lot of traditional creative.  It’s keeping in mind strategy and creative idea so that when people are responding you either can analyze subsequently what it is that has driven their response, or if you’re thinking well on your feet and you’re kind of “in the zone” that you’re able to immediately probe or encourage more elaboration or discussion, nudging it gently toward the area where you feel they are having a problem.  When it comes to “measures”, assessing how well a concept is communicating is often down much more to associations than messages. Quant is a very blunt instrument in terms of asking people “what’s it trying to say?”, and checking this word for word against the brief. By comparison qual is more sensitive, unstructured and holistic, so it’s better at eliciting associations beyond direct message, and can judge whether these associations are relevant/motivating.  And of course, a good moderator is always assessing more subtle things like the energy and the temperature in the room, the emotional energy that’s being generated.

Sarah: When it comes to creative qual today, what are the different ways you’re seeing it executed?  Or is there sort of a standard way that creative qual is being done?

Adrian: It’s being done terribly, really terribly. I’m afraid, having seen dozens of different suppliers over, say, the past six or seven years when I’ve been back in agencies, almost all of them don’t have a clue about how to do creative development research.

Sarah:  So, what are they not doing that they should be, and what are they doing that they should not be?

Adrian:  They don’t have the framework.  They don’t understand what strategy is.  They can’t separate creative ideas from creative execution.  So they will cling on to the sort of laundry list of likes and dislikes and come down on the shorter list or the longer list if it’s likes.  They’re just not approaching it with any kind of understanding of what they’re meant to be finding out. There are some exceptions, but unfortunately they’re few and far between.

Sarah: And the honorable exceptions, what do they do?

Adrian: They would be better at identifying something like the creative idea at the heart of it and recognizing whether or not any problems they encounter in the response are driven more by an easily fixed executional problem or something fundamental to the idea so they’re able to tailor their recommendations with a sort of sensitivity.  They’re able to deal with something which is quite fragile and not be heavy handed with it.

Sarah: So what I’m hearing is that it really has a lot to do with the skill of the moderator in terms of how to probe and suss things out versus following a particular sort of approach, a specific checklist of questions that need to be asked.

Adrian:  Absolutely right.  They will clutch the guide like a kind of security blanket and they’ll want to plow through it.  Because of that and because it’s usually very long and involved, they will be desperate to sort of work their way through it like an exam paper.  So you can be sitting behind the glass and hear someone saying something which is incredibly relevant or incredibly revealing, and they’ll be charging on, not able to recognize when things are being said which are of significance.

Sarah: Right.  I do know exactly what that’s like because I have had many guides that I have had to plow through, so they have my sympathy.

Adrian:  People are given a huge amount of writing to do.  It really does become like an examination.  I’ve seen plenty of research where people spend much more time writing in a tense, stilted atmosphere than they ever do discussing, and it’s only when the poor moderator leaves the room to go back and ask a dozen clients watching “would you like to ask something?” that the respondents actually say something spontaneous and interesting and honest.

Sarah: Clearly you are not feeling too positive about qual today.  What is your sense of how clients feel about qual ad research these days?

Adrian:  Actually there’s a fair amount of disillusion with qual is my feeling, but that’s sort of a self-fulfilling prophecy.  If you create this very mechanical, semi-quant sort of process around qual and you’re very prescriptive about it, then you’re going to get very superficial responses back.  So there is a dissatisfaction with “we’re hearing the same old stuff,” yes.

Sarah: One hypothesis we have at Athena is that there has been all this thinking about neuroscience and how consumers process things with their emotions versus rationality, and it’s actually the big quant pretesting houses that, if not a better job, have at least taken some steps to try to address that in their methods, whereas qual companies tend to be not as innovative.  For example, Robert Heath would say that message playback is a completely irrelevant measure, and yet that’s one of the first things most qual researchers ask about.   I’m just wondering what your thoughts are or what you’ve seen.

Adrian:  Yes, that is the most crucial skill: helping people surface and articulate more deeply felt responses. Without that you are doomed to gathering the polite, stereotypical and superficial responses that fuel much of the disillusion with qual.  In terms of the extent to which the quant people have taken emotional response on board, often they just bolt it onto their normal formula for doing creative research.  They can’t change that formula much and they can’t customize it because the great money is in the fact that it’s sort of off-the-peg and it’s something you can churn out.  But they will bolt on a sort of emotional thing at the end of it.  Where I think it’s being used best is by quant agencies who have not been doing so much creative development, but more brand positioning and brand imagery development, and they’ve been able to apply that technology of what I call implicit response: when people respond to something quite fast, it means that response is kind of much truer and much more strongly associated.  So it’s that kind of work, rather than creative development work, where I think I’ve seen best use made of it.  No one qualitatively has turned it into a tool.  Someone should be attempting to do it, perhaps.  The trouble is, it’s very hard to turn it into a tool because it’s down to the sensitivity of the moderator.  It’s something that good qual people have always done.  It just hasn’t been a sort of a fancy tool.  It’s being alert to the little slips and comments and tone of voice and posture, if you like, that show that people have responded to something either very positively emotionally or very negatively emotionally.  It’s just sensitivity.

Sarah: Whether you can turn it into a specific approach that goes beyond just doing “good qual” is precisely what Athena has been pondering.

Adrian:  It would be the Holy Grail if you could find a way of doing it. That’s the problem, that the best qual looks so damned simple, but that’s deceptive.  For people who can do it well, it’s such a high-level skill, and it combines so many almost contradictory talents.  It’s tough to distill it.

Key Take-aways for Marketers:

  1. The best way to evaluate ad ideas is to identify and agree on what advertising model the creative idea is based on. Don’t assess an ad that’s trying to be emotional in terms of rational measures like purchase intent.
  2. Creative Development researchers need to have a deep understanding of the differences between strategy, creative idea and execution, and to be able to pinpoint where exactly a problem might lie so that an executional issue doesn’t rule out a whole campaign direction.
  3. Qualitative research is valuable precisely because it allows for on-the-spot probing and watching and listening for subtle nuances in discussion. When qual becomes too structured and formulaic, it loses the qualities that make it most valuable.
  4. Exploring the associations that concepts evoke is a better way to assess the impact of an idea vs. using message playback as a proof of effective communication.
  5. The opportunity for moderators is to build on qual’s innate ability to explore the subtleties of consumer responses to advertising with a deeper focus on uncovering the subconscious, emotional responses that respondents may not be able to put into words.