I came across an article on LinkedIn recently which sent me into a spin. Admittedly, I focused on one small part, which may have been beside the point the author was making.
The article discusses blurring the lines between Quant (large amounts of structured data) and Qual (small amounts of unstructured data), yet I heard something different: An offer for clients to get answers even more quickly and inexpensively by replacing slow, expensive Qual studies with a quick-turnaround single question, where text analytics can be used to efficiently analyze the answers.
My focus zoomed in completely on one phrase: "single question". One question – really?
For me, the heart of good qualitative research is dialog. Not taking answers at face value, but exploring them with respondents – which often unearths real surprises!
Our industry is developing rich ways of bringing dialog to other platforms through smartphone diaries, online ethnography and insight communities, to name a few. The key is creating the opportunity to follow up on answers in a personalized way. Not just asking “please tell us why” – but following up personally, e.g. “Hi Jenny, when you said xyz… I’m not sure if I understood what you meant. Can you please describe to me the last time that happened?”
Here’s an example: We can ask women why they don’t use a certain hair care product.
Many answer “I don’t have the time for this in my routine”. But we don’t stop there. We use dialog to find out where this impression of “no time” comes from. It could be that using the product is boring, and therefore feels like time wasted. Or that the results she gets aren’t striking enough to justify the time spent. Or that the process of using it is laborious, and something she doesn’t enjoy. Through discussion we learn it’s not a time issue, after all – other things are blocking her.
Or another example: We can ask what’s important in the choice of what people eat for breakfast.
Many answer “it has to be healthy”. But what is healthy? For some, it may mean low-calorie & low-fat processed breakfast cereal from the supermarket. For others, relatively high-calorie & high-fat freshly cooked bacon and eggs. Or locally-grown produce from the local farmers’ market. Or gluten-free. Or dairy-free… You get my point.
Without a follow-up discussion, we can only guess what “healthy” means, and we risk guessing wrong.
I don’t want to prematurely dismiss innovative ideas such as text analytics. Rather, I’d like to put in a plea for not losing sight of the value of dialog – and challenge us as an industry to keep personalized, interactive question-asking in the picture.
Who hasn’t been there: The moderator enters the viewing room following the focus group to be met by a chorus of “which concept won?” It’s hard to dampen the enthusiasm by replying “well, it depends…”
In our culture of fast and decisive decision-making, we are under increasing pressure to deliver quick, actionable answers. However, qualitative research isn’t designed to deliver a yes-or-no, A-or-B answer, but rather to help us gain the insights needed to determine the right way forward.
Here are 3 steps – before, during and after fieldwork – that can help research teams do just that.
1. Before Fieldwork: Clarify the questions behind the questions
Research teams often come to me with a list of questions they’d like to ask potential customers. I always ask for detailed background information: Why do they want to know that? Which decision depends on it? What else do they already know? And most importantly, what will they do with the answer? This is critical to be able to adjust probing in interviews “on the fly”, in order to avoid wasting time exploring what the team already knows and missing potential surprises which may influence broader decision-making.
I also encourage research teams to reach out to their internal customers during this phase. Product engineers can shed light on which designs have already been tried and rejected, or the advertising team can explain guidelines which govern the choice of color schemes or the placement of taglines.
Example: Concept evaluation for a new medical treatment - knowing limitations
Prior to the interviews, I spent time with the product team discussing details of the treatment, how it compared to others on the market and results of clinical testing. During the interviews, patients expressed distrust in a broad claim, stated without numbers. I knew the client was not in a position to state those numbers – so rather than returning with the finding “patients would like to see numbers to back this up”, I explored which elements could potentially strengthen the broad claim, which features or benefits could be more compelling in its place, or if it would be better to leave the claim off entirely rather than state it in broad terms.
2. During Fieldwork: Ensure we’re asking the right questions
During a study, teams often become very focused on specific details and lose sight of what else may be influencing customers’ behavior. This increases the risk of only getting answers to those specific questions – but potentially missing factors which are even more important!
During interviews, I always start new topics with general, non-specific questions before drilling down into more detail – to allow room for discovering new, unexpected factors.
Example: B2B Product Satisfaction – leaving room in the discussion guide for surprises
During the interviews, I asked respondents to walk us through how they used the product – before going through a detailed list of features. We were surprised to learn dissatisfaction wasn’t driven by any specific feature, but by the fact that many users disregarded the instructions and didn’t use the product correctly! Had we only focused on the feature list, we would have missed this huge insight – and the recommendation to revise the product based on how professionals actually used it, which was very different from how it was designed to be used.
3. After Fieldwork: Weigh potential answers in the context of implementation
Across qualitative interviews, different respondents express different preferences or behaviors. We also see polarizing reactions – some people completely love the very thing that others absolutely hate. That’s why we can rarely name a clear “winner” – or a clear “right answer” – immediately following fieldwork.
It’s important to evaluate findings in the context of the overall strategy, in order to determine which feedback to weight as more relevant. Taking a pragmatic approach can help teams choose which features to implement or which creative to develop into the final advertising campaign.
Example: Usability testing – evaluating responses in the context of the intended target audience
I recently helped an online retailer choose a design for their shop re-launch. The designs were very polarizing: The one preferred by young, savvy shoppers was strongly rejected by other users, whereas the design which was least often rejected was not rated as particularly new or different. Rather than simply declaring a “winner”, we helped the team understand which types of users each design would attract or alienate – so they could choose the design which most closely aligned with their target audience and brand image.
In closing: These three steps can help you engage the whole research team and respond to the “which concept won” question with confidence – because you’ll be able to base your recommendation on a clear understanding of the decisions at stake, what you actually heard in the field, and the context of implementation.
I recently had the pleasure of attending an ideation workshop with a team of researchers, clients and experts. Our mission was to start with what seemed like an intimidating amount of insights from recent qualitative interviews, and use principles of Behavioral Economics to identify potential nudges for consumers in our client’s category.
This reminded me of some fundamentals I’ve been using in interviews for years. But as one of the workshop attendees playfully pointed out – Behavioral Economics has finally legitimized many of those conclusions we intuitively draw ourselves, based on what we’ve learned through countless interviews we’ve conducted.
So here’s my nudge to myself:
Invest more time in exploring and understanding the environment surrounding the decision or behavior we want to learn about, and less time asking specifically about why.
Behavioral Economics teaches us that we often rely on mental models, using connections between things we know, understand or admire to minimize the effort of decision-making. These simplified, but effective stories about how the world works can make what we see in advertising or marketing seem plausible to us, or not.
Although it’s tempting, asking respondents why they do something isn’t always a fruitful line of questioning in qualitative interviews. People often don’t know why they chose one product over another, or may entertain us – and themselves – with what seems, in retrospect, like a logical explanation for the choices they made.
So when we use qualitative interviews to learn why somebody did or bought something, we should be seeking to learn more about the mental models they used to make that decision.
When we design interviews to learn more about the mental models or frameworks our respondents apply to products and brands, we can help our clients go a long way towards understanding the true motivations behind the decisions their customers make.
After 15 exciting and successful years, we paused at the end of 2016 to re-evaluate where we are and where we’re headed as a company. Most importantly, we took a close look at the alignment between our core strengths, where we deliver the most value add to our clients, and where we are actually spending our time and resources.
As a result, we are modifying our business to enable us to focus more on our areas of expertise.
Elizabeth Lamberts will dedicate her time to Qualitative Moderation. This includes consulting on study design & methodology, screeners and discussion guides, as well as analysis and reporting. She is excited to continue extending her portfolio beyond the traditional face-to-face methodologies such as focus groups and depth interviews to include moderating using digital methodologies for flexible, agile qualitative research.
This means that beginning April 2017 we will leave third-party fieldwork (recruiting, facilities) to the local experts! For clients we have supported with full service in the past, we will work closely with them to transition to trusted agencies.
Beginning April 2017, Dr. Stefan Lamberts will pursue an exciting new opportunity in his original field, computer science. He will continue to serve in an advisory role as Managing Director at Lamberts Consulting.
We thank our loyal clients for your support and trust over the past 15 years, and look forward to new opportunities and experiences in the next 15 years!