Adaptive Path has hired a marketing manager. I’m glad to see this happen, as hopefully it heralds a trend where the new kids (UX designers) learn to understand and collaborate with the old kids (marketing folks). At least, personally, I want to get the conversation going. Ten years ago I found the statistics coming from the marketing staff of several organizations difficult to apply when creating new designs. I eventually realized it was because those data came from preference research, which is not as useful as generative research when it comes to getting creativity to flow. Hence, I started making mental models.
The new marketing manager at Adaptive Path is Smart. (His name is Roland Smart.) Really he is. He is probably tired to death of this particular joke, but allow me to share with you some of the questions he asked me recently in conversation. He knows his stuff, and further, he knows the problems that plague researchers. I suspect several of you are his equals in this respect and will get some benefit from how he asks things and how I explain.
Roland asked about sample size, the way a “question” is asked, and conflicting preference statements:
“As a marketer I’ve conducted surveys and field research for various projects over the years and one of the issues that invariably arises relates to sample size. I’m curious how you determine the sample size for building mental models? There are standards for taking representative samples, but I sense from you that it might take too long to actually do that level of research, and that this is a lighter and more agile approach?
“Of course, there is always concern that smaller sample sizes will obfuscate a good understanding of behaviors, or simply misrepresent them. And to add another variable to the equation, I’ve read some interesting research that seems to point to the fact that people confabulate reasons for their behaviors quite often. One such study was around the Pepsi/Coke Challenge. Part of the subsequent research demonstrated that sample size (the amount of soda in each taste sample) has a significant impact on reporting (basically, the sweeter products did better with smaller sample sizes), and Pepsi was able to game the challenge to their advantage. Yet people would make clear statements about their preferences. How do you avoid such traps when conducting your interviews?”
I have a lot to say about this. To begin with, I avoid asking about preferences entirely. I avoid asking about specific things (like picking Coke or Pepsi) and instead ask about something general (like what they think about when they order a soda). For a mental model, we have the luxury of ignoring what people prefer, although they will tell us in conversation. We don’t follow up on it. This is generative research, not preference research, and generative research, I think, is not a widely practiced method.
It’s closer to a psychotherapy session than anything else, as far as I can tell. In conversation, I focus on unpacking the “why” behind things. I skip over the detailed “how” part and just keep asking why. For example, a friend says she likes to see movies by herself. I think a lot of surveys would just stop there and report that 12% of moviegoers prefer to see movies alone. But I ask why and get tons of different reasons, like “I want to give the director my attention and respect,” “I want to be sure I get the point of the film,” or “I need a break from my Blackberry and all the people who want me to do something.” When I know these underlying reasons, I can come up with better ideas, such as helping someone “get the point” of a film through other channels in addition to the movie itself. If all I had were “12% of moviegoers prefer to go alone,” I wouldn’t have a foundation to bounce ideas off of.
Because it’s kind of like psychotherapy, you get to the deep ideas, but you don’t have the time or money to spend an hour talking “on the couch” with 1000 or even 100 people. So money is one reason for the smaller sample size. The other reason is that I’m going to create a sketch, not statistics. I need to make myself confident that I’ve sketched every mental space and nearly every conceptual group (tower). I don’t care whether all the boxes are represented in the tower–I just need the gist of it. If I’ve missed a mental space, though, that’s bad news, especially if I don’t realize I’ve missed it. If I’ve missed a tower, it will get caught during the mental model storytelling and polishing with stakeholders. If I’ve missed a box in a tower, no big deal. When I think of sample size, I think more in terms of how it is done with evaluative research. In usability testing, we start seeing the same problems over and over, and new problems crop up with less and less frequency as we go through more participants. In my experience with mental models, I get repetition and few new concepts at about the 4th or 6th participant of a certain audience segment. Generally we study 4-5 segments at once.
Then Roland asked about how research methods can affect results:
“When I worked for Adina for Life I spent a significant amount of time in different kinds of retailers watching how people select beverages off the shelf. I also conducted surveys in conjunction with in-store demo campaigns, ran taste test focus groups, and did online surveying around product packaging and other messaging. I’ve always tried to be aware of how a research methodology can affect results, but I’ve never really been able to get good enough clarity on this. I understand that any approach has its pros and cons, but you mentioned in your talk that surveys are particularly difficult because they often have embedded biases. I agree that there are biases, but my experience has been that field research is even more challenging in this regard when there are interactions between the researcher and subjects. How do you avoid such influences?
“When I think of these issues I am reminded of Trinh T. Minh-ha, the documentary filmmaker and artist. My background is partially in the arts, and I learned about her work in college through a post-structuralist curriculum. Some of her work explores how the documentary film camera influences the behaviors it captures. In your talk there was a discussion of letting subjects review video of themselves and try to explain their behavior in retrospect. I wonder if this is a good approach to address the issue of confabulation and research bias?
“I’ve been completely blown away by the work of Dr. John Gottman. You may have heard of his work on marriage? He has conducted research for the last 15 years which has completely changed previous theories on marriage stability. He conducts highly effective research that involves prominent use of cameras and bio-measuring (heart rate, blood pressure, etc.). Somehow all the apparatus has not gotten in the way of a methodology that can predict with 90%+ accuracy whether couples will still be together in seven years! It’s hard to understand when the approach is going to get in the way, and when it won’t.”
For mental models, the researcher conducting the interview has to be skilled enough to elicit the kind of deep rationales we need. Thus the interaction between participant and interviewer is key. Without it, the research results stay at a much shallower level: “I like to go to movies alone.”
I’ve never personally done the review-the-videotape type of research, but I think what is happening is that it’s a high-stress situation where the person being documented does not know she’s on film, or if she does, it’s of very little importance to her focus. There’s a patient dying! She is focused on that, not the camera. There’s also the flip side: participants really want to help. They are smart monkeys and know about the perils of being influenced to say the wrong thing. It’s been amazing to me how much participants really engage in the process. They are not “being read.” They are playing an active part in the research, helping it forward. They love the attention, too.