In the first five months of 2009, I’ve guided four teams through making their mental models. We have combed transcripts, labeled quotes, and grouped the labels from the bottom up to create the structure of the mental models. We have made 11 different models together. What has come up again and again is the difficulty of choosing what to include in the model and what to exclude. I see two prominent recurring mistakes. Avoiding them will greatly reduce the complexity of piecing together the data.
First, someone will break one thing an interview participant says into granular steps. We all have a natural tendency to treat details as king in data analysis. Yes, when building mental models we need to read between the lines of the conversation and pull out implied behaviors. But often researchers pull out too much, including off-the-cuff statements of fact the participant makes to support the central idea he or she is talking about. Here’s an example. I found the following five labels combed out of one transcript by a team member:
- Participate in Local User Groups in My City – “And a lot of times, we just use these things for ideas … down here in Atlanta, there’s an Atlanta.net users group”
- Hear How Industry Professional Implemented a New Technology – “And typically what happens is someone will come in, usually like a industry professional, and they give a talk on a technology they use, and how they implement it.”
- See What Else Is Out There – “it kind of lets us see what else is out there”
- See Demos & Code Walk-Thrus of New Technologies – “They do demonstrations, and actually low level … actually doing code walk-throughs”
- See Code Debugging Examples – “and debugging”
The team member is trying too hard to pull implied behaviors from the off-the-cuff statements. The interview participant was trying to say that he learns a lot from seeing the code examples people present at the local users group. The first two and last two labels above are statements of fact. “There’s a users group here in Atlanta where industry professionals give us talks. I see demos, code walk-throughs and debugging examples.” Statements of fact are not behaviors. The participant is not doing anything here except passively attending. You want to look past this, deeper into what the participant is conveying. You want to record the root behavior in the mental model. The five labels above can be replaced with one label:
- Get Ideas from Professional’s Code at Users Group – “we just use these things for ideas … Atlanta.net users group … someone will come in, usually like a industry professional, and they give a talk on a technology they use, and how they implement it … lets us see what else is out there … They do demonstrations … code walk-throughs … debugging”
Note that the actual label could vary and still convey the essential meaning: “Attend Users Group to Get Ideas from Professional’s Code” works just as well.
Second, I frequently see a single concept repeated as separate labels and quotes. The participant has mentioned something a few times in the course of the conversation, and the researcher has not taken the time to pull the quotes together into one label. Instead, when grouping the labels, the team discovers a task already in place for this participant about this concept, and they must rip apart the new entry, move the quote into the original concept, and delete the redundant entry. Here’s an example:
- Find Out Progress of Everyone In Scrum Meetings – “We had scrum meetings every day … kind of finding out the progress of everyone”
- Collaborate at Scrum Meetings – “one of our biggest points for collaboration is we have these – we have daily scrum meetings”
These two redundant entries can be combined into one:
- Find Out Progress of Everyone In Scrum Meetings – “We had scrum meetings every day … kind of finding out the progress of everyone … one of our biggest points for collaboration is we have these – we have daily scrum meetings”
I should admit that it is difficult to avoid this second trap when you are combing one transcript over the course of a couple of days. It’s hard to remember that you already captured an idea. If you have the slightest inkling that you have recorded the concept before, do a search right then to see if it already exists in the set of combed labels you are working on. It is worth the time to do it in the moment, rather than grapple with it later when you are grouping.
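If you track your combed rows digitally, this in-the-moment search can be partly automated. Here is a minimal Python sketch; the row structure, function names, and last-word keyword heuristic are all my own illustration, not part of the method, and the real judgment call still belongs to the researcher:

```python
# Minimal sketch: keep combed rows in memory and search before adding,
# so a repeated concept gets merged into the existing entry instead of
# creating a redundant one. Names and heuristics here are illustrative.

def find_existing(rows, keyword):
    """Return rows whose label or quote already mentions the concept."""
    kw = keyword.lower()
    return [r for r in rows if kw in r["label"].lower() or kw in r["quote"].lower()]

def add_or_merge(rows, label, quote):
    # Crude heuristic: search on the label's last word (often the noun).
    matches = find_existing(rows, label.split()[-1])
    if matches:
        # Merge the new quote into the first matching entry.
        matches[0]["quote"] += " … " + quote
    else:
        rows.append({"label": label, "quote": quote})
    return rows

rows = []
add_or_merge(rows, "Find Out Progress of Everyone In Scrum Meetings",
             "We had scrum meetings every day … kind of finding out the progress of everyone")
add_or_merge(rows, "Collaborate at Scrum Meetings",
             "one of our biggest points for collaboration is we have these – we have daily scrum meetings")
# The second call finds "meetings" already present and merges the quote,
# leaving one entry instead of two redundant ones.
```

A keyword match like this only flags candidates; you still decide whether the two quotes really express one concept or a meaningful nuance.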
It’s also possible that you see a nuance–a slight difference–between ideas and you want to label them separately. If you can clearly and easily label the two ideas distinctly, then it is likely the nuance is significant enough to record separately. More often it is the case that you struggle to label them, and after a few minutes of consideration, you decide that the nuance is not important enough to capture in the model after all.
If your team can avoid these two classic mistakes while combing the transcripts and making labels, then grouping those labels will go so much faster. Less brainpower will be needed to understand how the labels relate to each other within the context of one participant, so that more brainpower can be put towards comparing the labels to other participants instead.
At nearly every presentation and workshop I give, someone comes up to me and asks, “Have you tried (insert tool name here) with your method? It’s really cool.” I shake my head no and politely ask them what they think the tool would do. Every explanation boils down to this: it would automate the analysis of all those interviews, make affinity groups, and do away with all that manual work. I ask them to email me the tool name, then I file it away untouched. I always thought this was just a personal quirk of mine–that I want to do the analysis myself. I don’t want to use a tool to comb through transcripts for me because I’m the one who is reading between the lines and guessing at implied meanings.
I finally realize this is not a personal quirk. Manually analyzing the transcripts is a requirement.
Why? Why should we do things manually when there are tools to automate them? In a nutshell: none of the tools is good enough. Yesterday I watched a Marvin Minsky lecture on how artificial intelligence is not making any breakthroughs. A chimpanzee can still recognize things better than a machine can. You and I can recognize intentions, motivations, implied philosophies, and emotions better than a piece of software can.
But there is a more compelling argument than this. This argument is predicated on an analogy to food. Nutritionists have been telling us that we should eat closer to the source–eat foods that are not processed, that are closer to their natural state. Nutrition books point out that food manufacturers enrich their products with nutrients because those nutrients were removed during processing. Enrichment only adds a small percentage back. So my argument goes that by analyzing the interviews yourself, you are going closer to the source. (This argument also works for deciding who to interview: the business stakeholders at the organization, the people who currently serve a customer, or the customer herself.) It is far more nutritious for your brain to get enmeshed in the complex strands of meaning in a conversation than to look at a pre-sorted list of software-processed groups. It’s like going out to the garden and seeing how the plants grow, learning how they use the soil, watching pollination, understanding how disease spreads, harvesting the fruits, and then using them in your kitchen, as opposed to sitting on the couch eating potato chips. Those chips bear little resemblance to an actual potato.
The point of mental models is to understand your customers deeply–so deeply you could live their life, walk in their shoes, and make decisions exactly the way they would. If you use processed data to create your understanding of customers, you only create empathy with the tool or the tool’s designer, not your customers.
I just finished my Australian Road Show with the Web Directions folks. It was really illuminating doing the workshop three times in a row. I conduct five classroom exercises in each day-long workshop, and one thing really stood out for me this week. When workshop attendees tried their hand at grouping data (represented by a deck of cards with verb + noun labels on them) by affinity, things sometimes fell apart in two different ways.
The first problem arose because the data I gave them to group had to do with training for marathons, which, to my chagrin, is not widespread in Australia. By the last workshop, I managed to head off misinterpretations of the data by explaining what it all meant up front. However, the folks in the Canberra workshop really struggled with marathoning concepts. (Sorry about that!)
The second, more striking, observation I made is that workshop attendees often tried to group too high by making a few labels like “prepare,” “monitor,” and “track,” then putting all the cards into these three or four big broad categories. Each category would have 20 cards or so. Then the groups would try to investigate each of those piles of cards for more detailed subsets and encounter difficulty.
(One of the groups sorting marathon cards at the Melbourne workshop.)
That’s a top down approach. Even though I asked them to work from the bottom up, to refrain from putting their own model of the data together at the top level, to avoid making boxes and sorting stuff into them, they did this. I don’t think the groups realized they were working from the top down. They were thinking it was more of a way to break down the large amount of data into more manageable piles. I guess it is a natural tendency for many of us. In our own process, if we have a lot of data, we want to break it down into a few sets and attack each set separately. It reduces cognitive overhead, if you will, and makes us feel less overwhelmed.
Truly the best way of grouping data into subsets from the bottom up is to randomly select one piece of data to begin with and compare it to all the other data to see what is like it. Ask yourself, “What does this person intend by saying this?” What’s behind a label like “Stop to Remove Rock from My Shoe” or “Enjoy the Fall Colors?” For the first one, the intent is to make my running as comfortable as possible, since it is quintessentially an uncomfortable process. The second label represents the intention to get a spiritual boost from the run, perhaps associated with distracting myself from the pain. By looking at the intent behind a label, workshop attendees found it much easier to find similarities between cards spread out on the table before them.
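The procedure above can be written down as a simple loop. In this Python sketch, `share_intent` stands in for the human judgment (“What does this person intend by saying this?”); here it is a toy comparison on a hand-assigned `intent` field, purely for illustration:

```python
# Bottom-up grouping sketch: start from one randomly chosen card,
# compare it to every remaining card, and pull out the ones that share
# an intent. No top-level categories are imposed in advance.
import random

def share_intent(card_a, card_b):
    # Placeholder for the researcher's judgment call about intent.
    return card_a["intent"] == card_b["intent"]

def group_bottom_up(cards):
    groups = []
    remaining = list(cards)
    while remaining:
        seed = remaining.pop(random.randrange(len(remaining)))
        group = [seed] + [c for c in remaining if share_intent(seed, c)]
        remaining = [c for c in remaining if c not in group]
        groups.append(group)
    return groups

cards = [
    {"label": "Stop to Remove Rock from My Shoe", "intent": "comfort"},
    {"label": "Enjoy the Fall Colors", "intent": "spiritual boost"},
    {"label": "Adjust My Laces Mid-Run", "intent": "comfort"},
]
groups = group_bottom_up(cards)
# Cards sharing an intent end up in the same group, whichever card
# happens to be drawn first.
```

The point of the sketch is the shape of the work, one card compared against all the others, rather than sorting cards into pre-made boxes.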
I figured these observations were important enough to share with the general group. We’re all data analysts of one stripe or another. It helps to remind ourselves that the data needs to be assessed one droplet at a time, rather than as a set.
Apparently Mental Models is the book to read right now. Two UX Book Clubs are covering it for their upcoming meetings.
Nathan Shedroff’s book is out, and his message will not fall on deaf ears. One of the key points he makes is that for the past decade or two, design has been “about appearance, or margins, or offerings and market segments, and not about real people–their needs, abilities, desires, emotions, and so on.” Notice something that Rosenfeld Media’s books have in common?
The fact that we are all emphasizing the importance of caring about people–respecting their own thought processes–won’t go unnoticed. It may take a bit of persuasion at the upper echelons of many organizations, but we are good messengers, all of us. Eventually organizations will stop “creating consumers” and start supporting what people need and love to get done. Mental models are one tool to help organizations do that.
If we base what we produce and what we serve people on their inherent humanity, it will lead to a better world. The direction Nathan would have us go is toward “design that is about systems solutions, intent, appropriate and knowledgeable integration of people, planet, and profit … design that can lead to healthy, sustainable solutions.” An excellent point!
Erich Joachimsthaler, author of the book Hidden in Plain Sight: How to Find and Execute Your Company’s Next Big Growth Strategy, says this about mental models:
“The book talks much about the design process, but what I find the most important lessons is how to develop a strategy for a firm, from the point of view of the behaviors of the people the company serves. This has the potential to add entirely new thoughts about strategy formulation.”
It’s thrilling to see mental models recognized as a strategic tool. Read more of Erich’s book review and his sister approach of episodic reconstruction. I also talk about tactics and strategies in my Future Practice seminar.
First the small print: I use the archaic term “task” to mean any behavior or motivator that a person mentions. “Task” is limited in definition, but it’s simpler to say than any other combination of words that I actually mean, like “Behavior, Belief, and Emotion.” Just wanted to point this out …
During combing, I always use two levels of task: Atomic Task and Task. In spreadsheet format, this means I have two task columns. The reason for the two levels is that when multiple voices blend together to say basically the same thing, I can keep all the quotes from the multiple voices but have only one Task box in the Tower. The reason I do this voice blending into one task is to avoid having too many really similar tasks in one tower. I want to make each tower as clear, and as concise (small), as possible.
Each task box in a tower might represent one voice, two voices, or multiple voices. Each voice has her/his quote attached in the spreadsheet. No one wants to lose those quotes. When I urge people to consolidate similar tasks, they ask me all the time, “But which quote should I delete?” I don’t want you to delete any quotes. Keep them, and group them into a blended-voice task with a label in the Task column.
Simply put, you want to begin combing transcripts by copying quotes out, marking them with a participant’s ID number, and then labeling the quote with a verb + noun in the Atomic Task column. After you have all the quotes for all the participants copied and labeled, then you begin comparing one participant’s labels to another. Start grouping Atomic Tasks together where they are similar, and label this group in the Task column, leaving the Atomic Task and the Quote columns alone.
Make sure each voice in a Task group is distinct. If it’s a set of three Atomic Tasks, all from the same voice, then just combine all those quotes into one cell. The participant was just repeating himself a lot.
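To make the two-column idea concrete, here is a small Python sketch of the combing structure. The field names and the second participant’s quote are my own illustrative inventions; the point is that blending assigns one Task label across similar Atomic Tasks while every voice keeps its own quote:

```python
# Illustrative sketch of the two-level combing structure: each row keeps
# its quote, participant ID, and Atomic Task label; similar Atomic Tasks
# are then blended under one shared Task label.
from collections import defaultdict

rows = [
    {"id": "P3", "quote": "we just use these things for ideas",
     "atomic_task": "Get Ideas from Professional's Code at Users Group"},
    {"id": "P7", "quote": "I pick up tricks from talks at the meetup",  # invented example quote
     "atomic_task": "Pick Up Tricks from Meetup Talks"},
]

# Blending step: assign one Task label to similar Atomic Tasks.
# In practice this is a human judgment; here it is hard-coded.
task_for = {
    "Get Ideas from Professional's Code at Users Group": "Learn from Other Developers' Code",
    "Pick Up Tricks from Meetup Talks": "Learn from Other Developers' Code",
}

towers = defaultdict(list)
for row in rows:
    row["task"] = task_for[row["atomic_task"]]
    towers[row["task"]].append(row)

# One Task box in the tower, two voices, both quotes preserved.
for task, voices in towers.items():
    print(task, "--", len(voices), "voices")
```

Nothing is deleted in this pass: the Atomic Task and Quote columns stay intact, and only the Task column is added on top.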
The words in the labels will make or break the mental model. Here is some advice to bear in mind.
First, please start each label with a clear, present tense verb. I often see things like “worried” and “manager” and “needed” as the first word in mental model labels. Ix-nay. Verbs have power. Use them.
Second, always use the personal pronoun “I” in your labels. Don’t use “she” or “he” because that distances the reader.
Third, avoid compound labels (coming from compound quotes). When reviewing people’s work, I frequently pull apart a compound quote and make it into two or three separate rows in the spreadsheet (or separate stickies on the wall), each with a separate label expressing a separate thing. Example: “Communicate with friends about news, vacation, social events” becomes three different tasks: “Share latest industry news with friends,” “Tell friends my vacation plans,” and “Chat about upcoming social events with friends.” See how the verb is more concise and powerful than the more academic-sounding “communicate”?
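If your labels live in a spreadsheet export, you can run a rough automated check for these three rules. This Python sketch encodes them as heuristics; the word lists and the `-ed` test are crude approximations of a human review, not a grammar checker:

```python
# Heuristic lint for mental model labels. The checks approximate:
# 1) start with a clear, present-tense verb; 2) avoid distancing
# pronouns like "she"/"he"; 3) avoid compound labels.
# All word lists below are illustrative guesses, not a complete list.
BAD_FIRST_WORDS = {"worried", "needed", "manager"}  # non-verbs seen in the wild
DISTANCING_PRONOUNS = {"she", "he", "her", "his", "him"}

def lint_label(label):
    problems = []
    words = label.lower().split()
    if not words:
        return ["empty label"]
    if words[0] in BAD_FIRST_WORDS or words[0].endswith("ed"):
        problems.append("does not start with a clear, present-tense verb")
    if DISTANCING_PRONOUNS & set(words):
        problems.append("uses a distancing pronoun instead of 'I'")
    if "," in label:
        problems.append("may be a compound label; consider splitting it")
    return problems

print(lint_label("Worried about what my manager needed"))
print(lint_label("Share latest industry news with friends"))  # no problems
```

Treat any flag as a prompt to re-read the label, not as a verdict; only you can tell a true compound from a single rich idea.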
With these three points in mind, go through your mental model right now and make corrections. I guarantee the result will be stronger.
On the back-channel at my Future Practice Seminar last December, the participants asked (and answered) several questions. I recently went through all the chatter and collected a set of questions I think that people would like to see. Check out the Q&A.