In the first five months of 2009, I’ve guided four teams through making their mental models. We have combed transcripts, labeled quotes, and grouped the labels from the bottom up to create the structure of the mental models. We have made 11 different models together. What has come up again and again is the difficulty of choosing what to include in the model and what to exclude. Two prominent mistakes recur, and avoiding them will greatly reduce the complexity of piecing together the data.
First, someone will break one thing an interview participant says into overly granular steps. We all have a natural tendency to treat details as king in data analysis. Yes, when building mental models we need to read between the lines of the conversation and pull out implied behaviors. But often researchers pull out too much, including things the participant says that are off-the-cuff statements of fact intended to support the central idea the participant is talking about. Here’s an example. I found the following five labels combed out of one transcript by a team member:
- Participate in Local User Groups in My City – “And a lot of times, we just use these things for ideas … down here in Atlanta, there’s an Atlanta.net users group”
- Hear How Industry Professional Implemented a New Technology – “And typically what happens is someone will come in, usually like a industry professional, and they give a talk on a technology they use, and how they implement it.”
- See What Else Is Out There – “it kind of lets us see what else is out there”
- See Demos & Code Walk-Thrus of New Technologies – “They do demonstrations, and actually low level … actually doing code walk-throughs”
- See Code Debugging Examples – “and debugging”
The team member is trying too hard to pull implied behaviors from the off-the-cuff statements. The interview participant was trying to say that he learns a lot from seeing the code examples people present at the local users group. The first two and last two labels above are statements of fact. “There’s a users group here in Atlanta where industry professionals give us talks. I see demos, code walk-throughs and debugging examples.” Statements of fact are not behaviors. The participant is not doing anything here except passively attending. You want to look past this, deeper into what the participant is conveying. You want to record the root behavior in the mental model. The five labels above can be replaced with one label:
- Get Ideas from Professional’s Code at Users Group – “we just use these things for ideas … Atlanta.net users group … someone will come in, usually like a industry professional, and they give a talk on a technology they use, and how they implement it … lets us see what else is out there … They do demonstrations … code walk-throughs … debugging”
Note that the actual label could vary and still convey the essential meaning: “Attend Users Group to Get Ideas from Professional’s Code” works equally well.
Second, I frequently see a single concept repeated as separate labels and quotes. The participant has mentioned something a few times in the course of the conversation, and the researcher has not taken the time to pull the quotes together into one label. Instead, when grouping the labels, the team discovers a task already in place for this participant about this concept, and they must rip apart the entry, move the quote into the original concept, and delete the redundant entry. Here’s an example:
- Find Out Progress of Everyone In Scrum Meetings – “We had scrum meetings every day … kind of finding out the progress of everyone”
- Collaborate at Scrum Meetings – “one of our biggest points for collaboration is we have these – we have daily scrum meetings”
These two redundant entries can be combined into one:
- Find Out Progress of Everyone In Scrum Meetings – “We had scrum meetings every day … kind of finding out the progress of everyone … one of our biggest points for collaboration is we have these – we have daily scrum meetings”
I should admit that it is difficult to avoid this second trap when you are combing one transcript over the course of a couple of days. It’s hard to remember that you have already captured an idea. If you have the slightest inkling that you have recorded the concept before, do a search right then to see if it already exists in the set of combed labels you are working on. It is worth the time to do it in the moment, rather than grapple with it later when you are grouping.
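If you happen to keep your combed labels in a tool you can script, even a crude word-overlap check can flag likely duplicates before you create a new entry. This is only an illustrative sketch of that search step, not part of the method itself; the function name, stopword list, and two-word threshold are my own inventions:

```python
# Hypothetical helper: flag existing labels that may already cover a new concept.
# Assumes each combed label is a plain string like "Collaborate at Scrum Meetings".

def find_possible_duplicates(new_label, existing_labels):
    """Return existing labels sharing two or more content words with new_label."""
    # Ignore short filler words so matches reflect the core concept, not grammar.
    stopwords = {"in", "at", "of", "the", "a", "an", "to", "from", "my"}
    new_words = {w.lower() for w in new_label.split()} - stopwords
    matches = []
    for label in existing_labels:
        words = {w.lower() for w in label.split()} - stopwords
        # Two shared content words (e.g. "scrum", "meetings") suggests overlap.
        if len(new_words & words) >= 2:
            matches.append(label)
    return matches

labels = [
    "Find Out Progress of Everyone In Scrum Meetings",
    "Get Ideas from Professional's Code at Users Group",
]
# Flags the first label as a likely duplicate before we add a second entry.
print(find_possible_duplicates("Collaborate at Scrum Meetings", labels))
```

A check this simple misses rephrasings that share no words, so it supplements, rather than replaces, the in-the-moment search described above.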
It’s also possible that you see a nuance (a slight difference) between ideas and want to label them separately. If you can clearly and easily label the two ideas distinctly, then it is likely the nuance is significant enough to record separately. More often, you will struggle to label them, and after a few minutes of consideration, you will decide that the nuance is not important enough to capture in the model after all.
If your team can avoid these two classic mistakes while combing the transcripts and making labels, then grouping those labels will go so much faster. Less brainpower will be needed to understand how the labels relate to each other within the context of one participant, so that more brainpower can be put towards comparing the labels to other participants instead.