Mental Models Blog

Aligning Design Strategy with Human Behavior

Posts written by Indi Young

  • Examples of Tough Combing Labels

    Hey, you wanted to test your combing/labeling skills, right? You wanted to hone your ability to grab the most descriptive verb possible and to pull out the implications of what the person is really trying to say. Here is a set of examples, each with a little discussion of why I suggest the label I do. The original labels come from people I am mentoring through the combing process.

    • original label: Compare Other (Higher Paying but Fewer Benefits) Jobs w/Mine – “I put a pencil to it. I mean, I know exactly what my benefits are, and I’ve looked at other jobs … if I was to give up my benefits and retirement … that I get now, because the first five years you went somewhere, you’re not vested, so you’re basically giving that up, and that’s ten percent of my salary, so you put a pencil to it, and then a lot of people don’t pay all your health insurance …”
    • my change: Since his emphasis is “putting a pencil to it,” I assumed that meant that he calculated the value of his benefits, and I changed the label to start with the verb “calculate.”
    • question: He calculates the value of his benefits because he is comparing his low-pay/good-benefits job with a high-pay/poor-benefits one. Should there be 2 tasks: ‘compare my job…’ w/a subtask of ‘calculate value…’? or one: ‘calculate value of benefits to compare my low-pay job w/a high-pay one’?
    • my answer: Good point. I actually used the label “Calculate How Much My Benefits Add to My Income Here” so that it emphasizes that he is calculating to come to a decision point. In conversation with him, the gist of this bit was that he was deciding if the benefits were still high enough to compensate for going out and looking for another job. Instead, I can change this label to “Decide If Benefits Here Add Up to Compensate Low Wage.” There are several other tasks around “Consider Leaving Low Paying Job if Good Benefits Cut” and “Value Benefits Equally w/Career Interest,” so I don’t think we need to break this one into sub-tasks. The answer about sub-tasks always lies in the context of what other tasks you’ve pulled, and whether they basically cover what the guy is intending to communicate to us.
    • original label: Feel Stripping Out Benefits Hurts Company in Long Run – “Now, that’s been changed, and a lot of new people coming in … aren’t gonna get that, so… I actually think they’ve hurt their selves. A lot of benefits are being stripped … as new people come in, and I think … in the long run it’s gonna hurt them.”
    • my change: This is his opinion. We can skip opinions.
    • question: What if his philosophy is: “good benefits are needed to retain talented people?” He is only staying at this job for the excellent benefits.
    • my answer: Since he is not in a position to hire people, and he is not talking about hiring policies or changing the benefits, it’s just him griping. He hasn’t even made a move to recommend changing the benefits policy yet. If he had formally recommended changes instead of just complaining, then we could use this. But his beliefs about good benefits to retain talent were not put into practice at all. He was complaining because we gave him an opportunity to complain. He might not have anyone who solicits complaints in his normal day-to-day life, so it may not be his normal behavior and should not be captured in the mental model.
    • original label: Think Lower Quality Employees Result From Stingier Benefits – “… and the new people coming in … you get less quality people, I think.”
    • my change: This is the same opinion, basically. I was trying really hard to steer him away from “like this/hate this” discussions, but he definitely started out thinking this was a conversation about what his opinions were. Usually these things are the fault of the interviewer, and you have to read it with a grain of salt and say to yourself, “Wait just a minute here. This doesn’t explain anything. It starts to get at a philosophy but falls way short.” So skip this one. Delete it.
    • original label: Serve on Committee to Write 5 Yr Tech Plan – “I’m on the committee to write the technology plan, you know, the five-year plan we’ve got to get rewritten.”
    • my change: If he was talking about writing a 5-year plan, then the verb “write” is the key here.
    • original label: Work w/Internal & External Groups – “I also had to work with not only our people here, but I went to … meet with the contractors and do follow-ups and then meet with contractors individually.”
    • my change: “Work” is a pretty non-descriptive verb. I changed this to “Follow Up with External Contractors.”
    • original label: Keep in Touch w/Local & State Employees via Meetings, Calls, & Emails to Determine Needs – “The state, the meetings, yes, there were meetings with the local employees to see what they needed, because they have a supervisor on site, and then the rest of it was basically phone calls, emails with the people from the state with their organization.”
    • my change: I clarified the label to “Solicit Network Requirements from State Employees.”
    • question: His behavior is “meet w/people” but the underlying reason is to “determine requirements.” Hence your label, right?
    • my answer: Yeah. “Meet” is a really vague verb, as is “Keep In Touch.” When I see those, it’s an indication to dig deeper into the reason for the meetings and the keeping in touch. “What is going on here? Oh, it’s to solicit the requirements from people, over time, via a bunch of channels, but my main goal is to solicit requirements.”
    • original label: Consider Future Needs when Making Recommendations – “a cheap one that you could get with … 50 licenses, or you could spend another 100 bucks and get an unlimited number of licenses … and she’s like, ‘Well, we don’t need that. We’ll never have more than 25,’ … I’m like, ‘You don’t know this.’”
    • my change: The label is a little too high level. I changed it to “Argue That $100 Is Cheap Compared to Risk of Needing More Than 25 Licenses.”
    • question: Does the specificity of “$100” complicate the grouping?
    • my answer: Nope. I will ignore the specifics as I group. At this point when you are labeling quotes, I want you to be specific so that sometimes I can just group the label without even reading the quote. I want the label to cut to the chase so I can use it quickly, since grouping takes so much mental capacity. I will probably put this with other tasks that are specific about other things, but general about persuading someone about spending or future costs or upgrade decisions.
    • original label: Get Called for Help b/c My Extensive Working Knowledge of the System – “There’s probably nobody else here that really understands it from A to Z… if something goes wrong, they call me, because … I have such a working knowledge of every intricate bit and piece … I normally do have the answer.”
    • my change: “Get Called” is a passive verb–he’s not doing anything. I labeled it “Answer Questions About The Systems I Know So Well.”
    • original label: Express Frustration w/Inability to Reach Vendor – “I brought up the fact … that I couldn’t get hold of them, and the guy that’s in charge … he’s on there, too, and he goes, ‘Well, you can always call the center. They always know how to get hold of the guy.’ … And so I told him … ‘I couldn’t even get through your center … even they weren’t answering,’ and he, you could tell, is silent, you know.”
    • my change: This label is in the middle. Either it’s an emotion (Feel Frustrated) or it’s a behavior (Express My Frustration to Vendor). I think it’s the latter. And I would label it more clearly as “Point Out Vendor Is Not Always Available, Despite Claims.”

    Forecast Confusion

    So it has been raining in the San Francisco Bay Area. You may have heard. When a brief ray of sun broke out this afternoon, my associate Eric Fain wrote, “Weather channel says 90% of rain ALL DAY. I’m so happy it’s sunny.” I interpreted that to mean he expected it to rain all day, and so this sunshine had beaten the weatherman’s* odds. The weatherman had said 90%, right? Hang on — 90% what? Is the weatherman 90% certain it will rain today? Is he saying 90% of the day will be filled with rain? Is he saying it might only rain for two minutes, but he’s 90% sure some sort of precipitation will happen? Or is he saying something crazy like “We will be receiving 90% of the storm front’s precipitation today”? I think the first explanation is what he means … and that’s when I realized I had been reading it as 90% of the day will be filled with rain. Yeah, those percentages are confusing.

    Seven day forecast icons from NOAA point forecast.

    Okay, maybe this needs some design love. I figured it was time for some conversations with people. Why do people look at weather forecasts? What are they figuring out inside their heads? A quick canvass resulted in these intentions:

    • “Whether or not I should bring a coat or hat for rain …”
    • “I want to figure out what to wear, whether to bring a jacket, sweater, raincoat, or umbrella.”
    • “… to see if i need to bring an umbrella.”
    • “I try to plan for what to wear and what accessories I need. (umbrella, etc.)”
    • “It’s usually for my commute purposes.”
    • “If it’s supposed to snow, I’ll leave for work earlier and plan to leave for home earlier.”
    • “… deciding when to head to the gym–try to beat the rain and ensuing traffic.”
    • “I want to know if there will be a dry spell when I can go for a run today.”
    • “I want to decide whether I should bike to work or not today.”
    • “I look at the current temperature to figure out how many layers to wear when going for a run or a bike ride.”
    • “If an upcoming activity will be rained upon, I want to see if I should change my plans accordingly.”
    • “I might decide if I really want to go on that hike in the rain.”
    • “I’m checking to see if it’s going to be dry so I can go on a ride.”
    • “I want to know if I should plan to prune the roses today, or if I should do one of my inside projects instead.”
    • “It’s to guess at if it’s safe to go grocery shopping, or will that be a miserable experience. I look at the hourly forecast.”
    • “I want to see how hot it is expected to get the next day, so I can plan when to rack off wine.”
    • “If it’s going to rain this weekend at the Pinnacles, we won’t go climbing.”
    • “If I’m going on vacation, I’ll look up the weather there to know how to pack.”
    • “When I am getting on a plane, what to bring …”
    • “I need to figure out what to pack for a business trip.”
    • “Our basement flooded. Did I tell you how close we are to the canal?”
    • “I’m interested in how much rain has fallen in the past 24 hours.”
    • “If we are going paragliding, we look for which way the wind would blow.”
    • “… to see how close the next storm is on the radar map.”
    • “I’m a weather junkie. I want to see if there’s something exciting to look forward to.”
    • “I have weather on my iPhone in four parts of the world just to see what my friends are experiencing, for fun, to contrast with the weather in California.”

    You can see that the audience segment I contacted skews toward people who commute and go outside for exercise. If I asked people who deliver packages or paramedics or strawberry farmers, I would get some different responses in addition to those above. So let’s agree that we’re designing only for people who commute and go outside for exercise, not for the others at the moment.

    From this quick set of conversations, I can see that people are looking to see whether they will be affected by the weather during certain activities, and they may adjust the time of those activities or the things they wear to make the activity more comfortable. And they want to know how much more rain is coming, adding to the mess in their basement.

    So an icon like on the NOAA site above saying “This Afternoon 100% Rain” understandably makes people believe that they will get wet during their commute, trip to the gym, or run/hike/ride. It’s 100% certain. Imagine their consternation when they get outside and all the rain gear is unnecessary because it has finished raining. No wonder the weatherman gets a lot of complaints.

    There are a few things about the icons above that I’m not sure about. There was no easily-identifiable legend to explain things, and I didn’t contact NOAA to find out.

    • Why are the percentages part of the first read? Why are they so catchy and important-looking?
    • What is the difference between rain and showers? (Today it said rain, but it came in little 20-minute sets, and it was dry in between. I would have defined that as a shower.)
    • What does this scale of likelihood mean, exactly: Likely, Chance, Slight Chance?
    • If the percentage is the certainty of the forecast, how does it relate to the likelihood? If it is the same thing, why is it represented twice?

    It looks like the weatherman is trying to represent the data he has: how much rain we might get, and how sure he is we’ll get it. (I’m not actually sure about the former; maybe it’s just the latter, in general.) Juxtaposed with that, people I interviewed want to know how much of the day will be wet, and when those periods of wetness will occur. We all realize that the weatherman probably can’t tell us that, exactly, so we forgive him and accept less specific predictions. So, if the data we need does not exist, then let us at least clean up those confusing percentages. What if the forecast icons had a little rain gauge next to each icon that predicted how much water is expected? We could then see a little chart over the next few days of what to expect, so we can put off pruning or hiking until a more suitable day. What if we make the word placement consistent, so the type of weather expected (“Rain,” “Showers”) is always the first line under the icon, and the likelihood (“Chance,” “Slight Chance”) is the second line under the icon? It would make the set of icons easier to scan.

    Cleaned up forecast icons for the NOAA point forecast.

    Note that the rain gauge allows the weatherman to show how the intensity of precipitation may change, even though the likelihood decreases or stays the same. Notice, too, that I moved the date over to the right, just above the first icon, so that we can associate them easily. (I have to confess that a couple of years ago NOAA changed the URL for my point forecast, so my bookmark was old. For a few weeks I kept staring at the same nine icons of fog wondering when the weather would ever change. I didn’t see the “Last Update” or “Forecast Valid” lines in among all the text above the heading.)

    In the interest of clean space, and taking advantage of our super-associative brains, I thought one more design change might work. We already know that if our loved one writes us a note and double-underlines the word “important,” we had better pay attention to what the note says. We can telegraph the importance of the weather type in the same way, indicating likelihood with a double underline, or “slight chance” with a couple of question marks. I’m not sure I understand the scale the weatherman is using for likelihood, so my attempt to put it into a four-part scale may be incorrect.

    Cleaned up forecast icons for the NOAA point forecast.

    This consolidation leaves us room to layer in something else that may be important and happening at the same time, like wind or thunderstorms or tornadoes, instead of shoving it further down the page.

    I admit that these redesigns are not pretty, nor do they address one of the concerns people I talked to had, “Which hours of the day will be rainy?” I did address the concerns, “How much more rain is coming?” and “Should I put this off until a better day?” If “Showers” and “Rain” get more clearly differentiated by the weatherman, I can also address “Will there be a break in the rain for me to go for a run?” Otherwise, I tried to work within my imagined data limitations, and I tried to offer a very slight change to the format which may have a chance of getting through bureaucracy. At least it gets rid of the confusing percentages.

    * “The weatherman” is my euphemism for all the people who team up with satellites and reams of data to bring us these weather predictions. I am using the word tongue-in-cheek. There was a time in my early 20s when I desperately wanted to work for the NOAA Earth System Research Lab in Boulder, Colorado, so I got an invitation to see what it was like there. Impressive! They have my respect.

    Support Intentions, Not Existing Workflows

    This week I was chatting with someone who works at an organization that does not yet recognize the value of generative research before defining products. She said to me, with exasperation in her voice, “The product managers here still go around collecting needs from our customers and giving us lists of features to implement.” She had some money left over from a budget (that doesn’t happen often!) and wanted to spend it on a small research project that would get to the root of what people were trying to do–people who were not yet customers. Her dream is to be able to show the product managers and executives at her company results from the generative research illuminating several new, previously uncharted activities that her company can support.

    Her situation got me thinking about how most organizations go about product design, and how short-sighted their method is. Usually the focus is on existing practices of customers using existing products. If you’re lucky, maybe the focus is on existing practices, but of people using other companies’ products. People spend lots of effort to capture the step-by-step procedure customers use to achieve something. They produce a lot of boxes-and-arrows diagrams that portray all the nuances of the customer process. A lot of time and sweat is invested in making those boxes-and-arrows precise, which is unnecessary, in my opinion. If the precise diagram only traces the workaround someone developed to make things go how he wants them to go, then you are only perpetuating a flaw. Sure, people buy flawed services and products, but not because they want to. It’s because it’s the only choice they have.

    It’s time to go past the existing workflow and get into people’s intentions.

    Examples of steps in a workflow versus the intention behind them

    In each of these examples, you can imagine different ways to support the actor. Take the hungry person who just wants to spend time with her husband after their very busy days. Cooking and shopping are just two things she can eliminate from her schedule. She could also eliminate picking up the dry cleaning or writing checks to pay bills. But let’s say your organization is an organic produce farmers market. You have a basic philosophy of helping people eat healthy foods. You offer a weekly home-delivery service of a box of local, in-season fruit and vegetables. Along with that service, you have a selection of recipes online to help people cook vegetables they are unfamiliar with. For years you and your peers have been discussing how to convince people to stop doing the fast food thing, but the ease of getting quick, cheap (usually sweet and fatty and produced somewhere far, far away–but I digress) food seems like too much to overcome. Examine her underlying motive: spend more time with her husband. Let’s say that you’ve already captured the market that happily cooks and have convinced their spouses to cook together. That leaves the couple that doesn’t like cooking. What if you get a professional kitchen license and create some simple, wholesome, organic “TV dinners,” packaged in recyclable paper take-out boxes, that you offer for weekly delivery? True, this is a business risk and probably costs almost as much to produce as you could charge for it, but it is worth exploring since you already have the delivery infrastructure in place. If the hungry person is motivated more by quality time than by cost, then you might have a great idea here.

    When you spend time with people who might become someone you produce a service or a product for, concentrate on finding these underlying intentions. Deliberately jump past the details of how they execute something currently and spend time instead asking them what’s behind this step. What are they trying to accomplish besides the step itself? Frequently, people haven’t really thought past the steps, and your conversation turns into more of a psychotherapy session, helping the person work through the underlying issues and describe them for you. When this happens, you know you’re on the right track. With the results of several conversations like this, you can guide your organization into areas you hadn’t previously considered or been consciously aware of. This direction leads to services and products that support what a person really intends to do and makes their life smoother. And that is a very attractive proposition to most of us.

    “Aloof” Wasn’t What You Meant

    The operational difference between an organization with happy customers and an organization with disgruntled customers is easy to define. The latter understands customers only at their highest level. They reach out to see how customers interact only with the options they already have in place. These organizations set in place processes and business rules that keep their employees from wasting time or offering the wrong service. Subsequently, employees only feel confident about these line items. Any other situation gets met with the phrase, “That’s not part of my job description.”

    Sound familiar? Check out this example from an after-hours customer service chat session. The customer, a father who works full time and, like many parents, must conduct his business after his toddler is in bed, is trying to find where to enter a code on his bank website to verify a connection he set up between his account and his joint account with his wife at another bank:

    Trevor: Hello, my name is Trevor. Thank you for being a valued Bank of America customer. I hope that you are having a great day.
    Nick: thank you
    Trevor: Hi, how are you doing today?
    Nick: fine
    Trevor: You are most welcome.
    Nick: i keep getting emails about verifying an external pay from account
    Nick: i can see the deposit in my external pay from account, but see no way on the BOA site to verify it
    Nick: please advise
    Trevor: I understand you are concerned about the e-mails you are receiving.
    Nick: nope, not at all, the emails seem legit, i checked them out
    Nick: i just want to verify this account
    Trevor: Please note that the e-mail you received is from Bank of America for verification.
    Nick: yes i realize that
    Nick: but there is no instruction and the bankofamerica.com site is a labyrinth
    Nick: anyway, i cannot figure out how to verify this accont
    Nick: please help me verify this account
    Trevor: However, if you feel that the e-mail you received is suspicious, I request you to send the e-mail to abuse@bankofamerica.com.
    Nick: i am not suspiciuis
    Trevor: If the e-mail is not legitimate, we will work with the authorities to prosecute the creators of this e-mail.
    Nick: it IS NOT MY CONCERN
    Nick: i cannot be more explicit
    Trevor: I am sorry for the miscommunication.
    Trevor: Okay!
    Nick: please inform me how a pay from account is verified once i have received the verification deposit
    Trevor: Please note, I request you to click on the following link and see the demo where you need to enter the Verification code.
    Trevor: Please click on the following link and watch the demo, how it works:
    Trevor: http://infocenter.bankofamerica.com/uploads/20090731-53434501-1008673/MakeaTransfer.pdf
    Nick: excellent
    Trevor: Thank you for your co-operation and understanding.
    .
    .
    .
    Nick: it’s not what i’m trying to do
    Nick: bank of america deposited 18 cents into my account
    Nick: and i need to use that number somewhere to verify the external account
    Nick: i just cannot discern where I would do this on the bank of america site
    Trevor: I understand that.
    Nick: i am trying to set up electronic payment
    Trevor: However, I request you to contact my colleagues at 1.800.622.8731. They are available Monday through Friday between 7.00 a.m. and 10.00 p.m. Pacific Time, or Saturday and Sunday between 8.00 a.m. and 5.00 p.m. Pacific Time.
    Trevor: I assure you that they are able to assist you in this regard.
    Trevor: Is there anything else I can assist you with?
    Nick: never mind. this is more frustrating than the site

    In this example, Trevor is probably at a desktop running several chats at once. His software has pre-defined phrases that he can click into the chat window quickly, so he can appear to be responsive while also looking at other chats. His first three entries are all pre-defined phrases. The entry where he begins “I understand that you are concerned about” is also pre-defined, and then he selects “emails you are receiving” from another pre-defined list. He has only read one of Nick’s entries, the one beginning, “I keep getting emails.” Trevor’s eyes see that phrase and he connects it to the “emails you are receiving” phrase in his choice list. He is only equipped to deal with emails that are phishing for account info. He does not realize it was the wrong topic until Nick types in CAPITAL LETTERS, and then he clicks the “I am sorry for the miscommunication” phrase. So he sends Nick a link to a demo. (Oddly, his “Okay!” and his “I request you to click” phrases–which also appear in other parts of the chat dialog which I didn’t include above–read as angry and commanding in our culture. Let’s let cultural mis-alignment slide for now.) It does seem like Trevor understands that Nick needs to enter a Verification Code, but sadly the Verification Code he thinks of is for transferring funds, not establishing a conduit to an external account. Nick follows the link and reports that “Make a Transfer” is not what he’s trying to do. Trevor signals that he is watching the chat by clicking the “I understand that” phrase. He sees Nick’s key phrase “set up electronic payment” and knows that is not a service he is able to provide. So he hits the button that directs Nick off to the daytime customer service number, followed by two more pre-defined pleasantries. Trevor thinks he has helped the customer and will get good marks from his supervisor. Nick is about to kick the computer in frustration because he doesn’t have time in his schedule for the next couple of days to call Bank of America during the daytime, and he’s scared that his electronic mortgage payment won’t go in on time.

    What went wrong? Bank of America had done its homework and had policies in place for when customers call complaining about suspicious email. Trevor followed his process perfectly. He did pick up on the idea of entering a Verification code, but he didn’t have the right answer for Nick. If Trevor had been trained to spend time understanding the situation Nick was in more deeply, he would have understood the fear Nick had of a mortgage payment not going through and blemishing his brand new record as a home owner. Knowing this emotional state, Trevor could have put a higher priority on finding an answer for Nick about Verification codes for external accounts. He could have searched an internal Bank of America database for “verify external account” or something similar. He could have asked his co-workers or his supervisor. He could have even logged in to the Bank of America site as a customer himself and helped Nick search page by page for the right form to enter the Verification code. Instead, he stayed within the boundaries defined for him inside the chat application, limiting his answers to pre-defined phrases he could click. The company inadvertently shuttered his humanity and made him behave no better than a poorly-programmed machine. (Or, as I suspected at first, Trevor is really a keyword-recognition ‘bot.)
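To make the failure mode concrete, here is a toy sketch in Python of the kind of keyword-to-canned-phrase logic the chat tool seems to use. This is purely my guess at the mechanism, not Bank of America’s actual system; the keywords and phrases are lifted from the transcript above for illustration.

```python
# Toy sketch of a keyword-driven canned-response picker -- my guess at the
# kind of logic behind Trevor's chat tool, not the bank's real system.
CANNED = [
    ("emails",             "I understand you are concerned about the e-mails you are receiving."),
    ("transfer",           "Please click on the following link and watch the demo."),
    ("electronic payment", "I request you to contact my colleagues at the daytime number."),
]

def pick_response(customer_line: str) -> str:
    """Return the first canned phrase whose keyword appears in the line."""
    text = customer_line.lower()
    for keyword, phrase in CANNED:
        if keyword in text:
            return phrase
    # Nothing matched: fall back to a generic pleasantry.
    return "Is there anything else I can assist you with?"

# "emails" matches first, so Nick's actual intent -- verifying an external
# account -- never influences the reply.
print(pick_response("i keep getting emails about verifying an external pay from account"))
```

Notice that the customer’s underlying intention is invisible to this logic: whichever keyword happens to match first wins, which is exactly how Nick’s verification question got routed into the phishing-email branch.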

    What made that happen? Someone at the company was responsible for setting up the pre-defined phrases in the chat application. There was probably a fair amount of research that went into the possibilities, based on customer call center logs. The person in charge probably tore their hair out trying to make sure they covered all the branches that might occur. But call center logs only showed this person the places customers were tripping over existing glitches or obfuscating terminology. There was nothing about emotional states like fear of pending changes in a credit score based on not being able to perform a transaction. There was no way in the branching structure to show employees how to prioritize or how to keep trying to define what the problem is and find the right answer. Despite the fact that the person who set up the system studied actual call data, they were still detached from the real life of the customers who called.

    Detachment is the critical failing of many organizations. Having open-ended conversations with customers (and people who might be potential customers) is difficult and scary. It’s much easier to analyze data and run numbers. It sounds more scientific and reliable. Talking to strangers takes social skills. Deriving meaning from a set of transcripts is tricky. It’s hard to see where real decisions can be made based on this squishy, qualitative way of “grokking” things. But honestly, at this point it’s the best tool out there for getting beyond the branching, pre-defined phrases that prevent company representatives from creating a happy customer. Are the efficiency savings really that much higher than the cost of letting horror stories circulate and prevent new customers from signing up? Can organizations afford to remain separated this way from the people they serve?

    I have been working with a symphony recently, trying to learn what cultural-arts attendees want from an evening out. It has been a learning experience for me to work with an arts organization mainly because the human emotional element–the passion to achieve something–is very much present and embraced. In most other organizations I work with, humanity is given a back seat to effectiveness and calculations, if it is acknowledged at all. I have learned that a music director is not as aloof, let’s say, as his business counterparts. A music director understands where the owners/contributors want the symphony to go for the season, he knows what attracts ticket-buyers, and he also knows the skills and challenges of each and every musician working with him. He doesn’t choose what to perform based on statistics from iTunes or based on popularity on the world classical circuit. A good music director talks to audience members to understand not only reactions to performances but what a symphony-goer is trying to fulfill in an evening at the music hall. Moreover, it’s not about figuring everything out in advance, planning and training people, and just putting it on autopilot the night of the performance. A symphony conductor is constantly watching and listening–the musicians and the audience–and making minute adjustments as he goes so that the oboe player has time to catch a breath after her extra-passionate solo, or so that the guy coughing during the pause between movements has a chance to clear his throat in order to comfortably enjoy the next emotional passage. I have a lot of admiration for a director who knows the passions that make his audience, his contributors, his co-workers, and his board members tick. I wish businesses weren’t averse to knowing this kind of information. (Positive examples do exist! Zappos has received quite a bit of press about their warm-hearted practices.)

    Aloof means remote, separate, detached, and uninvolved. Antonyms for aloof are concerned, friendly, and sociable. Notice anything that looks like current technology trends?

    Make Your Mental Model Diagram in OmniOutliner

    Posted on

    Several people have told me that they suspect OmniOutliner would be a great
    tool to comb, group, and create the mental model diagram. Well, their
    suspicions are correct! Not only is it a nice way to collect the data and
    organize it into labels, quotes, participant IDs, dates, team members, etc.,
    but it’s also great for massaging all that data into hierarchical groups.
    What’s more, I’ve now seen proof that you can use the native diagramming
    feature to create the mental model itself! It is true: you are not obliged
    to use any Microsoft products, nor my homegrown Python script. The rest of this post was guided by two people I am grateful to: Voltaire Miran Santos and Jeremy Yuille.

    Thumbnail image of University of Buffalo prospective faculty or student mental model

    A mental space generated with OmniOutliner & OmniGraffle

    Here’s how to do it. (The video demonstrations by Voltaire will help clarify the written instructions. The videos are a bit fuzzy because he wants to show the whole screen to get the best view of how the tools work, so you’ll have to use your imagination a little.) First, get all the data into rows in Omni Outliner.

    1. Set up columns for your data by using the “add column” feature on the toolbar. You can create one Outliner document for each interview. Here is Voltaire’s combing file template (2Kb zip file).
    2. Copy/comb a transcript quote directly into a new row in OmniOutliner and label
      it. Also fill in associated data like participant ID, etc.
    3. Once you’ve completed combing at least a few (if not all) of the interviews, identify your best-combed document and use that as the start of your master grouping file. Select a second combed document and begin the grouping process by dragging and dropping individual rows from that document into your master file. Look at each row of data and decide what the participant’s intent was. Begin to put these rows in some sort of associative order by grouping related rows closely together. You might want to designate a section of your master file as a parking lot for rows you can’t decide about yet.
    4. As you start to promote sets of (or single) rows to become behaviors/tasks that will appear as the boxes in the towers, you can use the “group” function to cluster rows together under a Behavior/Task header row. (See Voltaire’s template for these additional columns (2Kb zip file), although they are easy to add by yourself.) Behaviors can eventually be grouped together in the same way under Tower header rows, and Towers can be clustered together under Mental Space rows. You can assign some prefixes for atomic behavior/task, behavior/task, tower, and mental space using the prefix function in the Row Numbering Inspector.
    5. When you feel the tug of memory, go through the parking lot to find matches for those rows you are remembering.
    6. Try to keep these groups pretty small. Break up quotes that seem to match two areas so you can place each quote in the proper place. Meanings will shift as you include more and more rows, building on the associations you are making. In OmniOutliner, it’s simply a matter of using the “drag-and-drop dot” at the leftmost part of each row to re-arrange things.
    7. Add another layer of hierarchy to indicate a tower where it makes sense. Add the mental space layer when you start getting real differences in the mindset of what people are doing.
    8. Lather, rinse, repeat until everything is in a four-layer hierarchy and you are satisfied with the clusters.
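The four-layer hierarchy those steps build up can also be sketched as plain data, which may help if you want to script anything against an exported outline. This is a hypothetical sketch only: the labels, quote, participant ID, and field names below are invented for illustration, not taken from any real combing file.

```python
# Sketch of the four-layer grouping hierarchy:
# mental space -> tower -> behavior/task -> atomic combed rows.
# All labels and row data here are made up for illustration.

mental_space = {
    "label": "Evaluate Whether to Stay in My Job",   # mental space
    "towers": [
        {
            "label": "Weigh Pay Against Benefits",   # tower
            "behaviors": [
                {
                    "label": "Calculate How Much My Benefits Add to My Income Here",
                    "rows": [                        # atomic rows from combing
                        {
                            "quote": "I put a pencil to it...",
                            "participant_id": "P07",
                            "date": "2009-10-12",
                            "team_member": "IY",
                        },
                    ],
                },
            ],
        },
    ],
}

def count_rows(space):
    """Count combed rows across all towers and behaviors in a mental space."""
    return sum(
        len(behavior["rows"])
        for tower in space["towers"]
        for behavior in tower["behaviors"]
    )

print(count_rows(mental_space))  # prints 1
```

Keeping each combed row as a small record like this mirrors the columns in the OmniOutliner template (quote, participant ID, date, team member), so nothing is lost as rows get promoted up the hierarchy.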

    Now for making the diagram!

    1. Get your document ready to import into OmniGraffle, which will render the diagram of the mental model. (First, create a backup of the file!) Strip out extraneous columns past the Task level. The diagram should show only three levels of the hierarchy: mental spaces, towers, and tasks/behaviors.
    2. Drag the icon for your master file and drop it onto the OmniGraffle icon in your dock. OmniGraffle will give you the option of which diagram style you’d like to use. Choose the organizational chart diagram. The software will automatically render a hierarchical chart for you, with each mental space at the top of each cluster of towers and their children.
    3. At the bottom, create each of the towers by marrying the tasks to their tower and grouping them together. You will end up formatting these a number of different times to get them just right, visually. (It will take an hour or two for a big mental model.) Delete the extraneous arrows. Go to the next part and do this again. You can decorate the mental model as you go using stencils. You can create a stencil within OmniGraffle that will have formatting options for colors and lines you might want to use, possibly to show certain audience segments.
    4. Alternately, you can download Voltaire’s stencil (23Kb Graffle file).
    5. Scaling the view to fit can help here, since most diagrams are quite wide.
    6. Alternately, you might want to turn off auto arrange and start stacking the tasks from the bottom up. You can color code the boxes according to level (behavior/task, tower, mental space) and then manually arrange them. You’ll have some modelling work to do now, once you see it all together in one view. Snap-to-grid helps here and deleting the connecting lines will also help.

    Oxymoron: “Scientific Survey”

    Posted on

    Lots of people want to have mental models tomorrow, without all the hard work that goes into collecting the data and massaging it until it takes shape. I hardly blame them. When someone asks me how this can be done, I encourage them to sketch a mental model together with their team, over the course of two hours. This model can be a representation of what they know now about people’s behaviors and feelings and philosophies, written from the perspective of these people, with pronouns like “I” and “My.” It’s when someone says, “Can’t I just send out a survey and collect the data?” that I start to shake.

    Yes, I shake.

    Surveys are a cop-out for this kind of work. Surveys don’t let you have conversations. Surveys don’t give participants the opportunity to tell you their perspective in their own words, in the order they think is important. Most of all, surveys don’t let participants tell you things you wouldn’t think to ask. All of these statements are patently true about surveys. (Tell your boss.)

    Yet for some reason folks respect surveys and the data gathered thereby. Somehow, as a population, we believe that the more people we ask a question of, the more significant the result.

    Image of a Microsoft Customer Service Quality survey

    The Webster’s definition of “survey” (the noun) is “a broad treatment of a subject” or “a poll.” A Wordnik definition more specifically states “a gathering of a sample of data or opinions considered to be representative of a whole.” Look at the words poll, opinions, and data. The last word is the one organizations tend to set more store by. It is a word that inspires confidence and reduces risk simply by sounding scientific. Yet, truly, most surveys you have encountered at your organization are about opinion. “How did you rate our service today?” Or, a more subtle example: “Which of the following brands of chocolate do you typically buy?” (What if the brand you just bought isn’t listed? What if you don’t buy chocolate? What if the majority of the population the organization supports doesn’t buy chocolate, yet the organization gets duped into thinking that 63% of people buy Ghirardelli?) The surveys about data have to do with, for example, assessing biodiversity on a particular island, not with people or what they are conjecturing. So, if the surveys we normally see have to do with opinion, preference, and conjecture, what is actually “scientific” about that? We know opinions and preferences change over time. They change based on how the question is asked. They change based on who is listening. We know that conjectures are just guesses about future reactions and behaviors. And surveys about past activities rely on imperfect or biased memories and recall. How can opinion and conjecture inspire the confidence we need for design?

    There are other things we know about surveys. They can be written purposely or by accident to reach a certain conclusion. The result can be greatly influenced by whom you ask to participate. They can grow long and so tedious that participants start marking answers randomly. No matter how deeply we think when we write the answers to a particular survey question, it is still a bounded list. Participants have to match the answer they would have said out loud to one of the things we thought to write down as a selection–if anything is even close. Participants feel compelled to select an answer even if none of the possibilities is close.

    If we peer at what we frequently use surveys for, if we really break it down to its root, it is to persuade or provide evidence to support a decision. Or its use is to get attention, as in the news media. These are valid uses of surveys; I am not judging, here. What I want to point out is that what we do, “design,” is not “persuade” or “decide” or “get attention.” We are doing something different from marketing departments or executives seeking to change the perception of a product/service on the market. “Design” is closer to “invent,” “devise,” “formulate,” “plan,” “create,” or “contrive.” We cannot use surveys to support our design activity.

    Infer Emotions from Belief-Like Statements

    Posted on

    In my previous post about how to recognize a belief in a transcript, I ended with the guideline to ask yourself:


    “Is this something the person thinks is going on, or is it
    something they know and hold as a guiding light in their life?”

    This is how you will recognize a belief. I listed examples in three boxes labeled Opinions, Guesses, and Preferences, Etc. (which included statements of fact and complaints). The one other thing to watch out for is emotion. Emotion is something we do want to capture in the mental model. As humans, according to our various cultures, we tend to veer away from stating our emotional states outright. (Half the time we aren’t even sure of our emotional state, anyway, right?) It should be no surprise that our emotions can come out as sentences that sound like beliefs at first glance.

    Examples of belief labels re-cast as emotion labels

    Do your best to recognize these beliefs as the emotions they truly are, and re-cast the labels into an emotional state. The new labels will greatly clarify what’s going on behind the statement, even if it’s an inference on your part. Your team (and your gut feeling) will help you decide if the emotion you re-cast is a valid one.
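To make the re-casting move concrete, here is a small sketch in Python. The belief-sounding labels and their emotion re-castings below are entirely invented by me for illustration; they are not taken from the diagram pictured above, and your own re-castings should come from the transcript and your team's judgment.

```python
# Invented examples of belief-sounding labels re-cast as emotion labels.
# None of these come from a real transcript; they only illustrate the move
# of naming the emotional state behind a statement that sounds like a belief.

recast = {
    "Believe the Airline Will Lose My Bag":
        "Feel Anxious the Airline Will Lose My Bag",
    "Believe My Manager Ignores My Ideas":
        "Feel Frustrated That My Manager Ignores My Ideas",
    "Believe This Purchase Was the Right Call":
        "Feel Relieved the Purchase Was the Right Call",
}

for belief_label, emotion_label in recast.items():
    print(f"{belief_label}  ->  {emotion_label}")
```

Notice that each re-cast label still starts with a verb ("Feel"), so it combs into the model the same way any other task or behavior label does.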

    February 2010 in Savannah, Georgia

    Posted on

    I’ll be at the IxDA Interaction Conference in Savannah, Georgia on 04 February 2010. My half-day workshop is in the morning. If you are properly caffeinated, this half-day workshop will be a blast because we get to practice interviewing skills and all the fun parts of crafting a mental model. Cat-lovers will be in for a nice surprise, and dog-lovers will get a grin out of it, too. Early-bird price is US$200 –sign up before 30-Nov-09 to get this price.

    Getting the Right Level of Granularity in Your Content Map

    Posted on

    What, exactly, do you include in a content map? It will contain objects that you will slot beneath towers. You don’t want to spend forever slotting objects, so what is the right level of granularity to record in the content map? I went through this exercise recently with a client; this example ought to help illustrate how to decide the right level of granularity for your own situation.

    At the outset, I tallied the many blogs and forums my client operates on their site. The client has 15 products which break up into 55 separate offerings. There was not a one-to-one relationship between the products and the blogs–there are 18 blogs.
    Additionally, there was not a one-to-one relationship between the products and the forums–there are 133 forums. This made me suspect that the blogs and forums were supporting certain cross-product behaviors in some instances. In other instances they were probably only about a particular product line. In this case, the behavior-related blogs may map to individual towers representing each behavior, but the product blogs may map to multiple towers representing all sorts of behaviors, depending on what is discussed in those blogs. The same went for the forums. As my associate Eric Fain put it, “There seems to be a huge potential to muddle things up and make slotting this content a living nightmare.” Here are the topics of the 18 blogs:

  • Adaptive Server Enterprise
  • Workspace
  • Security
  • Eclipse and Open Source
  • SOA
  • Data Integration
  • Metadata
  • Mobility
  • Clusters
  • Web Services
  • Linux
  • AJAX
  • Company News
  • Kernel
  • Enterprise Information Integration
  • Virtualization
  • Master Data Management
  • ETL

So I thought about it. Usually I would just have an object called “Blogs” and one called “Forums” and they would represent all the topics, and I would slot them under towers where people might be doing things like “keep up with latest techniques” or “read the latest news.” However, here each blog topic could represent many different behaviors. Moreover, the existing content was not “designed”; each topic had simply accumulated entries supporting several behaviors at once. “Data Integration” might have blog entries about new techniques or perhaps a product release that helps make it easier to do. This made things a mess, but that’s part of the whole content audit exercise. You want to see the extent of the mess.

So in this case I decided we ought to write down all 18 blogs annotated with the topics and reasons they are written. We slotted those, which turned out to be mostly product-related, but each entry covered some different behaviors/towers. We were going to need a good tagging system to sort them out. Then we went through all 133 newsgroups and wrote down, for each, the topics and reasons again, but then we made affinity groups of these. That gave us a smaller number than 133 to deal with. Then we slotted the affinity groups. Again, there was behavior embedded in each entry, but since no tagging system was in place yet, we merely placed them under the towers where we imagined they might be discussing behaviors. Again, tagging will be necessary to sort them out into the new system.
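The tagging system we needed can be sketched in a few lines. Everything below is hypothetical: the blog names are drawn from the topic list above, but the tags and tower labels are invented stand-ins, since the real towers came from the client's mental model.

```python
# Hypothetical sketch of slotting content objects under towers via tags.
# One object can carry several behavior tags, so it can land under several
# towers -- which is exactly the cross-behavior muddle described above.
# Tags and tower labels are invented examples, not the client's real ones.

content_objects = [
    {"name": "Data Integration blog",
     "tags": ["learn new techniques", "track product releases"]},
    {"name": "Linux forum",
     "tags": ["troubleshoot my setup"]},
    {"name": "Company News blog",
     "tags": ["track product releases"]},
]

towers = {
    "Keep Up with Latest Techniques": ["learn new techniques"],
    "Follow Product News": ["track product releases"],
    "Fix Problems Myself": ["troubleshoot my setup"],
}

def slot(objects, tower_map):
    """Place each object under every tower whose tags overlap its own."""
    slotted = {label: [] for label in tower_map}
    for obj in objects:
        for label, tower_tags in tower_map.items():
            if set(obj["tags"]) & set(tower_tags):
                slotted[label].append(obj["name"])
    return slotted

result = slot(content_objects, towers)
```

Here the "Data Integration blog" slots under two towers at once, which shows why a plain one-object-per-tower content map wasn't enough for this client and why the affinity groups needed tags.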