20 User Research Questions Answered by Laura Klein & Steve Krug
10/24/2016
I recently got to interview Steve Krug, author of Don’t Make Me Think and Rocket Surgery Made Easy, as the lunchtime entertainment during the User Research for Everyone conference. I always enjoy talking with Steve, which meant that we spent most of our time chatting and then ran out of time to answer all the questions submitted by the audience.
Sorry! To make up for that, Steve and I have answered some of those questions here. A few have been edited slightly for clarity and brevity.
Question: Our company ran an unmoderated usability test where we tasked users to try to find a bit of information via search. 95% of people didn’t type anything in the search box before clicking done. They just wanted the incentive. I looked through all the records of their sessions to see who seemed to put in a good faith effort and who didn’t. But that took a lot of time and effort. How can we get around this?
Laura: The first thing I thought when I read this question was, “You should be watching all the sessions.” That’s the value of usability testing–seeing people use your product. If you’re just making people use the product and then giving them a survey at the end, you won’t get very good feedback.
Most companies that help you do this sort of testing let you rate your participants. If you feel like the participants really aren’t fulfilling their end of the bargain and are just clicking through at random for the incentives, give them a 1-star rating and ask for your money back. Companies aren’t interested in recruiting bad testers for their panels.
In your case, I want to know why they didn’t search. Was it because they couldn’t find the search box? Or because they could find what they were looking for without searching? Did they understand the task? You’ll only understand their motivations for not completing the task if you actually watch the sessions you’re paying to have people run.
Steve: Well said, all of it! You’ll only understand their motivations if you watch what they do (and listen to them thinking aloud, of course). This is fundamental. It’s the reason why qualitative methods give you insight into why things happen, as opposed to quantitative methods that only tell you what happened.
Question: Roughly what % of companies do you believe actually conduct usability testing?
Laura: I have no idea, but it’s far too low. In my experience, it varies wildly by industry. Even what people count as usability testing varies. For example, a lot of consumer packaged goods companies do plenty of customer testing, but it isn’t run the way a software company might run it. A lot of companies say they usability test, but what they really mean is that they’ve run a few tests once in a while, or that they talked to a user exactly once.
I’m curious why it matters to you. The important thing is that the percentage of those who test keeps going up!
Steve: 21.5% (in keeping with the notion that 64.4% of all statistics are made up on the spot). Laura’s right: whatever the actual number is, it’s far too low. It’s why I spend a lot of my time showing people how they can reduce the amount of time and effort required, and make sure they’re really doing a usability test (i.e., watching people try to use the “product” while listening to them think aloud). The good news is that even though the percentage is still way too low, it’s much higher than it was a few years ago. And it continues to grow.
Question: What’s the simplest user research we can do?
Laura: As Steve said during the session, and in greater detail in the interview I did with him for my new book Build Better Products (shameless plug–you can buy it now), the simplest form of user research to do is usability testing on competitive products. It’s a fantastic and useful method for getting feedback on products.
Cindy Alvarez also mentioned using this method in her talk. She pointed out that it’s a great way to get people excited about usability testing and start to learn the techniques without threatening to make anybody inside the organization look bad.
Steve: And there are other research methods that are reasonably simple, like doing some user interviews. Interviews can be particularly simple if you do them remotely, since you don’t even need to physically “get out of the building.” (Sidenote: the need to at least metaphorically get out of the building is one of the many terrific things the Lean UX movement—and Laura’s books, etc.—have introduced a lot of people to.) Interviews produce a whole different kind of insight than usability tests do, but both can be very rich.
Question: I am currently the UX strategist for a global law firm, where I am tasked with designing tools for attorneys. My users are within reach, which is a great thing. But lawyers are pressured to fill every working hour with billable hours. I can’t figure out how to reach these users given their tight time constraints. Any ideas?
Laura: The easiest thing you can do is to compensate them for their time. Whenever you’re recruiting professionals, it’s important to recognize that their time is valuable and offer them something. It might not be money. It might be a donation to charity or a bottle of wine. But it has to be something that they think is valuable.
Also, try getting smaller chunks of their time. Asking for an hour is harder than asking for a 15-minute call or screenshare. Offer times at lunch and before or after work. Try to do it remotely or go to them so that you’re not asking as much. Look for other types of lawyers: some in-house counsel jobs have regular salaries, and those lawyers aren’t as likely to have to account for every second. If you’re just doing usability testing, you don’t necessarily need the exact lawyers you work with. You might be able to use other lawyers to see if they understand the interface, which gives you a much larger pool of possible participants.
Steve: All great. And what about working with management to get the time they spend testing categorized in some way that recognizes its value?
Apart from this corner case (billable hours), testing internal tools (or intranets) is often easy, because you know exactly where to find a large pool of actual users who have a pretty good incentive to help. Often the best subset is new hires, because while they’re actual users with domain knowledge, they’re not yet familiar with the internal tools.
Question: How important is it to do usability testing on a user’s own computer/device/environment, compared to bringing them into an office?
Laura: It always depends on what you want to learn. When possible, I prefer remote to forcing people to come into your office, since it means you’re not being geographically biased in your participant sample. On the other hand, seeing their environment in person can give you a lot of insight into their context of use.
Steve: And of course remote has so many other advantages, like no travel time and expanding the recruiting pool from “people who live or work near you” to “almost everybody.”
Laura: I’d say that contextual learning (home visits, etc.) is a lot more important for user research as opposed to usability testing. If you’re just interested in seeing if people can perform certain tasks given a particular interface, I’d go with remote testing. If you’re really digging into the way people work or perform tasks in their natural environment, consider whether you need to be more immersed in that environment.
Steve: Only one caveat: if you’re bringing them in, you need to make some effort to ensure that your computer setup isn’t going to get in their way. The most common case is sitting people down at a laptop with a touchpad when they’re only comfortable with a mouse. The solution, of course, is to have both available. You may also run into issues with Mac vs. PC users, but only if the test tasks are going to involve interacting directly with the operating system. If you’re testing something in a Web browser for instance, it probably won’t matter.
Question: What user research would you recommend for non-homogeneous target groups? From users with basic knowledge to super-professionals?
Laura: I’d recommend focusing on a specific subset of people and making things better for that identifiable group. Often, when you try to make improvements “for everybody” you end up making things better for nobody. Understand who you think your changes/new features/improvements are for and what behavior you expect to change, and then usability test on that subset of your users.
Steve: And I’d recommend beginning by focusing on the people who live on the “basic knowledge” end of the scale. If they find it usable, the people with more knowledge/experience will probably be able to figure it out. Remember, you’re not dumbing things down: you’re making things clear. Everyone appreciates clarity—even power users.
Laura: If you are making a global product change, make sure you understand the different segments of your users and recruit participants from each segment. You’ll most likely want to run a few more sessions than you would otherwise, to make sure you don’t have just one of any group.
Question: What are some ways you’ve successfully distributed findings from usability testing—particularly when some of the most interesting findings aren’t relevant to the topic you were researching?
Laura: It depends on who the user is of the findings. I wrote a post on deliverables that might be helpful. Whenever you create a deliverable of any sort, treat the recipient of the deliverable as a user of a product. Present the findings in an appropriate way for that person.
In general, the best way to make people familiar with the findings is to invite them to help with analysis, but I understand that that’s not always realistic.
Question: We have a really distributed team. What are some good strategies for getting everyone to watch people use our software when it means asking everyone to watch two hours of video?
Laura: Does everybody need to watch all two hours of video, or could you cut a highlights reel? I’ve had better luck going through the videos and grouping responses or tasks together to make it easier for people to consume.
Another option is to schedule an online analysis session with stakeholders where you use a digital whiteboard like Murally to share insights. You can require stakeholders to watch the research ahead of time. Just give them a deadline and a reason to watch the videos.
Steve: If you don’t have the time to create a highlights reel, you can simply crop out the time-wasting parts. And then give the viewers a tool that lets them watch what’s left at faster-than-normal speed. I never watch a usability test video at less than 1.4x normal.
Question: If you only have three slots and test three different pieces of functionality, is there enough feedback to recommend changes?
Laura: It depends, unfortunately. There have been usability tests where I knew there were enormous problems after half a session. I still ran a few more sessions after that to confirm, but it was very clear. In general, you run the sessions you need to run to start to see patterns.
Question: For an overall junior team, is it fine to start out doing internal usability testing–despite its limitations due to things like bias–to get up to speed and gain confidence?
Laura: Sure. In fact, I really like the suggestion from one of the speakers that you run a pilot on a member of your own team (like the designer or PM) at the beginning of all usability testing, regardless of how senior your moderators are.
Just keep in mind that what you’re learning from internal people will likely be extremely biased. Use it as a training tool, but I wouldn’t take the results nearly as seriously as I would results from real or potential users.
Also, ask yourself whom you’re protecting by keeping your junior folks away from users. If you feel like they’re going to damage customer relationships, try getting them some one-on-one coaching or training. Then have them practice on friendly customers. If you’re worried that they’ll get bad results, they’re no more likely to get bad results from external participants than they are from internal ones.
Steve: Yes, pilot tests. It’s too bad we didn’t get to talk about pilot tests since they’re absolutely essential. If you don’t do a quick pilot test before you bring your participants in, you’re not doing it right.
Question: How strongly do you feel about having one person per session? I’m part of a team developing an internal tool for a large company. I’ve got access to users in-person (our employees) twice a month and my boss thinks it’s better to test or interview two or three people at a time.
Laura: I have very strong feelings about this. As I mentioned during the Q&A session, I’d want to know why my boss wanted to interview two or three people at a time. Is it for convenience? Or because they think you’ll get better results? Hopefully you can talk them out of it, because neither reason holds up. When you interview multiple people at once, you end up getting a lot of echoing and learning more about the power dynamics in your company.
If it’s a tool that people will use jointly, it can make a lot of sense to structure sessions in a way that helps you understand how that will happen.
Question: The big benefit of the model that you’re describing for usability testing comes from synchronous participation (and discussion afterwards). How do you apply this model in globally distributed teams?
Laura: Well, you can get huge benefits from usability testing when it’s just the researcher/designer/PM doing the research, finding problems, and fixing them. You don’t technically need everybody involved in that process.
The more people you have involved, the easier it will be to get the changes made, especially if you’re in an environment where people tend to want to ship it, forget it, and move on to the next thing. Get creative: create things like video highlight reels that you can share widely, or snippets of information that are easy for people to digest.
Steve: It’s true: you don’t technically need everyone involved in the debrief process. But I think it’s worth bending over backwards to get as many people involved as you can (hence my “make it a spectator sport” maxim). Having more people involved does increase your chances of having durable buy-in and getting the changes made. But you’re also filtering the problem through more minds with different perspectives and valuable bits of information about where the bodies are buried. I also think that the process of observing and debriefing makes people better at their jobs, something I think of as “informing their design intelligence.”
For globally distributed teams, maybe some people watch the recorded sessions at a more convenient hour, then come together the next day for a full-group debriefing session.
Question: Are your users who you think they are?
Laura: Sometimes?
Question: Laura, what are the three most common questions you get about research?
Laura:
- How do I recruit people?
- How do I answer this particular question?
- How do I convince people that research is valuable?
Not necessarily in that order and not necessarily in those exact words. I was happy to see that those were big themes in our research before the conference, since now I can point people to a bunch of experts answering those questions.
Question: In a publishing environment, the editor is considered the authority on editorial content. What’s the best way to disabuse editors of the notion that their editorial authority also extends to the UX of that editorial content?
Laura: Ha! You could literally replace “editors” with any other professional who is an expert in their thing – lawyers, doctors, scientists, etc.
Be open to what they’re recommending. Remember, they do have more domain expertise than you do. Understand why they want what they want. It may be that there’s a third solution that takes their concerns into account better than what you’re proposing.
And never underestimate the power of testing, either usability or A/B. If you’re advocating for one solution and your stakeholder is advocating for another, you’re going to have to devise a test that will definitively show which one of you is correct. This is exactly what A/B testing is great for if you have enough users. If you don’t, then having a neutral party run usability tests on two different variations of a prototype can also be very eye-opening.
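(As a rough illustration of what “enough users” means here, not something from the interview itself: the usual way to call a winner in an A/B test is a two-proportion significance check, and with small traffic even a sizable difference won’t be conclusive. A minimal sketch, with made-up numbers:)

```python
# Minimal sketch of a two-proportion z-test for an A/B test.
# All numbers are hypothetical placeholders, purely for illustration.
from math import sqrt, erf

def ab_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))       # two-sided normal tail

# Hypothetical traffic: variant A converts 120/2400, variant B converts 150/2400.
print(ab_p_value(120, 2400, 150, 2400))  # ~0.06: a 25% lift, but not yet conclusive at 0.05
```

(With too few users, the numbers stay ambiguous like this, which is exactly when a neutral party running usability tests on the two variants is the more informative option.)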
Question: What do you think about asking other UX professionals to do usability testing of prototypes that other UX pros are working on?
Laura: I’m not a fan of asking other UX folks to do usability testing of prototypes, unless it’s a product for other designers. You’re typically much better off testing with a few of your real or potential users rather than asking someone who does this for a living.
Remember, UX designers aren’t magic. We don’t automatically know what problems real people will have, or instinctively know how to fix them. If we did, we wouldn’t need to usability test! UX designers have the curse of knowledge: we don’t look at a product the same way other people look at it. And if it’s a product for a specific group of people, designers may lack the domain expertise to give you decent feedback about what might be confusing.
That said, after you’ve run usability tests with your target market, other UX designers can often be very helpful in suggesting other ways to approach the problems you’ve identified. They may have seen other users have similar problems and have ideas about what worked to fix those problems.
But you still have to test with users.
Question: What’s the approach for testing content heavy, inspirational websites? Think TED.com.
Laura: The same approach as for everything else: figure out what you’re actually testing and then come up with something that will help you learn what you need to know. Are you trying to test whether the website is usable? Inspirational? Understandable? Useful? Those are all very different types of experiments to run.
The type of product is not what determines how you test. You test based on what you want to learn.
Question: I’m wondering if we’re too biased to effectively help others.
Laura: Depends on what you mean by “we” and “help.” Humans are extremely biased. It’s kinda our thing. There are ways to acknowledge and counter that bias to some degree, although I don’t think that most of us will ever even come close to getting rid of it entirely.
In general, the more time you spend listening to others and the less time you spend thinking about you and how you would do things, the less biased you’ll be.
Question: I often hear “I would have designed it differently.”
Laura: Yeah. We all do!
The best follow up is, “I see. Why would you do it that way? What problem would that solve for you?”
Frankly, I don’t care if they want a big red button to push that says “push me” on it. I want to know why they think that button will help them. Think of solution requests as a jumping off point to understand what their problem is.
Steve: We in the usability racket like to say “users aren’t designers.” And the truth is, they aren’t. Every once in a while (once in a great while, actually), a test participant will make a terrific suggestion. You know when this has happened, because everyone in the observation room slaps their forehead simultaneously and says “Why didn’t we think of that?” But 98% of the time, design suggestions from participants aren’t great. I always recommend listening politely while they make the suggestion so they know you’re paying attention to them. Then at the end, after they’re finished with the tasks and you’ve moved on to open-ended probing, you can remind them of their idea and ask them to explain how it would work. In my experience, they’ll almost always end up by saying some variant of “But actually, I probably wouldn’t use [the new version] because [reason]. It’s better the way it is.”
Question: Is there a point of diminishing returns with too little user research? That is, is there a fear that a little learning can be a dangerous thing, leading to a false sense of UX security?
Laura: Sure. Talking to one person and then immediately diving in and solving all of their problems can certainly cause trouble, especially if that person is an outlier. You don’t necessarily want to design your product around exactly one user. On the other hand, you can also spend an awful lot of time trying to “prove” yourself right.
I wrote a blog post on Predictive Personas that is probably applicable here. You can use this for usability testing just as easily as for predicting who will use your product. The idea is to run some sessions, try to spot patterns, predict what you’re going to see next, and then see if your predictions hold. Once you’re making predictions accurately, you’ll have a good idea of the recurring problems in your product and can more confidently start fixing them.
Laura Klein is a Lean UX and Research expert in Silicon Valley who teaches companies how to get to know their users and build products people will love. She’s a Rosenfeld Media expert and the author of Build Better Products and UX for Lean Startups (O’Reilly). Follow her on Twitter or subscribe to her blog and podcast at Users Know.
Steve Krug is best known as the author of Don’t Make Me Think: A Common Sense Approach to Web Usability and Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems. Steve spends most of his time teaching usability workshops and consulting. Follow him on Twitter.
Buy on-demand access to the User Research For Everyone program.