Sample Chapter: Interviewing Users (2nd edition)

This is a sample chapter from Steve Portigal’s book Interviewing Users: How to Uncover Compelling Insights (2nd edition). 2023, Rosenfeld Media.

Chapter 1

Interviewing Addresses a Business Need

A few years back, I worked with a company that had the notion to turn a commodity safety product—the hard hat—into a premium product. They would incorporate advanced features and then charge a premium price. I don’t actually know where their idea came from, but one can imagine that they had seen all kinds of everyday products be reformulated to generate higher profits (think about Starbucks, gourmet salt, smartphones, Vitamix blenders, or horsehair mattresses). They sketched out a set of features that would improve the functional performance of the hard hat.

When I interviewed people who wore hard hats for work, I didn’t ask them to evaluate the features my client had been considering. Instead, I asked them generally about their work, so I was able to uncover insight into the most significant aspects of their experience. What they were concerned about fell into an entirely different category. They talked about leaving the job site to get lunch (for example) and how awkward they felt among other people while dressed in their prominent, brightly colored safety equipment. Indeed, makers of other safety equipment like bicycling helmets, safety footwear, and safety goggles had already redesigned their products to echo fashionable caps, boots, and sunglasses, suggesting this concern was being felt broadly.

If there were to be a TEDx version of this story, then this team would have become very excited about this new and surprising area of opportunity, despite it being different from what they had already invested in (financially, intellectually, and even emotionally). They’d have torn up those plans, drawn up new ones, and eventually raked in the dough. But you know that isn’t really how these things play out! In these interviews, we uncovered a significant business risk in pursuing their existing idea, so they stopped product development for their hard hat with extra functionality. On the other hand, these interviews identified another opportunity: to produce a hard hat that would address the issue of social performance. That wouldn’t have fit with their organization’s technical or cultural competencies, so they chose to avoid the business risk of developing a fashionable hard hat. What we learned from these interviews informed their decision not to bring any product to market.

When you get down to it, that’s what we do as user researchers: We gather information about users in order to inform critical decisions about design, product, or other parts of the business or organization. To do this means that we go to people’s homes, their offices, wherever their context is. We ask what they do. We ask them to show us. We get stories and long answers where we don’t always know what the point is. We want them to explain everything about their world to us. People may not have a ready answer as to why they do something, but we have to listen for why. We have to ask follow-up questions and probe and infer to try to understand, for ourselves, just why something is happening the way it is. We make sense of this disparate information and show the way to act on what we’ve learned.

Interviewing is a specific method in user research to accomplish these goals. (User research is also referred to by other terms such as design research, user experience research, or UXR.) This book is about interviewing users (also referred to variously as site visits, contextual research, or ethnographic research) as a method to conduct user research, so beyond an in-depth examination of best practices for interviewing users, we’ll also consider user research in general. And we’ll also look at other user research methods that can be integrated and combined with interviews.

Nomenclature aside, the broad outline for interviewing users is:

  • Thoughtfully planning out objectives, who we’ll interview, and how we’ll go about it
  • Deeply studying people, ideally in their context
  • Exploring not only their behaviors, but also the meaning behind those behaviors
  • Making sense of the data using inference, interpretation, analysis, and synthesis
  • Using those insights to point toward a design, service, product, or other solution

Learning About Users to Inform Decisions

Typically, when you interview people, you visit your users in their homes, their offices, their cars, their parks, and so on. But this isn’t always the case. When planning a project, ask yourself if it’s more insightful to bring participants in to see your stuff (say, prototypes you’ve set up in a facility meeting room) than it is for you to go out and see their stuff. Overall, your objective is to learn something profoundly new. (There are situations where quickly obtained, albeit shallow, information is beneficial, but that’s not what we’ll focus on here.)

Note: Every organization can benefit from research

Sometimes, companies declare that they don’t need to do user research. What they typically mean is that they don’t need to do generative user research (learning about people in order to identify product opportunities), but they are probably doing evaluative user research (testing the thing they are developing to make sure it’s usable by people). Denying the value of generative research (because, as they might say, people don’t know what they want and it’s the company’s mission to invent that anyway) belies a poor understanding of how user research is conducted and applied. For one thing, it’s not simply asking people “what they want.”

For another, it’s not credible that they possess an innate talent for building stuff that people love. Even if they themselves are users of the snowboards, photography equipment, or mixing gear that they make, they will choose and use those solutions differently than someone who is not inside their industry. They will be blind to differences in income, access, use cases, and so on. And they will have difficulty expanding their offering in an innovative way, because they are stuck in this model of being the user.

Often, the stated goal of interviewing users is to uncover their pain points. This approach mistakenly characterizes research with users as a sort of foraging activity, where if you take the effort to leave your office and enter some environment where users congregate, you’ll be headed home with a heap of fresh needs. You can observe that people are struggling with X and frustrated by Y, so all you have to do is fix X and Y, and then all will be good.

Although this may be better than nothing, a lot of important information gets left behind. Insights don’t simply leap out at you. You need to work hard and dig for them, which takes planning and deliberation. Further complicating the foraging model is that what people in problem-solving professions (such as designers and engineers) see as “pain points” aren’t necessarily that painful for people. The term satisficing, coined by Herbert Simon in 1956 (combining satisfy and suffice), refers to people’s tolerance—if not overall embracing—of “good-enough” solutions.

Once while settling in for a long flight, I noticed that a passenger in the row in front of me had fashioned a crude sling for their iPhone using the plastic bag that the airplane blanket came in. They had twisted the bag into a makeshift “rope,” which they looped around the body of the iPhone and then jammed behind the latch that kept the tray table closed. They now had a (slightly askew) solution for watching their own device for the duration of the flight. Initially, I was critical of the ugly, inelegant result. But eventually, I realized it was beautiful in its own way—it was fashioned from the materials they had on hand. Since then, I’ve seen other examples of passengers making their own viewing solutions, and I’ve made a point of taking a picture. (See Figure 1.1 where the passenger has made an iPhone viewer out of the airline’s credit card brochure and some beverage napkins.)

Figure 1.1 An airplane passenger viewing stand, made from the materials found on board.

Contrast these good-enough solutions with a more purpose-built accessory (see Figure 1.2): the passenger would have to have known about it, purchased it, remembered to bring it, and carried it with them. Of course, the ideal solution—not just the raw materials—would be provided by the airline itself (see Figure 1.3).

Figure 1.2 TabletHookz is an accessory designed specifically to hold a mobile device in an airplane seat back for hands-free inflight viewing.

Figure 1.3 A device holder built into the airplane seat-back allows passengers to watch videos on their own devices.

There have long been spaces online that exhibit samples of makeshift solutions. They are meant to amuse, but usually with a good measure of judgment and schadenfreude (this is the internet after all!). A good exercise for a user researcher is to seek out those images and reflect on what aspects of these solutions are successful for the people who implemented them.

I encounter satisficing in every research project: a computer desktop with an unfiled document icon in each element of the grid, an overflowing drawer of mismatched food container lids, a not-yet-unwrapped car manual, and tangled, too-short cables connecting products are all “good-enough” examples of satisficing. In other words, people find the pain of this putative problem to be less acute than the effort required by them to solve it. What you observe as a need may actually be something that your customer is perfectly tolerant of. Would they like all their food in containers matched with the right lids? Of course. But are they going to make much effort to accomplish that? Probably not.

Beyond simply gathering data, interviewing customers is tremendous for driving reframes, which are crucial shifts in perspective that flip an initial problem on its head. These new frameworks, which come from rigorous analysis and synthesis of your data, are critical. They can point the way to significant, previously unrealized possibilities for design and innovation. Even if innovation (whatever you consider that to be) isn’t your goal, these frames also help you understand where (and why) your solutions will likely fail and where they will hopefully succeed. To that end, you can (and should!) interview users at different points in the development process. Here are some situations where interviewing can be valuable:

  • As a way to identify new opportunities before you know what could be designed.
  • To refine design hypotheses when you have some ideas about what will be designed.
  • To redesign and relaunch existing products and services when you have history in the marketplace.

From My Perspective: Gaining Insight vs. Persuading the Organization

While doing ethnographic research in Japan, I accompanied my clients as they conducted an unrelated study. They brought users into a facility and showed them elegantly designed forms for printer ink cartridges. They were smooth, teardrop shapes that were shiny and coated with the color of the ink. They also showed users the current ink cartridge design: black blocks with text-heavy stickers.

Can you guess what the research revealed? Of course. People loved the new designs, exclaiming enthusiastically and caressing the shapes. Regardless of method, there was no insight to be gained here. I’ve gone back and forth about whether this was good research or bad research. It didn’t reveal new information, but it provided tangible evidence for the organization. This team’s approach suggested that there were other issues with the design process (perhaps that leaders wouldn’t make decisions without supporting data from users), and while this research might have been the best way to move their process forward, it wasn’t the ideal use of a research study.

A High-Level Research Plan

The operational aspects of interviewing users will be covered in the next chapter (“Research Logistics”), but here let’s consider the three (plus one special guest) elements of a high-level plan. And by “plan,” it’s less about how you document the plan and more about the thinking that makes for an effective research project. A plan should summarize the project as you understand it at the time, including the business problem, the research questions, and the agreed-upon research method. Reviewing this plan with your team will ensure that you are aligned, with an opportunity to clarify, reprioritize, or expand the work.

Note: The answer to a never-ending story

This book defaults to considering research as projects that have a beginning and an ending. But there are other models. Rolling research is a way of providing designers with regular access to participants who can provide feedback on whatever they are working on. Typically, a small number of participants are scheduled on a weekly basis. Designers and researchers determine earlier in that week what they’ll show to the participants, and what questions they’ll ask. Continuous discovery involves the entire product team, through the entire development cycle, and includes designing, prototyping, and getting feedback from users.

Even if you are interviewing users through one of these approaches, most of the guidance in this book (for instance, Chapter 6, “The Intricacies of Asking Questions”) will apply directly.

The Business Problem

The business problem (or business objective) is what your organization— the producer of products, services, solutions, and so on—is faced with, as shown in Table 1.1.

Table 1.1 Business Problem Examples

Business Problem

    • We’re sunsetting a legacy product and replacing it with one that uses a different technology.
    • Our new product didn’t do as well as we had hoped.
    • We want to move into a new market.
    • A new competitor is taking some of our market share.
    • We’re roadmapping what new features we’ll be developing for our current service.
    • Product feedback is strong but repeat orders are low.

To get an in-depth understanding of the business problem, you’ll probably want to talk with your stakeholders. You’ll learn more about this topic in Chapters 2 and 10, “Making an Impact with Your Research.”

From My Perspective: Uncover Misalignment Early

I once worked with a client who made a digital platform used for particularly complex transactions. They already supported the buyers, sellers, and their respective brokers, and now were looking at opportunities to incorporate the other entities (known as “third parties”) in these transactions. This research was a strategic priority, traceable to goals assigned from on high.

To kick off the project, we scheduled two activities (loosely based on the Questions Workshop) with different groups of stakeholders. We set up a spreadsheet to capture decisions they were planning to make and what information about these other users would help in making those decisions. In the first workshop, the main project sponsor halted the proceeding to ask “Now, what do we mean by ‘third parties?’” I assumed they knew, and they assumed I knew! I was surprised, but glad they weren’t afraid to ask a “dumb” question. It was a disconnect, but an important one to uncover, and at the right time. We aligned on a definition and then moved forward with the questions. In the second workshop, a stakeholder kicked off the session by telling us “Just so you know, we’re already coding a solution.” Again, I was surprised, but this was very helpful to understand at the outset rather than later.

The Research Question

The research question identifies the information you need from users to provide guidance about the business problem. Whereas the business problem looks inward, the research question looks outward—in other words, the business problem is about you and the research question is about your users (see Table 1.2).

Sometimes the research questions are clustered and nested. For example, the business problem “We are investing heavily in social media and want our customers to promote our services more” might lead to this set of research questions.

  • What do people’s social networks look like? What tools do they use and how are their networks structured?
  • How are purchase decisions driven by the structure of people’s social network (on and offline)?
  • How do people leverage social networks for shopping and other kinds of decision-making? Who has influence with them currently?
  • Who among their social network (and beyond) are trusted sources of information for various decisions and purchases (particularly within the client’s area of business)?

Table 1.2 Research Question Examples

To further inform the research questions, you should review previous research reports, existing products, and in-development prototypes. Look for relevant research findings, explicitly stated assumptions or hypotheses, and implicit hypotheses in the decisions that have already been made.

Note: Find the specificity that’s right for you

When I ask teams to work on articulating their business problems and research questions, they often find it surprisingly challenging, but also enlightening. There won’t be a singular perfect answer, but the process of considering the specifics is valuable for developing a deeper intention and focus for the research. That process might include going back and forth on different variations and wordings. It might not produce a perfectly structured 1:1 relationship between the business problem and the research question. If you practice with a colleague, before long, you’ll have a feel for the right level of granularity and structure for you.

You should also conduct interviews with your stakeholders—they are often consumers of the research findings who are less likely to be involved in the day-to-day study. I typically aim for 6–8 stakeholders, although some clients ask for twice that amount. These are one-on-one conversations that run between 30 and 60 minutes and are used to dig deeper into objectives and set the stage for working collaboratively. Many of the interview techniques in this book (such as what I’ll cover in Chapter 5, “Best Practices for Interviewing”) apply to interviewing stakeholders, although you may find it less comfortable to ask “dumb” questions if you feel your credibility could be at stake. You should ask the stakeholders about the following:

  • Their history with the organization and the research topic
  • Business objectives for the project and specific questions the research should answer
  • Current beliefs about the customer, the user, and the proposed solution
  • Organizational or other barriers to be mindful of
  • Concerns or uncertainty around the method

Even though what you learn will undoubtedly inform all of the activities throughout the project, the immediate output is the research questions—articulating what you want to learn from the interviews.

Note: Get immersed in your research area

With the overall goal of trying to understand the problem space you’re exploring, gathering the language that is used to talk about that problem space, and planning what you’re going to ask your research participants, there are other activities that you can do at this point. Secondary research (also called desk research) gives you a sense of current and historical thinking through what’s been written about your topic already. Look at the mainstream press, the business press, academic papers, internal or external corporate reports, blogs, online forums, newsletters, books, and so on. Identify industry, academic, or other experts and interview them. You may also seek out a few experiences that will give you some perspective on the topic. Look at similar products and how they are being sold online or in retail. Try an experience yourself.

For a project that sought to understand how our client could facilitate a more emotional connection with their customers, we visited a handful of environments that had reputations for successfully bonding with their users (an Apple store; Powell’s Books in Portland, OR; the dog-friendly Fort Funston in San Francisco; a Wawa convenience store in Philadelphia; and Rainbow Grocery in San Francisco). We observed each environment and the people who were there, and hypothesized about what factors were contributing to the relationship. This led to topics to explore in the interviews and examples to compare and contrast with during the analysis stage.

The Research Method

The research method is how you will gather the information needed to answer the research question. Here are a few examples of user research methods (other than interviewing):

  • Usability testing: Typically done in a controlled environment, such as a lab, users interact with a product (or a prototype or simulation), and various factors (time to complete a task, error rate, preference for alternate solutions) are measured.
  • A/B testing: This type of testing compares the effectiveness of two different versions of the same design (e.g., advertisement, website landing page) by launching them both under similar circumstances.
  • Quantitative survey: A questionnaire, primarily using closed-ended questions, is distributed to a larger sample in order to obtain statistically significant results.
  • Web analytics: Measurement and analysis of various data points are obtained from Web servers, tracking cookies, and so on. Aggregated over a large number of users, Web analytics can highlight patterns in navigation, user types, the impact of day and time on usage, and so on.
  • Focus group: This is a moderated discussion with 4 to 12 participants in a research facility, often used to explore preferences (and the reasons for those preferences) among different solutions.
  • Central location test: In a market research facility, groups of 15 to 50 people watch a demo and complete a survey to measure their grasp of the concept, the appeal of various features, the desirability of the product, and so on.

Of course, researchers make up new methods regularly. (See more about methods in Chapter 3, “Contextual Methods—More Than Just Asking Questions.”)

Selecting an Appropriate Method

In the aptly named article “When to Use Which User-Experience Research Methods,” Christian Rohrer organizes some of the more common methods into a framework. (Does the method look at people’s behaviors or their attitudes? Is the method qualitative or quantitative? Does the method look at someone’s use of a product?) (See Figure 1.4.) The article provides guidance about which methods are best suited for different contexts. For example, if the goal of the research is to find new directions and opportunities, then the best methods (according to Rohrer) include diary studies, interviews, surveys, participatory design, and concept testing.

Figure 1.4 Christian Rohrer’s “Landscape” organizes user research methods by behavior/attitude and quantitative/qualitative.

Note: Market research and user research

In some companies, market research is a separate department from user research and may even report to different leaders. It also seems like a different career path; people find their way to either discipline from different backgrounds. But what’s the difference? It’s common—but wildly inaccurate—to attempt to distinguish the two by the methods used (market research does focus groups and surveys; user research does interviews and usability testing) or the objectives (market research looks at attitudes and user research observes behavior). Figure 1.4 invites us to consider a bigger picture—a broad set of methods and objectives that no one discipline “owns” exclusively.

Taking a different approach, Sam Ladner developed a guide shown in Figure 1.5 that recommends a research method based on where your product is in its lifecycle.

Figure 1.5 Sam Ladner organizes user research methods by the maturity stage of the product’s sales.

Combining User Research Methods

Interviewing can be used in combination with other techniques. The term mixed methods refers to combining multiple methods (typically qualitative and quantitative) together in one study. I’ve used an exploratory interviewing study to identify topics for a global quantitative segmentation study. I’ve combined a Central Location Test (where larger groups watched a demo in a single location such as a research facility and filled out a survey) in parallel with in-home interviews to get a deeper understanding of the potential for the product. I’ve also mixed together different qualitative activities (say, a larger sample for a diary study, and then follow-up interviews with a subset of participants). It can be valuable to combine a set of approaches and get the advantages of each.

Note: Quantitative user experience research

Kitty Z Xu, a quant user experience researcher, explains how this emerging discipline uses two kinds of data: sentimental (such as feelings, perceptions and understanding) from surveys and behavioral (from logging data, usage metrics and more). Researchers in quant UXR make use of skills from a variety of fields, including user research, survey science, data science, and analytics. While interviewing (or qualitative user experience research) looks for insights in a small sample, quant UXR builds insights at scale—meaning collecting hundreds or thousands of samples that are representative of a larger population.

Choosing Interviewing

Interviewing isn’t the right approach for every problem. Because it favors depth over sample size, use interviewing when you don’t need statistically significant data. Being semi-structured, each interview will be unique and reveal something new about what you’re trying to understand (but it can be challenging to objectively tally data points across the sample). Although you are ideally interviewing in context, you are now a participant in that environment. Sitting with users while they show you how they use a website isn’t meant to be naturalistic (in contrast to a tool that intercepts and observes visitors to that website, capturing their actual behavior).

People are not good at predicting their future behavior, especially not for brand-new, hypothetical situations (see “Manage Bias” in Chapter 4). There are bad questions and bad ways of asking questions (see Chapters 6 and 7), but you should be skeptical of broadly dogmatic interviewing advice that warns you never to ask about future behavior, like “How much would you pay for this?” You can definitely ask the question, but it’s important to understand what you can and can’t do with the answer. You won’t get a number that is helpful for your pricing strategy, but you can learn about their rationale for that number or hear a thoughtful reflection about perceived value. Your questions in an interview can reveal mental models that exist today, which will be insightful for the decisions you have to make, but the literal responses about future behavior probably won’t be accurate.

Participant Questions

This isn’t really part of the high-level plan, but it’s included here because discussion about the research question sometimes drifts into specific questions that people imagine asking participants. I led a workshop with creative entrepreneurs who struggled to articulate what they wanted to learn from their interviews but were brimming over with questions they wanted to ask. Because they really were unable to come up with research questions, our workaround was to build out the participant questions and then step back and ask what those questions were collectively in service of (in other words, the research question).

You may generate (or collect) some participant questions during this high-level planning process. Unless they are helpful in getting you unstuck on your research questions, just file them away for now. In Chapter 2, we’ll focus more on the questions we plan to ask.

Aligning on the Research Plan

Since you’re seeing this in a book, where the different elements of the plan (business problem, research question, and research method) are presented in sequence, you might reasonably conclude that you should also proceed linearly. First, get clarity on your business challenge, then uncover your research questions, and then choose the best method to answer those questions! Sounds good?

Ah, but it doesn’t usually work that way. Depending on how a project is initiated (a prospective client generates a Request for Proposal, a stakeholder sends a request by email, and so on), it may be more or less based on one of the three. You may be asked “Here’s the situation—how can research help us?” Or “We need to learn such-and-such about these users.” Or “Can we complete this method of research within this time frame?” But no matter how the conversation begins, it’s up to you to fill in the rest of the pieces.

If you’re given a research question, ask why that information is needed. If you’re given a research method, ask what they hope to learn, and then ask why that information is needed. Sometimes, the people you’re going to work with haven’t thought about this, but often it’s just implicit and your questions will help make it explicit. You want to make sure that not only are you and the clients or stakeholders aligned, but crucially that these different pieces are in alignment: the method has to produce the information that is needed, and the information that is needed should be in support of the actions the team plans to take.

The people who need the results of the research don’t necessarily understand the range of methods and when to use them. Don’t agree to use a prescribed method that doesn’t align with the necessary results, because the blame will fall to you at the end when you can’t deliver. Facilitating the alignment between challenge, question, and method is part of the expertise a researcher brings. People who do research should seek an experienced researcher to advise on these high-level aspects of the research plan.

To Interview Well, One Must Study

Much of the technique of interviewing is based on one of your earliest developmental skills: asking questions (see Figure 1.6). You all know how to ask questions, but if you asked questions in interviews the way you ask questions in typical interactions, you would fall short. In a conversational setting, you are perhaps striving to talk at least 50 percent of the time, and mostly to talk about yourselves. But interviewing is not a social conversation. Falling back on your social defaults is going to get you into trouble!

Interviewing users involves a special set of skills. It takes work to develop these skills. The fact that it looks like an everyday act can actually make it harder to learn how to conduct a good interview because it’s easy to take false refuge in existing conversational approaches. Developing your interviewing skills is different from developing a technical skill (say, milkshake-machine recalibration) because you had nothing to fall back on when learning about milkshake machines. With interviewing, you may need to learn how to override something you already know. Think of other professionals who use verbal inquiry to succeed in their work: whether it’s a police officer interrogating a suspect, a lawyer cross-examining an opposing witness, or a reference librarian helping a patron, the verbal exchange is a deliberate, learned specialty that goes beyond what happens in everyday conversation. For you as an interviewer, it’s the same thing.

We’ll revisit improving as an interviewer in Chapter 7, “Better Interviews.”


Figure 1.6 Childhood is marked by frequent, inevitable question-asking.

The Impact of Interviewing

Interviewing creates a shared bonding experience, often a galvanizing one, for the product development team (which can include researchers, designers, engineers, marketers, product management, and beyond). In addition to the information you learn from people and the inspiration you gain from meeting them, there’s a whole other set of transformations you go through. You might call it empathy—say a more specific understanding of the experience and emotions of the customer—which might even be as simple as seeing “the user” or “the customer” as a real live person in all their glorious complexity. But what happens when people develop empathy for a series of individuals they might meet in interviews? They experience an increase in their overall capacity for empathy.

This evolution in how individual team members see themselves, their connection to their colleagues, their design work, and the world around them starts to drive shifts in the organizational culture (see Figure 1.7). This capacity for empathy is not sufficient to change a culture, but it is necessary.


Figure 1.7 Team experiences that are challenging and out-of-the-ordinary create goodwill and a common sense of purpose.

More tactically, these enlightened folks are better advocates for customers and better champions for the findings and implications of what has been learned in interviews.

The wonderful thing about these impacts is that they come for free (or nearly). Being deliberate in your efforts to interview users will pay tremendous dividends for your products, as well as the people who produce them.

Scope Growth

In a Twitter thread, Mollie Ruskin wrote about a civic design project, saying,

While the research was “about” operations and staff capacity and a complex process for answering heaps of emails, I quickly found we were stumbling over a set of questions fundamental to the function of our representative democracy.

So, as much as you work to identify and align on your business problem and your research questions, that alignment is limited by the fact that the only information you have comes from before you have done any research. Mollie reminds us that our understanding of the problem (and the opportunity) can change.

The worst thing a research team can do, however, is to come back to the project sponsors and say, "Welp, we know we were looking at operations and capacity, but really the issue is the underpinnings of our democracy." Ideally, the broader team is collaborative enough to see these reframes together and decide what to do about them. When I'm in this situation, I try to address the initial scope ("Here's what we know about the gaps in operations and how they impact staff capacity") and present the emergent topic as one that builds on the original goals ("and the real issue that connects these infrastructure decisions is the very nature of our democratic processes"). If the organization isn't ready (yet) to address the larger insight (and often they won't be; just look at the size of the shift in Mollie's example!), at least they can move forward on their original problem, and you've planted the seed for a future effort. This probably won't be the last time the underlying issue emerges, and at some point, it may not be possible to ignore it any longer.

The Last Word

It’s become increasingly common, perhaps even required, for companies to include user research in their design and development process. Among many different approaches to user research, interviewing (by whatever name you want to call it) is a deep dive into the lives of customers.

  • Interviewing can be used in combination with other techniques, such as identifying key themes through interviews and then validating them quantitatively in a subsequent study.
  • At a distance, interviewing looks just like the everyday act of talking to people, but interviewing well is a real skill that takes work to develop.
  • Interviewing can reveal new “frames” or models that flip the problem on its head. These new ways of looking at the problem are crucial to identifying new, innovative opportunities.
  • Interviewing can be used to help identify what could be designed, to help refine hypotheses about a possible solution that is being considered, or to guide the redesign of an existing product that is already in the marketplace.
  • Teams who share the experience of meeting their users are enlightened, aligned, and more empathetic.
