Scaling Accessibility Through Design Systems
Incorporating accessibility can seem like a daunting task, especially for products that have already been released. Alexis Lucio, Senior Accessibility Lead at Splunk, will share her journey in making accessibility a first-class citizen within the Splunk Design System. Topics include: how to advocate for accessibility, how to use use cases to streamline design and development, how to incorporate user input, and ideas for collaborating with cross-functional partners.
DesignOps: A Conduit for Inclusion
Accessibility and inclusion are two extremely important topics for every organization, and certainly of paramount concern to the design function. What role can DesignOps play in advancing these initiatives? As the new Director of Design Operations, Laura sees numerous opportunities across the business – in research, communications, tooling, and more. She is leading the way, forging a path, and making it easy for other teams to jump aboard.
Takeaways:
How Design Operations can directly and indirectly improve the experience of prospective employees, and also of the customers on the receiving end of our products, by helping everyone design for (all) humans.
Distributed DesignOps Management (Videoconference)
- Selecting meeting/project collaboration tools
- Maintaining healthy team culture
- Inclusivity
- Managing critical meetings
- Dealing with feelings of isolation
- Consistent documentation and processes
For Prospective Authors
Should you write a book?
Writing a book is a difficult, time-consuming, and occasionally painful undertaking. It’s also a gift: it’s rare to have the opportunity, with sufficient time and ample support, to develop an idea that you care deeply about.
Before you proceed, ask yourself these questions:
- Do I really want to spend most of a year (or more) of my evenings and weekends on such a challenging pursuit?
- Is a book the best format to bring my idea to life, especially when other formats may play better to both my strengths as a communicator and to the topic itself?
If your answer is ‘no’ to either question, writing a book is probably not your best option.
Should you write a Rosenfeld book?
We’re glad that Rosenfeld Media is on your radar!
Founded in 2005, Rosenfeld Media is a small, independent publishing house that works closely with its authors to develop and promote books on user experience design, product design, and related topics. We typically publish no more than ten books annually, and we lavish each with tender loving care throughout the processes of writing, production, and marketing. Please note that we are not a hybrid publisher—we’re not “pay for play”.
There are two things you should know before you propose a book to us:
- Approach: We’re unusually collaborative—and you will need to be too. That means we don’t accept manuscripts that have already been written, or that you don’t want to work together to develop. Our approach is to work with you from proposal to publication. That’s true of marketing your book too; book marketing works best when author and publisher partner.
- Audience: Our core audience is user experience designers, researchers, writers, and product designers.
If you’ve got a different audience in mind, or don’t feel the need to collaborate on your writing, or expect your publisher to handle the bulk of book marketing, we are not the right publisher for you.
What Rosenfeld is looking to publish
Most of our books cover user experience design principles, tools, and methods. UX is a broad field that synthesizes ideas from many disciplines—from architecture to graphic design to human factors to librarianship—with the goal of helping people use and enjoy their experiences with all kinds of products and services.
- Our UX books teach craft and actionable skills to designers, researchers, writers, and other people who care about delivering great user experiences. These books tend toward practical advice: a rule of thumb is 25% “what and why” content, and 75% “how” content.
- We also publish another imprint, Two Waves Books, that explores the convergence of business and design. These titles cover topics that may have originated in the design world or benefit in special ways from a designer’s perspective, but are important to related audiences, like product managers, data scientists, and customer experience professionals.
Regardless of imprint, our house voice and tone emphasize warmth and accessibility as much as intelligence. Through the use of plain language and a conversational tone, our authors serve as trusted guides and facilitators as much as authoritative experts.
The ideal length of our books is 45,000-75,000 words (150-250 pages). Our authors use illustrations, examples, and stories to show as much as tell. And we prefer topics that are “evergreen”: while some of their examples might be especially current, each book’s principles and frameworks should stand the test of at least a few years’ time.
What Rosenfeld avoids publishing
Never say never, but we generally avoid publishing books that are:
- On topics we’ve addressed in a recent book. We avoid covering the same topics twice in the space of a few years; it’s simply not fair to our authors. Please review what we’ve published or are planning to publish and make sure we don’t already offer a recent book on your topic.
- Already written. Most publishers love receiving a manuscript that’s “ready to go.” We’re not one of those publishers. We prefer to pool your ideas with ours about the topic, its audience, your research, and the writing and production of the book. If you’ve already written a manuscript, it’s actually more work for us to produce your book.
- Compilations written by multiple authors. Regardless of how good you are at herding cats, compilations often suffer from uneven coverage, voice, tone, and quality.
- Repackaged blog entries. “Writing short” can be a good way to test your ideas and content in public, and it can get you part of the way toward a full manuscript. But a book is more than the sum of its parts; you’ll still need to make significant changes to your individual entries before they work together as a book.
- Based on a proprietary process or method. You or your company may be promoting a unique and novel approach. But is it broadly relevant and generalizable for readers?
Other things to know about Rosenfeld
- Our philosophy. Whether we’re co-creating books, workshops, conference programs or presentations, our approach comes down to three words: inclusion, collaboration, and iteration.
- Research equals promotion. Collaboration goes beyond you and us. We’ll work together to engage influencers, subject matter experts, and the broader community with your ideas with two goals in mind: to help you improve your content, and to give them a sense of and stake in the final outcome. The more people who feel a part of your book’s development, the more will support and promote it once it launches.
- We invest, you invest. We assign a developmental editor to each book. They work with you as a writing coach/project manager to get you through the process. And our marketing team works directly with you from development through launch. Both of these things are rare in our industry. In return, we expect you to be at least an equal partner in creating and launching your book—not just writing it but working hard to promote it.
- Speak with, not to readers. We want the world to be a better place thanks to your ideas and expertise. So we work with our authors to avoid jargon and other forms of poor communication that can get in the way of our readers learning from our books.
- We don’t play favorites. Our books receive an equal amount of editorial and marketing support, and all are consistently produced to meet our uncommonly high quality standards. If we’ve signed you, we care about you as much as any other author.
- Book formats. Our paperbacks are 6” x 9” (15.24cm x 22.86cm), and are printed with four-color covers and, in most cases, interiors. Our ebooks come in ePUB and DAISY DRM-free digital formats. We also market foreign language and audiobook rights to your book to major publishers in those channels.
- Business terms. We pay royalties twice annually on net sales (the money left over after production and printing costs are covered), as well as on income from audiobooks and translations. Our royalty rates are typically higher than the industry standard. We don’t pay advances. You own the copyright to your book, and license it to us.
- Distribution. We sell directly via our website and fulfill orders globally. We also sell via Amazon and other major retailers and wholesalers that do business with Ingram Publisher Services, the world’s largest book distributor.
- Covers. We’re glad to have your input, but the final design is up to us. We’ve worked with the acclaimed design team from The Heads of State to develop each of our covers.
Ready to propose a book?
If you got this far, you’re probably ready to propose your idea for a book. Here’s what to do next:
If you’re at the idea stage:
Send us a pre-proposal that’s no more than one page, and includes:
- Working title: Ideally both descriptive and engaging.
- Elevator pitch: No more than 100 words that address both the challenge or problem readers need your help addressing, and what your solution is.
- Primary audience: Describe who the book must reach. Please be specific; “everyone” or “all humans who design” is too general. Also list any secondary audiences you’d love to reach if possible.
- 3-5 take-away bullet points: Explain how your idea will benefit your audience. What will they be able to do after reading your book?
- Your objective: What does success look like once your book is published?
If you’re further along with your proposal:
Send us a complete proposal that includes everything above, plus:
- Table of contents: Include a sentence or two that describes the goal of each chapter, some bullets describing what each chapter will contain, and how many pages (at 300 words/page) you estimate for each chapter. Also let us know the total number of pages you estimate the book will be. (Our sweet spot is 150-250 pages.)
- Writing sample: 2-3 pages, ideally from the proposed book; we’re also happy to read other samples of your professional writing.
- Competitive review: What other books will your book compete with, and how will your book be different from those titles?
- About you: Why are you the right person to write this book?
- About your ability to market your book: How will you market your book? Are you well-known in the field? Do you have a strong following on social media, a newsletter, or something else?
When you’re ready, email us your document. As you might imagine, we receive a lot of proposals. We do our best to review them at least once per quarter; please keep that in mind if you don’t hear from us right away. And if you decide to go in another direction, please let us know.
Thanks and good luck!
Frequently Asked Questions
We’ve identified some common questions about web form design. The short answers are provided here; longer ones are available, of course, in the book.
- Why does Web form design matter?
Forms enable commerce, communities, and productivity on the Web to thrive. If you are in online retail, your goal is to sell things, but standing between your products and your customers is a checkout form. If you are developing social software, your goal is to grow your community, but standing between you and new community members is a form. If you’ve built a productivity-based Web application, forms enable the key interactions that let people create and manage content.
- How should I organize my Web form—within one Web page or across several?
Who is filling in the form, and why? Answering this up front allows us to think about our forms as a deliberate conversation instead of as inputs for a database. When you approach forms as a conversation, natural breaks will emerge between topics. When these distinct topics are short enough to fit into a few sections, a single Web page will probably work best to organize them. When each section begins to run long, multiple Web pages may be required to break up the conversation into meaningful, understandable topics.
- If my form spans several Web pages, do I tell people what page they’re on?
When the questions that need to be answered before a Web form is complete are spread across multiple Web pages, you may want to include an overview of the number of Web pages involved (scope), an indication of what page you are on (position), and a way to save and return to your progress (status). Though closely integrated, these three progress indicators perform different functions.
- Should I top-, right-, or left-align the labels for input fields?
When you are trying to reduce completion times, or if you need flexible label lengths for localization, consider top-aligned labels. When you have similar goals but vertical screen real estate is constrained, consider right-aligned labels. When your form requires people to scan labels to learn what’s required, or to answer a few specific questions out of many, consider left-aligned labels.
- How are smart defaults used in Web forms?
Smart defaults can help people answer questions by putting in place default selections that serve the interests of most people. There are many opportunities within Web forms to use the power of smart defaults to reduce the number of choices people have to make and thereby expedite form completion.
- When should I include help text on my forms?
You should consider adding help text when: forms ask for unfamiliar data; people may question why they are being asked for specific data; people may be concerned about the security or privacy of their data; there are recommended ways of providing data; or certain data fields are optional or required when the bulk of the form is not.
- How should I indicate required input fields?
If most of the inputs on a form are required, indicate the few that are optional. If most of the inputs on a form are optional, indicate the few that are required. When indicating which form fields are required or optional, text is the clearest option, though the * symbol is relatively well understood to mean required.
- What’s the difference between a primary and secondary action?
Actions such as Submit, Save, or Continue are intended to enable completion, which is the primary goal of just about anyone who has started filling in a form. As a result, they are often referred to as primary actions. Secondary actions, on the other hand, tend to be used less often.
Sample Chapter: Surveys That Work
This is a sample chapter from Caroline Jarrett’s book Surveys That Work: A Practical Guide for Designing and Running Better Surveys. 2021, Rosenfeld Media.
Chapter 1: Goals: Establish Your Goals for the Survey
In this chapter, you’re going to think about the reason why you’re doing the survey (Figure 1.1).
Figure 1.1
It’s easier to hit a target if you know which one you’re aiming for.
By the end of the chapter, you’ll have turned the list of possible questions into a smaller set of questions that you need answers to.
Write down all your questions
I’m going to talk about two sorts of questions for a moment:
- Research questions
- Questions that you put into the questionnaire
Research questions are the topics that you want to find out about. At this stage, they may be very precise (“What is the resident population of the U.S. on 1st April in the years of the U.S. Decennial Census?”) or very vague (“What can we find out about people who purchase yogurt?”).
Questions that go into the questionnaire are different; they are the ones that you’ll write when you get to Chapter 3, “Questions.”
Now that I’ve said that—don’t worry about it. At this point, you ought to have neatly defined research questions, but my experience is that I usually have a mush of draft questions, topic titles, and ideas (good and bad).
Write down all the questions. Variety is good. Duplicates are OK.
Give your subconscious a chance
If you’re working on your own, or you have the primary responsibility for the survey in a team, then try to take a decent break between two sessions of writing down questions. A night’s sleep gives your subconscious a chance to work out what you really want to find out. If that isn’t practical, then maybe try a walk in the fresh air, a break to chat with a friend, or anything else that might provide a pause.
Get plenty of suggestions for questions
If you’re working with a team or you’re in an organization, then often when word gets out that there’s a survey ahead, colleagues will pile in with all sorts of suggestions for their questions. This can feel a little overwhelming at first, but it’s best to encourage everyone to contribute their potential questions as early as possible so that you can carefully evaluate all of them, focus on some goals for this specific survey, and have a good selection of other questions available for follow-up surveys and other research.
If I’m too restrictive at the very beginning, I find that everyone tries to sneak just one little extra essential question into the questionnaire a day—or even an hour—before the fieldwork starts. By then, it is too late to test the little extra questions properly, and they could sink my whole survey.
But while you’re still establishing the goals for the survey? Great! Collect as many questions as possible. Encourage everyone to join in—colleagues, stakeholders, managers, whoever you think might be interested. If you’re running a workshop, give the introverts some space by having a bit of silent writing where everyone captures their individual question ideas by writing them down.
Create a nice big spreadsheet of all the suggestions, a pile of sticky notes, or whatever idea-gathering tool works for you.
Ideally, make it clear that there’s a cutoff: suggestions before a particular date will get considered for this survey; miss the date, and they’ll be deferred until the next opportunity. This helps to encourage the idea of many Light Touch Surveys.
Challenge your question ideas
When you’ve gathered or created question ideas, it’s time to confront them with the four challenges shown in Figure 1.2:
- What do you want to know?
- Why do you want to know?
- What decision will you make based on the answers?
- What number do you need to make the decision?
Figure 1.2
What decisions will you make based on the answers?
Ask: What do you want to know?
Surprisingly, I find that the question suggestions that I create or collect from colleagues often do not relate to what we want to know. Many times, I’ve challenged a question by saying, “OK, so you’re thinking about <xxx question>. What do you want to know?” and it turns out that there’s a gap between the question and the reason for asking it.
Probably the most common example is the question: “Are you satisfied?” The question is OK but very general.
Ask: Why do you want to know?
I’m usually working with someone else when I’m doing a survey. To help narrow down from “every possible suggestion” to a sensible set of goals for the survey, I ask “Why do you want to know the answers to these questions?” and we then go on to challenge ourselves with the three questions in Figure 1.2.
If I’m on my own, then I find it helps to add “this time” or “right now”—to help me focus on the practical matter of getting my ideas down to something manageable. Come to think of it, that’s not a bad idea for a team, too—it helps all of them realize that they don’t have to ask everyone everything all at once.
Ask: What decision will you make based on the answers?
If you’re not going to make any decision, why are you doing the survey?
Look very hard at each of the suggested questions and think about whether or not the answers to them will help you make a decision.
Don’t worry at this stage about the wording of the questions or whether people will want to answer them. You’ll work on those topics in upcoming chapters.
But if the answers to a question won’t help you make a decision, set that question aside. Be bold! The question might be fascinating. You might be looking forward to reading the answers. But you’re trying to focus really hard on making the smallest possible useful survey. You don’t need to waste the question—it can go into the possible suggestions for next time.
At this point, you’ll have some candidate questions where you know what decisions you’ll make based on the answers.
Ask: What number do you need to make the decision?
In the opening chapter, “Definitions,” I emphasized that a survey is a quantitative method and the result is a number. Sometimes you’ll realize at this point that although you have candidate questions, you do not need numeric answers to them in order to make the decisions. That’s fine, but it also means that a survey is probably not the right method for you. Your work so far will not be wasted because you can use it to prepare for a more appropriate method.
Choose the Most Crucial Question (MCQ)
If you were only allowed answers to one of your candidate questions, which would it be?
That’s your Most Crucial Question (MCQ).
The Most Crucial Question is the one that makes a difference. It’s the one that will provide essential data for decision-making.
You’ll be able to state your question in these terms:
- We need to ask _______.
- So that we can decide _______.
At this stage, don’t worry if it’s a Research Question (in your language, maybe even full of jargon) or the question that will go into the questionnaire (using words that are familiar to the people who will answer).
Test your goals: Attack your Most Crucial Question
Try attacking every word in your Most Crucial Question to find out what you really mean by it. Really hammer it.
Here’s an example: “Do you like our magazine?”
- Who is “you”? Purchaser, subscriber, reader, recommender, vendor, or someone else?
- What does “like” mean? Admire? Recommend? Plan to purchase? Actually purchase? Obsessively collect every edition? Give subscriptions as gifts?
- What do you mean by “our”? Us as a brand? A department? A team? As a supplier to someone else?
- What do you mean by “magazine”? Every aspect of it? The paper edition? The online one? The Facebook page? The article they read most recently? Some parts, but not others? Does it matter if they’ve read it or not?
I found a great attack on a question by Annie Pettit, survey methodologist. She starts with the question:
- “When was the last time you bought milk?”
Here’s how Annie attacks “bought” and “milk”:
- “Wait, do you care if the milk was purchased? Or could it be that we have an arrangement whereby we don’t actually pay for milk? Perhaps people who live on a farm with dairy cows, or people who own a convenience store?
- “Do you mean only cow milk? What about milk from goats, sheep, buffalo, camel, reindeer? Or what about milk-substitutes from nuts or plants like soy, almond, rice, and coconut that are labeled as milk? Were you really trying to figure out if we put a liquid on cereal?” (Pettit, 2016)
(And she added a whole lot more about topics, like whether or not chocolate milk counts.)
Decide on your defined group of people
When you’ve really attacked your MCQ, look back and think about your “defined group of people”—the ones who you want to answer. Add them to your statement like this: We need to ask (the people who you want to answer) the question (MCQ goes here), so that we can decide (decision goes here).
If your defined group of people is still vague—“everyone” or something equally woolly—then try attacking again. A strong definition of the group you want to answer at this point will help tremendously when you get to the next chapter, “Sample.”
But before you proceed to Chapter 2, let’s pause for a moment and think about your plans.
Check that a survey is the right thing to do
Is your research question something that you must explore by asking people, or would it be better to observe them?
Do you want to know “why?”—qualitative—or “how many?”—quantitative?
Let’s look at this definition again:
- A survey is a process of asking questions that are answered by a sample of a defined group of people to get numbers that you can use to make decisions.
I’m going to contrast that with this definition:
- An interview is a conversation where an interviewer asks questions that are answered by one person to get answers that help to understand that person’s point of view, opinions, and motivations.
Both of them rely on asking: the interview is about “why”—qualitative—and the survey is about “how many”—quantitative, as in Figure 1.3.
Figure 1.3
Contrasting interviews as qualitative and surveys as quantitative.
Must your MCQ be answered by people?
One of my favorite questions was on a printer manufacturer’s survey:
- “How many pages do you print in a month?”
I had no idea. I knew the answer was more than one and less than a full box of paper because I hadn’t bought a box of paper that month—but I didn’t feel sufficiently motivated to work out how many pages are in a full box. I guessed, wildly. Very poor data.
The real irony, though, was that my printer was connected to their customer feedback program and was giving them the exact figure all the time: their analytics should have told them.
Here’s another example that arrived in my inbox recently:
- We need to ask visitors to our website whether pop-ups make them feel less like buying from us so that we can decide whether to remove pop-ups.
I’m sure that client must have some good business reasons for using pop-ups that make them hesitate about removing them, but asking people whether they “feel like buying” is a notoriously unreliable thing to do. They may feel like buying, but not actually buy, or feel unlike buying, but buy anyway. (We’ll return to this topic in Chapter 3 when we look at the “Curve of Prediction.”)
There’s a much better quantitative method for questions like this: A/B testing, where you publish two versions and use analytics to decide which one contributes more to the desired outcome. A/B tests and the many other types of analytics silently observe what people do without bothering them with questions. These are contrasted with surveys in Figure 1.4.
Figure 1.4
Analytics and A/B tests are ways of observing how many people do something without asking them.
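To make the contrast with surveys concrete, here is a minimal sketch in Python of how you might read the result of such an A/B test. The variants, visit counts, and purchase counts are all invented, and the pooled two-proportion z-test shown is just one common way to check whether the observed difference is bigger than chance.

```python
from math import sqrt

# Hypothetical A/B test: the same page with and without pop-ups.
# All visit and purchase counts below are invented for illustration.
visits_a, purchases_a = 5000, 190   # variant A: with pop-ups
visits_b, purchases_b = 5000, 240   # variant B: without pop-ups

rate_a = purchases_a / visits_a
rate_b = purchases_b / visits_b

# Pooled two-proportion z-test: is the difference bigger than chance?
pooled = (purchases_a + purchases_b) / (visits_a + visits_b)
se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
z = (rate_b - rate_a) / se

print(f"variant A: {rate_a:.1%}  variant B: {rate_b:.1%}  z = {z:.2f}")
# |z| > 1.96 suggests a real difference at the conventional 95% level.
```

Note that nothing in this sketch asks visitors how they feel; it only counts what they actually did.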
Do you want to find out “why”?
You may have spotted that we’re sneaking up on the four-way matrix in Figure 1.5. The quadrant we haven’t yet looked at is the top-left corner: observing to find out “why.”
It’s not always obvious why people are doing something. For example, if people tell you they can’t find things on your website, then search log analytics will tell you what they are searching for—but not why they are searching. Did they try searching straight away? Did they try a few clicks without success? Did they see your term for what they’re searching for but not recognize it because they had something different in mind?
Here’s another MCQ that I see quite often:
- We need to ask visitors to our website the question: “What do you dislike about our site?” so that we can decide what to improve.
Figure 1.5
A matrix for choosing the right method.
Leaving aside the problem that “What do you dislike” doesn’t have a numeric answer, you’ve got the more fundamental problem that there isn’t a direct connection between “What do you dislike” and “What should we improve?” You need to know why people dislike something in order to get ideas about how to change it.
You might turn to interviews, but it’s unreasonable to expect most people to retain all the little details that made something easy or difficult. Observing them as they use the thing is much easier for them—and much richer data for you.
In a usability test, you can observe a participant who is tackling some tasks—often in a research facility. Or you can go out to observe people in their natural setting, a field study.
Consider “why” alongside “how many”
A four-way matrix always makes it look as if the ideas are separate, doesn’t it? Of course, in reality, the techniques complement each other.
The route in Figure 1.6 is one that I took around the matrix for a client recently.
- Analytics showed that sales of one product had dropped.
- Usability tests revealed that people thought the website was no longer maintained, so the product must also be out-of-date.
- Interviews at the same time revealed that people often left a long gap between deciding to buy the product and actually using it.
- A survey told us that the out-of-date problem was affecting more people than the wait-to-use problem.
Figure 1.6
One of many possible routes around the matrix.
I would love to encourage you to try some triangulation. Triangulation is when you use a mixture of research methods and compare the results to improve your overall insights.
A draft presentation can help you decide between “why” and “how many”
A couple of years ago, I was chatting about surveys with user experience consultant Natalie Webb. Her tip was:
- “Create a draft of your presentation, based on the results you expect to get from your survey.”
It seemed a strange idea to me at first, but the more I’ve tried it, the more I like it as a way of testing whether I’ve really thought enough about what I want to ask and whether the number that I will get as a result of my survey really will help me to make a decision—the “so what” of surveys in Figure 1.7.
Figure 1.7
A draft presentation helps you to think about the “so what?” of your survey.
I worried that by drafting the presentation first, I’d be somehow constraining the direction of the research—preventing my team from thinking freely about what they were doing, closing down what they might learn.
Gradually, I realized that this is part of the power of surveys. Because you’re finding out “how many” of something, you need to understand the “why” before you start. If you don’t yet know enough about “why,” then you should be choosing to start with observation and interviews.
Think about what sort of number you need
Thinking about the “so what” and the number that you’ll need for the decision you’ll make also helps with another point to consider now: what sort of number do you need as your result? It may seem early, but statisticians will tell you that you must work out your statistical strategy before you collect the data, not afterward.
Do you need to know the actual number of people who answer a question in a particular way? For example, when I helped with a survey about planning an office move, I wanted to know how many people said that when the office moved to the new location, their commute would become excessively long.
Is it the proportion who answer one way rather than another? For example, I wanted to compare the proportion of people who claimed they would leave if the office moved to a new location to the proportion who said they would be likely to accept the change.
Are you looking for a mean (the arithmetical average)? For example, I might have considered whether increasing the mean commute by more than an hour would kill the idea.
Are you looking for a median (the value right in the middle when you place them all in order from largest to smallest)? Means can get easily distorted by one or two outlandishly large values. If one person’s commute suddenly became nearly impossible—10 hours or more—that would greatly increase the mean, but the median wouldn’t be affected very much.
And for design, I’m often looking at ranges and modes. The range is the difference between the largest and the smallest values, so with a 10-hour commute and another commute that’s zero because the person lived in an apartment above the possible new location, my range would be 10 hours. The mode is the most frequent value, and something that I find I have to consider very carefully for many design challenges—both to design for the people who answered with the most frequent value and to make sure that I’m not accidentally excluding people who don’t fit “the norm” for any reason.
Or something else? You may be doing a comparative survey so you’ll be considering what you want to compare from this survey to the next, or a modeling survey where you’ll do all sorts of advanced statistical manipulations, or something quite different.
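As a rough illustration of these different kinds of numbers, here is a short Python sketch using the standard library’s statistics module. The commute times are invented, in the spirit of the office-move example above, including the 10-hour outlier that distorts the mean but barely moves the median.

```python
import statistics

# Invented commute times (in hours) in the spirit of the office-move
# example, including one 10-hour outlier and one zero-length commute.
commutes = [0.0, 0.5, 0.5, 1.0, 1.0, 1.0, 1.5, 2.0, 10.0]

count_over_2h = sum(1 for c in commutes if c > 2)   # an actual number
proportion = count_over_2h / len(commutes)          # a proportion

mean = statistics.mean(commutes)      # pulled upward by the outlier
median = statistics.median(commutes)  # middle value; barely affected
mode = statistics.mode(commutes)      # most frequent value
value_range = max(commutes) - min(commutes)

print(f"mean {mean:.2f}h, median {median}h, mode {mode}h, range {value_range}h")
print(f"{count_over_2h} of {len(commutes)} ({proportion:.0%}) commute over 2 hours")
```

Running this shows a mean of about 1.94 hours against a median of 1.0 hour, which is exactly the kind of gap that should make you think carefully about which number your decision actually needs.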
Whatever you’re planning to do with the answers to your survey, some careful thought at this stage about those statistics will be well worth the time you put into it—and may send you back to have another review of your Most Crucial Question and how you plan to use it.
Determine the time you have and the help you need
So, you have a Most Crucial Question, you know the decision you’ll make, and you’ve thought a bit about the type of number you need to make that decision. It’s a good moment to think about timing and who needs to be involved.
First, think about the time available:
- When do you need to have a result, and how much time can you put into it?
- If you’re lucky enough to have team members to work with, how much time can they spare?
- When will you deliver the report from the survey?
Next, think about the tools:
- Do you or your organization already have a survey tool?
- Do you know how to use it?
- Will you need to buy or subscribe to one?
Finally, and perhaps most importantly, who else is involved?
- Who needs to be involved in the survey but isn’t part of your team, such as the privacy or legal people?
- Who will get the results from the survey?
- Who is involved in making the decisions based on the results?
Interview first, survey later
A common mistake is to think that you’ll do a survey first and then do follow-up interviews with some of the people who answer.
The rule is: interview first, survey later. Two especially useful types of interviews are:
- Interviews to find out what your defined group of people think about the topic of your survey (covered in Chapter 2)
- Cognitive interviews—a special type of interview just for survey questions—to help you discover whether the questions are working (Chapter 3)
And, in fact, to get the best results from your survey, you’ll complement these interviews with two other techniques from the matrix, as noted in Figure 1.8:
- Usability tests of the questionnaire (Chapter 4, “Questionnaire”)
- A pilot test between the usability test and the survey itself (Chapter 5, “Fieldwork”)
Figure 1.8
We’ll use techniques from other parts of the matrix on our way to the survey.
If you want a couple of ideas for how to fit all those activities into the time you have available, then skip ahead to Chapter 8, “The Least You Can Do™.” A recent survey where I worked hard to get a single Most Crucial Question took me four days—spread out over a month, admittedly, but only because I had a week’s vacation in the middle.
What could possibly go wrong with the goals?
For many years, I was quite a purist about surveys. If you’d asked me “What can go wrong when choosing a goal for your survey?” I’d have answered, “Insisting on doing a survey when it’s the wrong method for the research problem.”
These days, I’ve mellowed. I know that sometimes colleagues or clients will carry on with a survey for all sorts of reasons, good and bad, when it’s not the ideal thing to do. If that’s happening to you, don’t worry. Keep making good choices, aim for a Light Touch Survey, and iterate as much as possible. No matter what the outcome is, you’ll definitely learn a lot about how to do a better survey next time.
Strictly between you and me, I’ve also become more relaxed about some of the other aims of this chapter. Couldn’t get down to exactly one Most Crucial Question? If you still have dozens of MCQs, that’s definitely a problem. But five or six candidates for the MCQ? Not so bad—you can whittle them down when you start working on them in Chapter 3. Not entirely clear about the decision you’ll make? Have a go, and revisit it when you’ve done some more steps. You can iterate, after all.
But I wouldn’t often admit that to the team or the client because I know that when we can agree on one Most Crucial Question with a clear decision to be made, the rest of the survey process is going to be much easier and quicker. So I try pretty hard to persuade them to get there.
To be valid, the goals and questions must match
This brings me to the first of the challenges that you’ll meet through the steps of the survey process. In this chapter, you’ve been looking at the first tentacle of the Survey Octopus: “The reason you’re doing it,” as shown in Figure 1.9.
Figure 1.9
Lack of validity.
There’s always an error between each tentacle and the next one. In this case, it’s “lack of validity.”
Lack of validity happens when the questions you ask do not match the reason why you are doing the survey and what you want to ask about.
Or in other words:
- A survey is valid when the questions you ask are a good match to the reason why you are doing the survey and what you want to ask about.
So work really hard on the reason why you are doing it, the decision that you’ll make, and that Most Crucial Question.
At this point, you will know
To have an easier ride with the next steps in the survey process, it helps a lot at this point if you know:
- The resources you have for the survey
- Who you want to answer your question—your defined group of people
- The decision you’ll make based on the results
- The Most Crucial Question to help you make the decision
- Whether the Most Crucial Question needs to be answered by people or not
- Whether a survey is the right thing to do
Pain and Curiosity Precede Successful Design Systems Change with Dan Mall
While we’ve been developing design systems for years, we’re only just now learning how to create systems that are successful and sustainable. Dan Mall is the author of the soon-to-be-released Design That Scales: Creating a Sustainable Design System Practice, which explores the cultural elements that contribute to sustainable design systems.
Not surprisingly, it’s usually pain that motivates change. In fact, companies occupying the number one spot in their respective markets usually have the least incentive to change. As the saying goes, “Number two tries harder.” But even in the most-ready-for-change scenarios, design systems sit, at best, at a third level of priority. Dan asserts that the challenge is to approach design systems as a byproduct of the products and features that bring customers value. Otherwise, design systems will always be on the backburner.
Dan and Lou discuss tricky topics around design systems:
- Designers’ fear of job loss to design systems.
- As we move toward sustainable design systems, who should make the decisions? Who does what and when?
- How to approach design systems in a sustainable way.
- The best way for product and systems teams to collaborate.
What you’ll learn from this episode:
- Why culture, rather than product maturity, will determine whether design systems are successful
- How to address fears of job loss as a result of design systems
- How to keep people motivated through a systems change
- How product and design teams can work together efficiently
- How design systems have changed over time
- The role of governance in systems change
- Why following precedent within your company will get you farther faster
Quick Reference Guide
[0:00:32] Introduction of Dan Mall and his book Design That Scales: Creating a Sustainable Design System Practice
[0:04:49] On reaching cultural alignment
[0:07:01] What prompts design systems change
[0:09:26] When jobs feel threatened
[0:12:21] Cultural signs and markers of design system success
[0:16:59] November 29th, 2023 – Design in Product Conference
[0:18:20] On governance and sustainability
[0:24:44] On collaboration between product and design teams
[0:27:33] The evolution of design systems for ICs
[0:30:35] Design Systems University
[0:32:38] Dan’s gift to listeners
Resources and links from today’s episode:
Design in Product Conference on November 29, 2023 https://rosenfeldmedia.com/design-in-product
Design Systems University https://designsystem.university/
The Useful School https://usefulschool.com
Sample Chapter: Interviewing Users (2nd edition)
This is a sample chapter from Steve Portigal’s book Interviewing Users: How to Uncover Compelling Insights (2nd edition). 2023, Rosenfeld Media.
Chapter 1
Interviewing Addresses a Business Need
A few years back, I worked with a company that had the notion to turn a commodity safety product—the hard hat—into a premium product. They would incorporate advanced features and then charge a higher price point. I don’t actually know where their idea came from, but one can imagine that they had seen all kinds of everyday products be reformulated to generate a higher scale of profit (think about Starbucks, gourmet salt, smartphones, Vitamix blenders, or horsehair mattresses). They sketched out a set of features that would improve the functional performance of the hard hat.
When I interviewed people who wore hard hats for work, I didn’t ask them to evaluate the features my client had been considering. Instead, I asked them generally about their work, so I was able to uncover insight into the most significant aspects of their experience. What they were concerned about fell into an entirely different category. They talked about leaving the job site to get lunch (for example) and how awkward they felt among other people while dressed in their prominent, brightly colored safety equipment. Indeed, makers of other safety equipment like bicycling helmets, safety footwear, and safety goggles had already redesigned their products to echo fashionable caps, boots, and sunglasses, suggesting this concern was being felt broadly.
If there were to be a TEDx version of this story, then this team would have become very excited about this new and surprising area of opportunity, despite it being different from what they had already invested in (financially, intellectually, and even emotionally). They’d have torn up those plans, drawn up new ones, and eventually raked in the dough. But you know that isn’t really how these things play out! In these interviews, we uncovered a significant business risk in pursuing their existing idea, so they stopped product development for their hard hat with extra functionality. On the other hand, these interviews identified another opportunity: to produce a hard hat that would address the issue of social performance. That wouldn’t have fit with their organization’s technical or cultural competencies, so they chose to avoid the business risk of developing a fashionable hard hat. What we learned from these interviews informed their decision not to bring any product to market.
When you get down to it, that’s what we do as user researchers: We gather information about users in order to inform critical decisions about design, product, or other parts of the business or organization. To do this means that we go to people’s homes, their offices, wherever their context is. We ask what they do. We ask them to show us. We get stories and long answers where we don’t always know what the point is. We want them to explain everything about their world to us. People may not have a ready answer as to why they do something, but we have to listen for why. We have to ask follow-up questions and probe and infer to try to understand, for ourselves, just why something is happening the way it is. We make sense of this disparate information and show the way to act on what we’ve learned.
Interviewing is a specific method in user research to accomplish these goals. (User research is also referred to by other terms such as design research, user experience research, or UXR.) This book is about interviewing users (also referred to variously as site visits, contextual research, or ethnographic research) as a method to conduct user research, so beyond an in-depth examination of best practices for interviewing users, we’ll also consider user research in general. And we’ll also look at other user research methods that can be integrated and combined with interviews.
Nomenclature aside, the broad outline for interviewing users is:
- Thoughtfully planning out objectives, who we’ll interview, and how we’ll go about it
- Deeply studying people, ideally in their context
- Exploring not only their behaviors, but also the meaning behind those behaviors
- Making sense of the data using inference, interpretation, analysis, and synthesis
- Using those insights to point toward a design, service, product, or other solution
Learning About Users to Inform Decisions
Typically, when you interview people, you visit your users in their homes, their offices, their cars, their parks, and so on. But this isn’t always the case. When planning a project, ask yourself if it’s more insightful to bring participants in to see your stuff (say, prototypes you’ve set up in a facility meeting room) than it is for you to go out and see their stuff. Overall, your objective is to learn something profoundly new. (There are situations where quickly obtained, albeit shallow, information is beneficial, but that’s not what we’ll focus on here.)
Note: Every organization can benefit from research
Sometimes, companies declare that they don’t need to do user research. What they typically mean is that they don’t need to do generative user research (learning about people in order to identify product opportunities), but they are probably doing evaluative user research (testing the thing they are developing to make sure it’s usable by people). Denying the value of generative research (because, as they might say, people don’t know what they want and it’s the company’s mission to invent that anyway) belies a poor understanding of how user research is conducted and applied. For one thing, it’s not simply asking people “what they want.”
For another, it’s not credible that they possess an innate talent for building stuff that people love. Even if they themselves are users of the snowboards, photography equipment, or mixing gear that they make, they will choose and use those solutions differently than someone who is not inside their industry. They will be blind to differences in income, access, use cases, and so on. And they will have difficulty expanding their offering in an innovative way, because they are stuck in this model of being the user.
Often, the stated goal of interviewing users is to uncover their pain points. This approach mistakenly characterizes research with users as a sort of foraging activity, where if you take the effort to leave your office and enter some environment where users congregate, you’ll be headed home with a heap of fresh needs. You can observe that people are struggling with X and frustrated by Y, so all you have to do is fix X and Y, and then all will be good.
Although this may be better than nothing, a lot of important information gets left behind. Insights don’t simply leap out at you. You need to work hard and dig for them, which takes planning and deliberation. Further complicating the foraging model is that what people in problem-solving professions (such as designers and engineers) see as “pain points” aren’t necessarily that painful for people. The term satisficing, coined by Herbert Simon in 1956 (combining satisfy and suffice), refers to people’s tolerance—if not overall embracing—of “good-enough” solutions.
Once while settling in for a long flight, I noticed that a passenger in the row in front of me had fashioned a crude sling for their iPhone using the plastic bag that the airplane blanket came in. They had twisted the bag into a makeshift “rope,” which they looped around the body of the iPhone and then jammed behind the latch that kept the tray table closed. They now had a (slightly askew) solution for watching their own device for the duration of the flight. Initially, I was critical of the ugly, inelegant result. But eventually, I realized it was beautiful in its own way—it was fashioned from the materials they had on hand. Since then, I’ve seen other examples of passengers making their own viewing solutions, and I’ve made a point of taking a picture. (See Figure 1.1 where the passenger has made an iPhone viewer out of the airline’s credit card brochure and some beverage napkins.)
Figure 1.1 An airplane passenger viewing stand, made from the materials found on board.
Contrast these good-enough solutions with a more purpose-built accessory (see Figure 1.2): the passenger would have to have known about it, purchased it, remembered to bring it, and carried it with them. Of course, the ideal solution—not just the raw materials—would be provided by the airline itself (see Figure 1.3).
Figure 1.2 TabletHookz is an accessory designed specifically to hold a mobile device in an airplane seat back for hands-free inflight viewing.
Figure 1.3 A device holder built into the airplane seat-back allows passengers to watch videos on their own devices.
There have long been spaces online that exhibit samples of makeshift solutions. They are meant to amuse, but usually with a good measure of judgment and schadenfreude (this is the internet after all!). A good exercise for a user researcher is to seek out those images and reflect on what aspects of these solutions are successful for the people who implemented them.
I encounter satisficing in every research project: a computer desktop with an unfiled document icon in each element of the grid, an overflowing drawer of mismatched food container lids, a not-yet-unwrapped car manual, and tangled, too-short cables connecting products are all examples of “good-enough” satisficing. In other words, people find the pain of this putative problem to be less acute than the effort required by them to solve it. What you observe as a need may actually be something that your customer is perfectly tolerant of. Would they like all their food in containers matched with the right lids? Of course. But are they going to make much effort to accomplish that? Probably not.
Beyond simply gathering data, interviewing customers is tremendous for driving reframes, which are crucial shifts in perspective that flip an initial problem on its head. These new frameworks, which come from rigorous analysis and synthesis of your data, are critical. They can point the way to significant, previously unrealized possibilities for design and innovation. Even if innovation (whatever you consider that to be) isn’t your goal, these frames also help you understand where (and why) your solutions will likely fail and where they will hopefully succeed. To that end, you can (and should!) interview users at different points in the development process. Here are some situations where interviewing can be valuable:
- As a way to identify new opportunities before you know what could be designed.
- To refine design hypotheses when you have some ideas about what will be designed.
- To redesign and relaunch existing products and services when you have history in the marketplace.
From My Perspective: Gaining Insight vs. Persuading the Organization
While doing ethnographic research in Japan, I accompanied my clients as they conducted an unrelated study. They brought users into a facility and showed them elegantly designed forms for printer ink cartridges. They were smooth, teardrop shapes that were shiny and coated with the color of the ink. They also showed users the current ink cartridge design: black blocks with text-heavy stickers.
Can you guess what the research revealed? Of course. People loved the new designs, exclaiming enthusiastically and caressing the shapes. Regardless of method, there was no insight to be gained here. I’ve gone back and forth about whether this was good research or bad research. It didn’t reveal new information, but it provided tangible evidence for the organization. This team’s approach suggested that there were other issues with the design process (perhaps that leaders wouldn’t make decisions without supporting data from users), and while this research might have been the best way to move their process forward, it wasn’t the ideal use of a research study.
A High-Level Research Plan
The operational aspects of interviewing users will be covered in the next chapter (“Research Logistics”), but here let’s consider the three (plus one special guest) elements of a high-level plan. And by “plan,” it’s less about how you document the plan and more about the thinking that makes for an effective research project. A plan should summarize the project as you understand it at the time, including the business problem, the research questions, and the agreed-upon research method. Reviewing this plan with your team will ensure that you are aligned, with an opportunity to clarify, reprioritize, or expand the work.
Note: The answer to a never-ending story
This book defaults to considering research as projects that have a beginning and an ending. But there are other models. Rolling research is a way of providing designers with regular access to participants who can provide feedback on whatever they are working on. Typically, a small number of participants are scheduled on a weekly basis. Designers and researchers determine earlier in that week what they’ll show to the participants, and what questions they’ll ask. Continuous discovery involves the entire product team, through the entire development cycle, and includes designing, prototyping, and getting feedback from users.
Even if you are interviewing users through one of these approaches, most of the guidance in this book (for instance, Chapter 6, “The Intricacies of Asking Questions”) will apply directly.
The Business Problem
The business problem (or business objective) is what your organization—the producer of products, services, solutions, and so on—is faced with, as shown in Table 1.1.
Table 1.1 Business Problem Examples
- We’re sunsetting a legacy product and replacing it with one that uses a different technology.
- Our new product didn’t do as well as we had hoped.
- We want to move into a new market.
- A new competitor is taking some of our market share.
- We’re roadmapping what new features we’ll be developing for our current service.
- Product feedback is strong, but repeat orders are low.
To get an in-depth understanding of the business problem, you’ll probably want to talk with your stakeholders. You’ll learn more about this topic in Chapters 2 and 10, “Making an Impact with Your Research.”
From My Perspective: Uncover Misalignment Early
I once worked with a client who made a digital platform used for particularly complex transactions. They already supported the buyers, sellers, and their respective brokers, and now were looking at opportunities to incorporate the other entities (known as “third parties”) in these transactions. This research was a strategic priority, traceable to goals assigned from on high.
To kick off the project, we scheduled two activities (loosely based on the Questions Workshop) with different groups of stakeholders. We set up a spreadsheet to capture decisions they were planning to make and what information about these other users would help in making those decisions. In the first workshop, the main project sponsor halted the proceedings to ask “Now, what do we mean by ‘third parties’?” I assumed they knew, and they assumed I knew! I was surprised, but glad they weren’t afraid to ask a “dumb” question. It was a disconnect, but an important one to uncover, and at the right time. We aligned on a definition and then moved forward with the questions. In the second workshop, a stakeholder kicked off the session by telling us, “Just so you know, we’re already coding a solution.” Again, I was surprised, but this was very helpful to understand at the outset rather than later.
The Research Question
The research question identifies the information you need from users to provide guidance about the business problem. Whereas the business problem looks inward, the research question looks outward—in other words, the business problem is about you and the research question is about your users (see Table 1.2).
Sometimes the research questions are clustered and nested. For example, the business problem “We are investing heavily in social media and want our customers to promote our services more” might lead to this set of research questions.
- What do people’s social networks look like? What tools do they use and how are their networks structured?
- How are purchase decisions driven by the structure of people’s social network (on and offline)?
- How do people leverage social networks for shopping and other kinds of decision-making? Who has influence with them currently?
- Who among their social network (and beyond) are trusted sources of information for various decisions and purchases (particularly within the client’s area of business)?
Table 1.2 Research Question Examples
To further inform the research questions, you should review previous research reports, existing products, and in-development prototypes. Look for relevant research findings, explicitly stated assumptions or hypotheses, and implicit hypotheses in the decisions that have already been made.
Note: Find the specificity that’s right for you
When I ask teams to work on articulating their business problems and research questions, they often find it surprisingly challenging, but also enlightening. There won’t be a singular perfect answer, but the process of considering the specifics is valuable for developing a deeper intention and focus for the research. That process might include going back and forth on different variations and wordings. It might not produce a perfectly structured 1:1 relationship between the business problem and the research question. If you practice with a colleague, before long, you’ll have a feel for the right level of granularity and structure for you.
You should also conduct interviews with your stakeholders—they are often consumers of the research findings who are less likely to be involved in the day-to-day study. I typically aim for 6–8 stakeholders, although some clients ask for twice that many. These are one-on-one conversations that run between 30 and 60 minutes and are used to dig deeper into objectives and set the stage for working collaboratively. Many of the interview techniques in this book (such as what I’ll cover in Chapter 5, “Best Practices for Interviewing”) apply to interviewing stakeholders, although you may find it less comfortable to ask “dumb” questions if you feel your credibility could be at stake. Ask the stakeholders about the following:
- Their history with the organization and the research topic
- Business objectives for the project and specific questions the research should answer
- Current beliefs about the customer, the user, and the proposed solution
- Organizational or other barriers to be mindful of
- Concerns or uncertainty around the method
Even though what you learn will undoubtedly inform all of the activities throughout the project, the immediate output is the research questions—articulating what you want to learn from the interviews.
Note: Get immersed in your research area
At this point, there are other activities that can help you understand the problem space you’re exploring, gather the language used to talk about it, and plan what you’re going to ask your research participants. Secondary research (also called desk research) gives you a sense of current and historical thinking through what’s already been written about your topic. Look at the mainstream press, the business press, academic papers, internal or external corporate reports, blogs, online forums, newsletters, books, and so on. Identify industry, academic, or other experts and interview them. You may also seek out a few experiences that will give you some perspective on the topic. Look at similar products and how they are being sold online or in retail. Try an experience yourself.
For a project that sought to understand how our client could facilitate a more emotional connection with their customers, we visited a handful of environments that had reputations for successfully bonding with their users (an Apple store; Powell’s Books in Portland, OR; the dog-friendly Fort Funston in San Francisco; a Wawa convenience store in Philadelphia; and Rainbow Grocery in San Francisco). We observed each environment and the people in it, and hypothesized about what factors were creating or contributing to the relationship. This led to topics to explore in the interviews and examples to compare and contrast with during the analysis stage.
The Research Method
The research method is how you will gather the information needed to answer the research question. Here are a few examples of user research methods (other than interviewing):
- Usability testing: Users interact with a product (or a prototype or simulation), typically in a controlled environment such as a lab, while various factors (time to complete a task, error rate, preference for alternate solutions) are measured.
- A/B testing: This type of testing compares the effectiveness of two different versions of the same design (e.g., advertisement, website landing page) by launching both under similar circumstances (a minimal statistical sketch follows this list).
- Quantitative survey: A questionnaire, primarily using closed-ended questions, is distributed to a larger sample in order to obtain statistically significant results.
- Web analytics: Measurement and analysis of various data points are obtained from Web servers, tracking cookies, and so on. Aggregated over a large number of users, Web analytics can highlight patterns in navigation, user types, the impact of day and time on usage, and so on.
- Focus group: This is a moderated discussion with 4 to 12 participants in a research facility, often used to explore preferences (and the reasons for those preferences) among different solutions.
- Central location test: In a market research facility, groups of 15 to 50 people watch a demo and complete a survey to measure their grasp of the concept, the appeal of various features, the desirability of the product, and so on.
Of course, researchers make up new methods regularly. (See more about methods in Chapter 3, “Contextual Methods—More Than Just Asking Questions.”)
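To make the quantitative end of this list concrete, here is a minimal sketch in Python (my own illustration, not from this chapter, using entirely hypothetical counts) of the arithmetic behind an A/B test: a two-proportion z-test comparing conversion rates for two versions of a landing page.

```python
# A two-proportion z-test: does version B convert better than version A?
# All counts are hypothetical, for illustration only.
from math import sqrt
from statistics import NormalDist

# Hypothetical results: visitors and signups for two landing pages
a_visitors, a_conversions = 5000, 400   # version A: 8.0% conversion
b_visitors, b_conversions = 5000, 460   # version B: 9.2% conversion

p_a = a_conversions / a_visitors
p_b = b_conversions / b_visitors

# Pooled conversion rate under the null hypothesis (no real difference)
p_pool = (a_conversions + b_conversions) / (a_visitors + b_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_visitors + 1 / b_visitors))

z = (p_b - p_a) / se
# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

With these made-up numbers, the difference is statistically significant at the conventional 0.05 level; halve the sample sizes and the same lift would not be. That is exactly the kind of judgment quantitative methods are built to make, and one that interviewing is not.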
Selecting an Appropriate Method
In the aptly named article “When to Use Which User-Experience Research Methods,” Christian Rohrer organizes some of the more common methods into a framework. (Does the method look at people’s behaviors or their attitudes? Is the method qualitative or quantitative? Does the method look at someone’s use of a product?) (See Figure 1.4.) The article provides guidance about which methods are best suited to different contexts. For example, if the goal of the research is to find new directions and opportunities, then the best methods (according to Rohrer) include diary studies, interviews, surveys, participatory design, and concept testing.
Figure 1.4 Christian Rohrer’s “Landscape” organizes user research methods by behavior/attitude and quantitative/qualitative.
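If it helps to see the framework operationally, here is a toy encoding in Python (my own sketch, only a loose approximation of Rohrer’s figure) of the landscape’s two axes as a lookup from quadrant to example methods.

```python
# A toy lookup modeling the two axes of Rohrer's landscape.
# Quadrant placements are approximate and for illustration only.
landscape = {
    ("attitudinal", "qualitative"):  ["interviews", "focus groups"],
    ("attitudinal", "quantitative"): ["surveys"],
    ("behavioral", "qualitative"):   ["usability testing"],
    ("behavioral", "quantitative"):  ["A/B testing", "web analytics"],
}

def methods_for(data_source: str, approach: str) -> list[str]:
    """Return example methods for a given quadrant of the landscape."""
    return landscape.get((data_source, approach), [])

print(methods_for("behavioral", "quantitative"))
# ['A/B testing', 'web analytics']
```

The value isn’t the data structure itself; it’s the reminder that “which method?” becomes answerable once you decide which kind of data (behavioral or attitudinal) and which kind of analysis (qualitative or quantitative) you need.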
Note: Market research and user research
In some companies, market research is a separate department from user research and may even report to different leaders. The two can also seem like different career paths; people find their way to each discipline from different backgrounds. But what’s the difference? It’s common—but wildly inaccurate—to attempt to distinguish the two by the methods used (market research does focus groups and surveys; user research does interviews and usability testing) or by the objectives (market research looks at attitudes and user research observes behavior). Figure 1.4 invites us to consider a bigger picture—a broad set of methods and objectives that no one discipline “owns” exclusively.
Taking a different approach, Sam Ladner developed a guide shown in Figure 1.5 that recommends a research method based on where your product is in its lifecycle.
Figure 1.5 Sam Ladner organizes user research methods by the maturity stage of the product’s sales.
Combining User Research Methods
Interviewing can be used in combination with other techniques. Mixed methods refers to combining multiple methods (typically qualitative and quantitative) in one study. I’ve used an exploratory interviewing study to identify topics for a global quantitative segmentation study. I’ve combined a central location test (where larger groups watched a demo in a single location, such as a research facility, and filled out a survey) in parallel with in-home interviews to get a deeper understanding of the potential for the product. I’ve also mixed together different qualitative activities (say, a larger sample for a diary study, and then follow-up interviews with a subset of participants). It can be valuable to combine a set of approaches and get the advantages of each.
Note: Quantitative user experience research
Kitty Z Xu, a quant user experience researcher, explains how this emerging discipline uses two kinds of data: sentiment data (such as feelings, perceptions, and understanding) from surveys, and behavioral data (from logging, usage metrics, and more). Researchers in quant UXR make use of skills from a variety of fields, including user research, survey science, data science, and analytics. While interviewing (or qualitative user experience research) looks for insights in a small sample, quant UXR builds insights at scale—meaning it collects hundreds or thousands of samples that are representative of a larger population.
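As a small illustration of the behavioral side (a sketch with invented events and field names, not Xu’s work), here is how a quant researcher might derive a usage metric from raw logs rather than from what users say they do:

```python
# Deriving a behavioral metric (distinct active days per user) from
# raw event logs. The events and field names here are invented.
from collections import defaultdict
from datetime import date

# Hypothetical logging data: (user_id, event_date, event_name)
events = [
    ("u1", date(2024, 3, 4), "open_app"),
    ("u1", date(2024, 3, 5), "share_item"),
    ("u2", date(2024, 3, 4), "open_app"),
    ("u2", date(2024, 3, 18), "open_app"),
    ("u3", date(2024, 3, 6), "open_app"),
]

# Collect the distinct days on which each user was active
active_days = defaultdict(set)
for user_id, event_date, _event in events:
    active_days[user_id].add(event_date)

# Aggregate into a distribution: how many users were active on N days?
distribution = defaultdict(int)
for days in active_days.values():
    distribution[len(days)] += 1

for n_days in sorted(distribution):
    print(f"{distribution[n_days]} user(s) active on {n_days} day(s)")
```

At real scale, the same aggregation would run over millions of rows in a data warehouse, but the logic (measure what people did, not what they recall doing) is identical.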
Choosing Interviewing
Interviewing isn’t the right approach for every problem. Because it favors depth over sample size, use interviewing when you don’t need statistically significant data. Because interviews are semi-structured, each one is unique and reveals something new about what you’re trying to understand (though that can make it challenging to objectively tally data points across the sample). And although you are ideally interviewing in context, your presence makes you a participant in that environment: sitting with users while they show you how they use a website isn’t naturalistic, unlike a tool that intercepts users who visit that website and captures their actual behavior.
People are not good at predicting their future behavior, especially for brand-new, hypothetical situations (see “Manage Bias” in Chapter 4). There are bad questions and bad ways of asking questions (see Chapters 6 and 7), but you should be skeptical of broadly dogmatic interviewing advice that warns you never to ask about future behavior, like “How much would you pay for this?” You can definitely ask the question, but it’s important to understand what you can and can’t do with the answer. You won’t get a number that is helpful for your pricing strategy, but you can learn about the participant’s rationale for that number or hear a thoughtful reflection about perceived value. Your questions in an interview can reveal the mental models that exist today, which will inform the decisions you need to make, but the literal responses about future behavior probably won’t be accurate.
Participant Questions
This isn’t really part of the high-level plan, but it’s included here because discussion about the research question sometimes drifts into specific questions that people imagine asking participants. I led a workshop with creative entrepreneurs who struggled to articulate what they wanted to learn from their interviews but were brimming with questions they wanted to ask. Because they truly were unable to come up with research questions, our workaround was to build out the participant questions and then step back and ask what those questions were collectively in service of (in other words, the research question).
You may generate (or collect) some participant questions during this high-level planning process. Unless they are helpful in getting you unstuck on your research questions, just file them away for now. In Chapter 2, we’ll focus more on the questions we plan to ask.
Aligning on the Research Plan
Since you’re seeing this in a book, where the different elements of the plan (business problem, research question, and research method) are presented in sequence, you might reasonably conclude that you should also proceed linearly. First, get clarity on your business challenge, then uncover your research questions, and then choose the best method to answer those questions! Sounds good?
Ah, but it doesn’t usually work that way. Depending on how a project is initiated (a prospective client generates a Request for Proposal, a stakeholder sends a request by email, and so on), it may be more or less based on one of the three. You may be asked “Here’s the situation; how can research help us?” Or “We need to learn such-and-such about these users.” Or “Can we complete this method of research within this time frame?” But no matter how the conversation begins, it’s up to you to fill in the rest of the pieces.
If you’re given a research question, ask why that information is needed. If you’re given a research method, ask what they hope to learn, and then ask why that information is needed. Sometimes, the people you’re going to work with haven’t thought about this, but often it’s just implicit and your questions will help make it explicit. You want to make sure that not only are you and the clients or stakeholders aligned, but crucially that these different pieces are in alignment: the method has to produce the information that is needed, and the information that is needed should be in support of the actions the team plans to take.
The people who need the results of the research don’t necessarily understand the range of methods and when to use them. Don’t agree to use a prescribed method that doesn’t align with the necessary results, because the blame will fall on you at the end when you can’t deliver. Facilitating the alignment between challenge, question, and method is part of the expertise a researcher brings. If you do research but aren’t a trained researcher, seek out an experienced researcher to advise on these high-level aspects of the research plan.
To Interview Well, One Must Study
Much of the technique of interviewing is based on one of your earliest developmental skills: asking questions (see Figure 1.6). You all know how to ask questions, but if you asked questions in interviews the way you ask questions in typical interactions, you would fall short. In a conversational setting, you are perhaps striving to talk at least 50 percent of the time, and mostly to talk about yourself. But interviewing is not a social conversation. Falling back on your social defaults is going to get you into trouble!
Interviewing users involves a special set of skills, and it takes work to develop them. The fact that interviewing looks like an everyday act can actually make it harder to learn, because it’s easy to take false refuge in existing conversational approaches. Developing your interviewing skills is different from developing a technical skill (say, milkshake-machine recalibration) because you had nothing to fall back on when learning about milkshake machines; with interviewing, you may need to learn how to override something you already know. Think of other professionals who use verbal inquiry to succeed in their work: whether it is a police officer interrogating a suspect, a lawyer cross-examining an opposing witness, or a reference librarian helping a patron, the verbal exchange is a deliberate, learned specialty that goes beyond what happens in everyday conversation. For you as an interviewer, it’s the same thing.
We’ll revisit improving as an interviewer in Chapter 7, “Better Interviews.”
Figure 1.6 Childhood is marked by frequent, inevitable question-asking.
The Impact of Interviewing
Interviewing creates a shared bonding experience, often a galvanizing one, for the product development team (which can include researchers, designers, engineers, marketers, product management, and beyond). In addition to the information you learn from people and the inspiration you gain from meeting them, there’s a whole other set of transformations you go through. You might call it empathy—say, a more specific understanding of the experience and emotions of the customer—which might even be as simple as seeing “the user” or “the customer” as a real live person in all their glorious complexity. And what happens when people develop empathy for a series of individuals they meet in interviews? They experience an increase in their overall capacity for empathy.
This evolution in how individual team members see themselves, their connection to their colleagues, their design work, and the world around them starts to drive shifts in the organizational culture (see Figure 1.7). This capacity for empathy is not sufficient to change a culture, but it is necessary.
Figure 1.7 Team experiences that are challenging and out-of-the-ordinary create goodwill and a common sense of purpose.
More tactically, these enlightened folks are better advocates for customers and better champions for the findings and implications of what has been learned in interviews.
The wonderful thing about these impacts is that they come free (or nearly so). Being deliberate in your efforts to interview users will pay tremendous dividends for your products, as well as for the people who produce them.
Scope Growth
In a Twitter thread, Mollie Ruskin wrote about a civic design project, saying,
While the research was “about” operations and staff capacity and a complex process for answering heaps of emails, I quickly found we were stumbling over a set of questions fundamental to the function of our representative democracy.
So, as much as you work to identify and align on your business problem and your research questions, that alignment is limited by the fact that the only information you have comes from before you have done any research. Mollie reminds us that our understanding of the problem (and the opportunity) can change.
The worst thing a research team can do, however, is to come back to the project sponsors and say “Welp, we know we were looking at operations and capacity, but really the issue is the underpinnings of our democracy.” Ideally, the broader team is collaborative enough that they will see these reframes together and can decide what to do about them. When I’m in this situation, I try to address the initial scope (“Here’s what we know about the gaps in the operations and how this impacts staff capacity”) and present the emergent topic as one that builds on the original goals (“And the real issue that connects these infrastructure decisions is the very nature of our democratic processes”). If the organization isn’t ready (yet) to address the larger insight (and often they won’t be—just look at the size of the shift in Mollie’s example!), at least they can move forward on their original problem, and you’ve planted the seed for a future effort. This probably won’t be the last time the underlying issue emerges, and at some point, it may not be possible to ignore it any longer.
The Last Word
It’s become increasingly common, perhaps even required, for companies to include user research in their design and development process. Among many different approaches to user research, interviewing (by whatever name you want to call it) is a deep dive into the lives of customers.
- Interviewing can be used in combination with other techniques, such as identifying key themes through interviews and then validating them quantitatively in a subsequent study.
- At a distance, interviewing looks just like the everyday act of talking to people, but interviewing well is a real skill that takes work to develop.
- Interviewing can reveal new “frames” or models that flip the problem on its head. These new ways of looking at the problem are crucial to identifying new, innovative opportunities.
- Interviewing can be used to help identify what could be designed, to help refine hypotheses about a possible solution that is being considered, or to guide the redesign of an existing product that is already in the marketplace.
- Teams who share the experience of meeting their users are enlightened, aligned, and more empathetic.
Bulk Orders
We’re happy to work with you to arrange a bulk order at a discount:
#Units/Order | Discount |
---|---|
21-100 | 25% |
101+ | 30% |
Contact us using this form and we’ll make it happen.
Want to Meet the Author?
We’d love to help equip your team with the tools they need to excel. When you purchase 20 or more copies of select Rosenfeld Media titles, you unlock an exclusive 30-minute presentation or AMA (ask-me-anything) session with the participating author*—a unique opportunity to learn directly from the expert behind the book. Ideal for in-house book clubs!
- Engage Your Team: Spark deeper conversations and learning with a personalized session.
- Tailored Expertise: Dive into the topics that matter most to your business.
- Team Empowerment: Provide resources that inspire action and innovation.
Contact us to unlock the additional learnings and unique insights that come from talking directly with a Rosenfeld author.
*Offer subject to author availability.
Frequently Asked Questions
These common questions about security and their short answers are taken from Heidi Trost’s book Human-Centered Security. You can find longer answers to each in your copy of the book, in either the print or digital version.
Where does security impact the user experience?
Security impacts the user experience in nearly every part of the user journey. (Check out Chapter 1, “Security Impacts the User Experience,” for more details.) Security impacts the user experience most often when a user:
- Signs up or logs in.
- Sets up or configures a device, service, or account for the first time.
- Is asked for personal or financial information.
- Can view or edit the personal information of others (e.g., customer support).
- Receives communications about security or privacy (e.g., an email, a text message, or a security warning related to their device, account, or personal information).
- Has to make a security or privacy decision.
- Has to decide who or what to trust. (For example, is this message/post/website/warning legitimate?)
- Is using a connected device that could influence the physical world (e.g., a car, an IoT device, or machinery).
In these scenarios, your users typically aren’t thinking about security, which means that you and your cross-disciplinary teams need to be thinking about security.
I’m a designer, so what do I bring to the table when it comes to security? Who will listen to me?
If you design products, I guarantee you are designing for the security user experience—even if you have never thought of it that way before. (Check out Chapter 1, “Security Impacts the User Experience.”) The design decisions you make influence the security (and privacy) choices that users make and the actions they take. You (and your cross-disciplinary team) understand your product better than your users ever will—including the potential security threats that directly affect them. You are in a unique position to solve for those threats and protect your users from them.
How do I get buy-in to improve the security user experience?
When talking to leadership, try reframing the conversation around trust, rather than focusing on the word “security.” Trust is where you gain or lose customers. In other words, trust is where the business makes money or loses money.
If people don’t trust you with their information, they won’t sign up. If they lose trust in you because of the way you (mis)handle their information, or if they feel you’ve violated their safety or privacy, they’ll leave. If they can’t sign in to their account, they’ll leave (or rather, ironically, you’ve made it so they can’t come back). Not to mention the thousands of confused and angry customer service messages you’ll receive and need to address. (See Chapter 1, “Security Impacts the User Experience,” and Chapter 7, “Learn and Iterate.”)
When you’re looking to get buy-in and promote collaboration with cross-disciplinary teams, take a cue from my colleague John Robertson, senior principal UX researcher at Secureworks. John actively seeks out different groups at his organization and joins their Slack channels or participates in discussions around the latest research papers on topics like AI and security. John doesn’t have to do this—it isn’t part of his job description. But in these low-key forums, the exchange of ideas is inevitable. John learns about data science and security. The data science and security teams learn about human-centered design. When they have a question, they are more likely to seek John out. And vice versa. Win-win! Do not underestimate these informal channels.1 Chapter 4, “Find the Right People, Ask the Right Questions,” also has more information about finding and collaborating with your cross-disciplinary team members.