Sample Chapter: Surveys That Work

This is a sample chapter from Caroline Jarrett’s book Surveys That Work: A Practical Guide for Designing and Running Better Surveys (Rosenfeld Media, 2021).

Chapter 1: Goals: Establish Your Goals for the Survey

In this chapter, you’re going to think about the reason why you’re doing the survey (Figure 1.1).

Figure 1.1
It’s easier to hit a target if you know which one you’re aiming for.

By the end of the chapter, you’ll have turned the list of possible questions into a smaller set of questions that you need answers to.

Write down all your questions

I’m going to talk about two sorts of questions for a moment:

  • Research questions
  • Questions that you put into the questionnaire

Research questions are the topics that you want to find out about. At this stage, they may be very precise (“What is the resident population of the U.S. on 1st April in the years of the U.S. Decennial Census?”) or very vague (“What can we find out about people who purchase yogurt?”).

Questions that go into the questionnaire are different; they are the ones that you’ll write when you get to Chapter 3, “Questions.”

Now that I’ve said that—don’t worry about it. At this point, you ought to have neatly defined research questions, but my experience is that I usually have a mush of draft questions, topic titles, and ideas (good and bad).

Write down all the questions. Variety is good. Duplicates are OK.

Give your subconscious a chance

If you’re working on your own, or you have the primary responsibility for the survey in a team, then try to take a decent break between two sessions of writing down questions. A night’s sleep gives your subconscious a chance to work out what you really want to find out. If that isn’t practical, then maybe try a walk in the fresh air, a break to chat with a friend, or anything else that might provide a pause.

Get plenty of suggestions for questions

If you’re working with a team or you’re in an organization, then often when word gets out that there’s a survey ahead, colleagues will pile in with all sorts of suggestions for their questions. This can feel a little overwhelming at first, but it’s best to encourage everyone to contribute their potential questions as early as possible so that you can carefully evaluate all of them, focus on some goals for this specific survey, and have a good selection of other questions available for follow-up surveys and other research.

If I’m too restrictive at the very beginning, I find that everyone tries to sneak just one little extra essential question into the questionnaire a day—or even an hour—before the fieldwork starts. By then, it is too late to test the little extra questions properly, and they could sink my whole survey.

But while you’re still establishing the goals for the survey? Great! Collect as many questions as possible. Encourage everyone to join in—colleagues, stakeholders, managers, whoever you think might be interested. If you’re running a workshop, give the introverts some space by having a bit of silent writing where everyone captures their individual question ideas by writing them down.

Create a nice big spreadsheet of all the suggestions, a pile of sticky notes, or whatever idea-gathering tool works for you.

Ideally, make it clear that there’s a cutoff: suggestions before a particular date will get considered for this survey; miss the date, and they’ll be deferred until the next opportunity. This helps to encourage the idea of many Light Touch Surveys.

Challenge your question ideas

When you’ve gathered or created question ideas, it’s time to confront them with these four detailed challenges in Figure 1.2:

  • What do you want to know?
  • Why do you want to know?
  • What decision will you make based on the answers?
  • What number do you need to make the decision?

Figure 1.2
What decisions will you make based on the answers?

Ask: What do you want to know?

Surprisingly, I find that the question suggestions that I create or collect from colleagues often do not relate to what we want to know. Many times, I’ve challenged a question by saying, “OK, so you’re thinking about <xxx question>. What do you want to know?” and it turns out that there’s a gap between the question and the reason for asking it.

Probably the most common example is the question: “Are you satisfied?” The question is OK but very general.

Ask: Why do you want to know?

I’m usually working with someone else when I’m doing a survey. To help narrow down from “every possible suggestion” to a sensible set of goals for the survey, I ask “Why do you want to know the answers to these questions?” and we then go on to challenge ourselves with the other three questions in Figure 1.2.

If I’m on my own, then I find it helps to add “this time” or “right now”—to help me focus on the practical matter of getting my ideas down to something manageable. Come to think of it, that’s not a bad idea for a team, too—it helps all of them realize that they don’t have to ask everyone everything all at once.

Ask: What decision will you make based on the answers?

If you’re not going to make any decision, why are you doing the survey?

Look very hard at each of the suggested questions and think about whether or not the answers to them will help you make a decision.

Don’t worry at this stage about the wording of the questions or whether people will want to answer them. You’ll work on those topics in upcoming chapters.

But if the answers to a question won’t help you make a decision, set that question aside. Be bold! The question might be fascinating. You might be looking forward to reading the answers. But you’re trying to focus really hard on making the smallest possible useful survey. You don’t need to waste the question—it can go into the possible suggestions for next time.

At this point, you’ll have some candidate questions where you know what decisions you’ll make based on the answers.

Ask: What number do you need to make the decision?

In the opening chapter, “Definitions,” I emphasized that a survey is a quantitative method and the result is a number. Sometimes you’ll realize at this point that although you have candidate questions, you do not need numeric answers to them in order to make the decisions. That’s fine, but it also means that a survey is probably not the right method for you. Your work so far will not be wasted because you can use it to prepare for a more appropriate method.

Choose the Most Crucial Question (MCQ)

If you were only allowed answers to one of your candidate questions, which would it be?

That’s your Most Crucial Question (MCQ).

    The Most Crucial Question is the one that makes a difference. It’s the one that will provide essential data for decision-making.

You’ll be able to state your question in these terms:

    We need to ask _______.
    So that we can decide _______.

At this stage, don’t worry if it’s a Research Question (in your language, maybe even full of jargon) or the question that will go into the questionnaire (using words that are familiar to the people who will answer).

Test your goals: Attack your Most Crucial Question

Try attacking every word in your Most Crucial Question to find out what you really mean by it. Really hammer it.

Here’s an example: “Do you like our magazine?”

  • Who is “you”? Purchaser, subscriber, reader, recommender, vendor, or someone else?
  • What does “like” mean? Admire? Recommend? Plan to purchase? Actually purchase? Obsessively collect every edition? Give subscriptions as gifts?
  • What do you mean by “our”? Us as a brand? A department? A team? As a supplier to someone else?
  • What do you mean by “magazine”? Every aspect of it? The paper edition? The online one? The Facebook page? The article they read most recently? Some parts, but not others? Does it matter if they’ve read it or not?

I found a great attack on a question by Annie Pettit, survey methodologist. She starts with the question:

    “When was the last time you bought milk?”

Here’s how Annie attacks “bought” and “milk”:

    • Wait, do you care if the milk was purchased? Or could it be that we have an arrangement whereby we don’t actually pay for milk? Perhaps people who live on a farm with dairy cows, or people who own a convenience store?

    • Do you mean only cow milk? What about milk from goats, sheep, buffalo, camel, reindeer? Or what about milk-substitutes from nuts or plants like soy, almond, rice, and coconut that are labeled as milk? Were you really trying to figure out if we put a liquid on cereal? (Pettit, 2016)

(And she added a whole lot more about topics, like whether or not chocolate milk counts.)

Decide on your defined group of people

When you’ve really attacked your MCQ, look back and think about your “defined group of people”—the ones who you want to answer. Add them to your statement like this:

    We need to ask (people who you want to answer) the question (MCQ goes here), so that we can decide (decision goes here).

If your defined group of people is still vague—“everyone” or something equally woolly—then try attacking again. A strong definition of the group you want to answer at this point will help tremendously when you get to the next chapter, “Sample.”

But before you proceed to Chapter 2, let’s pause for a moment and think about your plans.

Check that a survey is the right thing to do

Is your research question something that you must explore by asking people, or would it be better to observe them?

Do you want to know “why?”—qualitative—or “how many?”—quantitative?

Let’s look at this definition again:

    A survey is a process of asking questions that are answered by a sample of a defined group of people to get numbers that you can use to make decisions.

I’m going to contrast that with this definition:

    An interview is a conversation where an interviewer asks questions that are answered by one person to get answers that help to understand that person’s point of view, opinions, and motivations.

Both of them rely on asking: the interview is about “why”—qualitative—and the survey is about “how many”—quantitative, as in Figure 1.3.

Figure 1.3
Contrasting interviews as qualitative and surveys as quantitative.

Must your MCQ be answered by people?

One of my favorite questions was on a printer manufacturer’s survey:

    “How many pages do you print in a month?”

I had no idea. I knew the answer was more than one and less than a full box of paper because I hadn’t bought a box of paper that month—but I didn’t feel sufficiently motivated to work out how many pages are in a full box. I guessed, wildly. Very poor data.

The real irony, though, was that my printer was connected to their customer feedback program and was giving them the exact figure all the time: their analytics should have told them.

Here’s another example that arrived in my inbox recently:

    We need to ask visitors to our website whether pop-ups make them feel less like buying from us so that we can decide whether to remove pop-ups.

I’m sure that client must have some good business reasons for using pop-ups that make them hesitate about removing them, but asking people whether they “feel like buying” is a notoriously unreliable thing to do. They may feel like buying, but not actually buy, or feel unlike buying, but buy anyway. (We’ll return to this topic in Chapter 3 when we look at the “Curve of Prediction.”)

There’s a much better quantitative method for questions like this: A/B testing, where you publish two versions and use analytics to decide which one contributes better to the desired outcome. A/B tests and the many other different types of analytics silently observe what people do without bothering them with questions. These are contrasted with surveys in Figure 1.4.

Figure 1.4
Analytics and A/B tests are ways of observing how many people do something without asking them.
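To make the contrast concrete, here is a minimal sketch in Python of the kind of comparison an A/B test produces. The pop-up scenario and every number in it are invented for illustration: the point is that you observe what each group of visitors actually did, then check whether the difference between the two versions is bigger than chance alone would explain.

    import math

    # Hypothetical traffic split -- all of these numbers are invented for illustration.
    visitors_a = 5000   # visitors who saw the page with pop-ups (version A)
    buyers_a = 230      # of those, how many went on to buy
    visitors_b = 5000   # visitors who saw the page without pop-ups (version B)
    buyers_b = 285      # of those, how many went on to buy

    # Conversion rates: the "how many" that analytics observes directly.
    rate_a = buyers_a / visitors_a
    rate_b = buyers_b / visitors_b

    # Two-proportion z-test: is the difference bigger than chance would explain?
    pooled = (buyers_a + buyers_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se

    print(f"With pop-ups:    {rate_a:.1%} bought")
    print(f"Without pop-ups: {rate_b:.1%} bought")
    print(f"z = {z:.2f}  (roughly, |z| > 1.96 suggests a real difference at the 95% level)")

Notice that nobody was asked whether they “felt like buying”: the decision rests entirely on what the two groups did.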

Do you want to find out “why”?

You may have spotted that we’re sneaking up on the four-way matrix in Figure 1.5. The quadrant we haven’t yet looked at is the top-left corner: observing to find out “why.”

It’s not always obvious why people are doing something. For example, if people tell you they can’t find things on your website, then search log analytics will tell you what they are searching for—but not why they are searching. Did they try searching straight away? Did they try a few clicks without success? Did they see your term for what they’re searching for but not recognize it because they had something different in mind?

Here’s another MCQ that I see quite often:

    We need to ask visitors to our website the question: “What do you dislike about our site?” so that we can decide what to improve.

Figure 1.5
A matrix for choosing the right method.

Leaving aside the problem that “What do you dislike” doesn’t have a numeric answer, you’ve got the more fundamental problem that there isn’t a direct connection between “What do you dislike” and “What should we improve?” You need to know why people dislike something in order to get ideas about how to change it.

You might turn to interviews, but it’s unreasonable to expect most people to retain all the little details that made something easy or difficult. Observing them as they use the thing is much easier for them—and much richer data for you.

In a usability test, you can observe a participant who is tackling some tasks—often in a research facility. Or you can go out to observe people in their natural setting, a field study.

Consider “why” alongside “how many”

A four-way matrix always makes it look as if the ideas are separate, doesn’t it? Of course, in reality, the techniques complement each other.

The route in Figure 1.6 is one that I took around the matrix for a client recently:

  • Analytics showed that sales of one product had dropped.
  • Usability tests revealed that people thought the website was no longer maintained, so the product must also be out-of-date.
  • Interviews at the same time revealed that people often left a long gap between deciding to buy the product and actually using it.
  • A survey told us that the out-of-date problem was affecting more people than the wait-to-use problem.

Figure 1.6
One of many possible routes around the matrix.

I would love to encourage you to try some triangulation.

    Triangulation is when you use a mixture of research methods and compare the results to improve your overall insights.

A draft presentation can help you decide between “why” and “how many”

A couple of years ago, I was chatting about surveys with user experience consultant Natalie Webb. Her tip was:

    “Create a draft of your presentation, based on the results you expect to get from your survey.”

It seemed a strange idea to me at first, but the more I’ve tried it, the more I like it as a way of testing whether I’ve really thought enough about what I want to ask and whether the number that I will get as a result of my survey really will help me to make a decision—the “so what” of surveys in Figure 1.7.

Figure 1.7
A draft presentation helps you to think about the “so what?” of your survey.

I worried that by drafting the presentation first, I’d be somehow constraining the direction of the research—preventing my team from thinking freely about what they were doing, closing down what they might learn.

Gradually, I realized that this is part of the power of surveys. Because you’re finding out “how many” of something, you need to understand the “why” before you start. If you don’t yet know enough about “why,” then you should be choosing to start with observation and interviews.

Think about what sort of number you need

Thinking about the “so what” and the number that you’ll need for the decision you’ll make also helps with another point to consider now: what sort of number do you need as your result? It may seem early, but statisticians will tell you that you must work out your statistical strategy before you collect the data, not afterward.

Do you need to know the actual number of people who answer a question in a particular way? For example, when I helped with a survey about planning an office move, I wanted to know how many people said that when the office moved to the new location, their commute would become excessively long.

Is it the proportion who answer one way rather than another? For example, I wanted to compare the proportion of people who claimed they would leave if the office moved to a new location to the proportion who said they would be likely to accept the change.

Are you looking for a mean (the arithmetical average)? For example, I might have considered whether increasing the mean commute by more than an hour would kill the idea.

Are you looking for a median (the value right in the middle when you place them all in order from largest to smallest)? Means can get easily distorted by one or two outlandishly large values. If one person’s commute suddenly became nearly impossible—10 hours or more—that would greatly increase the mean, but the median wouldn’t be affected very much.

And for design, I’m often looking at ranges and modes. The range is the difference between the largest and the smallest values, so with a 10-hour commute and another commute that’s zero because the person lived in an apartment above the possible new location, my range would be 10 hours. The mode is the most frequent value, and something that I find I have to consider very carefully for many design challenges—both to design for the people who answered with the most frequent value and to make sure that I’m not accidentally excluding people who don’t fit “the norm” for any reason.

Or something else? You may be doing a comparative survey so you’ll be considering what you want to compare from this survey to the next, or a modeling survey where you’ll do all sorts of advanced statistical manipulations, or something quite different.
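To make those different kinds of number concrete, here is a small sketch in Python with invented commute times (not data from the real office-move survey). It shows how the count, the proportion, the mean, the median, the mode, and the range each tell a slightly different story about the same answers, and how a single 10-hour outlier pulls the mean up much more than the median.

    from statistics import mean, median, mode

    # Hypothetical answers (in hours) to "How long would your commute to the
    # new office be?" -- invented numbers, purely for illustration.
    commutes = [0.5, 0.75, 1.0, 1.0, 1.0, 1.25, 1.5, 2.0, 2.5, 10.0]

    too_long = 1.5  # the threshold we treat as an "excessively long" commute

    count_too_long = sum(1 for c in commutes if c > too_long)  # actual number of people
    proportion_too_long = count_too_long / len(commutes)       # proportion answering that way

    print(f"Commutes over {too_long} hours: {count_too_long} people ({proportion_too_long:.0%})")
    print(f"Mean:   {mean(commutes):.2f} hours (pulled up by the single 10-hour value)")
    print(f"Median: {median(commutes):.2f} hours (barely moved by the outlier)")
    print(f"Mode:   {mode(commutes):.2f} hours (the most frequent answer)")
    print(f"Range:  {max(commutes) - min(commutes):.2f} hours (largest minus smallest)")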

Whatever you’re planning to do with the answers to your survey, some careful thought at this stage about those statistics will be well worth the time you put into it—and may send you back to have another review of your Most Crucial Question and how you plan to use it.

Determine the time you have and the help you need

So, you have a Most Crucial Question, you know the decision you’ll make, and you’ve thought a bit about the type of number you need to make that decision. It’s a good moment to think about timing and who needs to be involved.

First, think about the time available:

  • When do you need to have a result, and how much time can you put into it?
  • If you’re lucky enough to have team members to work with, how much time can they spare?
  • When will you deliver the report from the survey?

Next, think about the tools:

  • Do you or your organization already have a survey tool?
  • Do you know how to use it?
  • Will you need to buy or subscribe to one?

Finally, and perhaps most importantly, who else is involved?

  • Who needs to be involved in the survey but isn’t part of your team, such as the privacy or legal people?
  • Who will get the results from the survey?
  • Who is involved in making the decisions based on the results?

Interview first, survey later

A common mistake is to think that you’ll do a survey first and then do follow-up interviews with some of the people who answer.

The rule is: interview first, survey later. Two especially useful types of interviews are:

  • Interviews to find out what your defined group of people think about the topic of your survey (covered in Chapter 2)
  • Cognitive interviews—a special type of interview just for survey questions—to help you discover whether the questions are working (Chapter 3)

And, in fact, to get the best results from your survey, you’ll complement these interviews with two other techniques from the matrix, as noted in Figure 1.8:

  • Usability tests of the questionnaire (Chapter 4, “Questionnaire”)
  • A pilot test between the usability test and the survey itself (Chapter 5, “Fieldwork”)

Figure 1.8
We’ll use techniques from other parts of the matrix on our way to the survey.

If you want a couple of ideas for how to fit all those activities into the time you have available, then skip ahead to Chapter 8, “The Least You Can Do™.” A recent survey where I worked hard to get a single Most Crucial Question took me four days—spread out over a month, admittedly, but only because I had a week’s vacation in the middle.

What could possibly go wrong with the goals?

For many years, I was quite a purist about surveys. If you’d asked me “What can go wrong when choosing a goal for your survey?” I’d have answered, “Insisting on doing a survey when it’s the wrong method for the research problem.”

These days, I’ve mellowed. I know that sometimes colleagues or clients will carry on with a survey for all sorts of reasons, good and bad, when it’s not the ideal thing to do. If that’s happening to you, don’t worry. Keep making good choices, aim for a Light Touch Survey, and iterate as much as possible. No matter what the outcome is, you’ll definitely learn a lot about how to do a better survey next time.

Strictly between you and me, I’ve also become more relaxed about some of the other aims of this chapter. Couldn’t get down to exactly one Most Crucial Question? If you still have dozens of MCQs, that’s definitely a problem. But five or six candidates for MCQ? Not so bad—you can whittle them down when you start working on them in Chapter 3. Not entirely clear about the decision you’ll make? Have a go, and revisit it when you’ve done some more steps. You can iterate, after all.

But I wouldn’t often admit that to the team or the client because I know that when we can agree on one Most Crucial Question with a clear decision to be made, the rest of the survey process is going to be much easier and quicker. So I try pretty hard to persuade them to get there.

To be valid, the goals and questions must match

This brings me to the first of the challenges that you’ll meet through the steps of the survey process. In this chapter, you’ve been looking at the first tentacle of the Survey Octopus: “The reason you’re doing it,” as shown in Figure 1.9.

Figure 1.9
Lack of validity.

There’s always an error between each tentacle and the next one. In this case, it’s “lack of validity.”

    Lack of validity happens when the questions you ask do not match the reason why you are doing the survey and what you want to ask about.

Or in other words:

    A survey is valid when the questions you ask are a good match to the reason why you are doing the survey and what you want to ask about.

So work really hard on the reason why you are doing it, the decision that you’ll make, and that Most Crucial Question.

At this point, you will know

To have an easier ride with the next steps in the survey process, it helps a lot at this point if you know:

  • The resources you have for the survey
  • Who you want to answer your question—your defined group of people
  • The decision you’ll make based on the results
  • The Most Crucial Question to help you make the decision
  • Whether the Most Crucial Question needs to be answered by people or not
  • Whether a survey is the right thing to do
