Introduction to usability testing for survey research


It’s always fascinating to encounter a profession whose interests overlap with our own in UX. The one I’ve been learning about this year is survey methodology, and I was delighted to find out that survey methodologists are really into UX as well.

The short version of this post: I joined forces with Emily Geisen, a survey methodologist, to teach a workshop on usability testing at the SAPOR conference. The slides are here: Introduction to usability testing for survey research.

The longer version? Keep reading, for:

• My visits to survey methodology conferences
• Survey methodologists run pilot tests
• Survey methodologists do cognitive interviews
• Survey methodologists are becoming enthusiastic usability testers
• Introduction to usability testing at SAPOR

My visits to survey methodology conferences

This year, I’ve attended three conferences for survey methodologists because I wanted to learn about the most recent research findings on surveys. I’m also fortunate that I’m working with Kantar, one of the world’s leading market research and insight businesses, helping them to understand and improve the user experience of their surveys, and they are very committed to sharing their knowledge and best practices.

I’ve written about two of these events before:

• The European Survey Research Association conference, where I co-presented with Richard Coombe of Kantar Operations on Using EEG in a usability test of a survey
• The 5th Internet Survey Methodology Workshop, which I attended at Statistics Netherlands in The Hague. We were given a charming and inspirational book about Dutch statistics that I chose as my August Book of the Month

This post is about my recent visit to the Southern Association for Public Opinion Research (SAPOR) conference to talk about usability testing.

Survey methodologists run pilot tests

Testing is nothing new for survey methodologists. All of the leading textbooks tell us firmly that if you don’t have the resources to do a pilot test (also known as a field test), then you don’t have the resources to do the survey.

A pilot test is a full run-through of the survey.

You may ask: so what’s the difference between a pilot test and the full survey? Answer: not a lot. The only difference is the sample size. Your full survey goes to everyone in your sample; your pilot test goes only to a small sub-sample, perhaps 10 to 50 respondents for a straightforward survey.
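
To make that sub-sampling step concrete, here’s a minimal Python sketch of drawing a pilot sub-sample from a sampling frame. The frame of respondent IDs, the pilot size of 30 and the function name are all my own illustration, not anything from the survey methodology literature:

    import random

    def draw_pilot_sample(frame, pilot_size, seed=1):
        """Draw a simple random sub-sample of the frame for a pilot test."""
        rng = random.Random(seed)  # fixed seed so the draw is reproducible
        return rng.sample(frame, pilot_size)

    # Illustrative sampling frame of 5,000 respondent IDs; pilot with 30 of them.
    frame = [f"respondent-{i:04d}" for i in range(5000)]
    pilot = draw_pilot_sample(frame, pilot_size=30)
    print(len(pilot), pilot[:3])

Everything else about the pilot (the questionnaire, the invitations, the fieldwork) is exactly the same as the full survey; only the sample differs.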

Survey methodologists do cognitive interviews

From the UX point of view, we can see the value in a pilot test, but isn’t that a bit late? What if the actual questionnaire doesn’t make sense to the respondents? Survey researchers also see that as a problem, and have been conducting ‘cognitive interviews’ since the 1980s.

Cognitive interviews focus on questions. Typically, an interviewer reads out a question and asks a respondent to think aloud while answering it. If you read this “how to” guide to cognitive interviewing, you’ll find that cognitive interviewers have exactly the same problems with persuading respondents to think aloud as we do in usability tests.

When survey methodologists got into cognitive interviewing in the 1980s, most surveys were done by telephone or face-to-face with an interviewer. The respondent didn’t have to read or click anything; all the interaction was between respondent and interviewer. So interaction has never been a core concern in cognitive interviews.

Survey methodologists are becoming enthusiastic usability testers

Traditionally, survey methodologists have been quite cautious about online surveys. They are concerned about difficulties such as:

• Coverage: although most people have Internet access, there are still important populations who do not, or who prefer not to access surveys online.
• Sampling frames: there are comprehensive directories of street addresses and of landline telephone numbers, but no easy way of getting hold of a statistically robust sample of email addresses.
• Mode effects: a human interviewer asking a question is a different mode from a computer asking the same question. People behave differently when interacting with another person compared to interacting with a computer, and their answers differ as well.

Despite these and other challenges, survey methodologists see the obvious advantages of speed, convenience and cost. The workshop I attended in The Hague was the 5th in its series, and as the workshop is only held every 18 months to 2 years, that makes it quite a long-running one. The Web Survey Methodology Bibliography has plenty of entries dating back to the 1990s.

With online surveys, we have a computer. That means we have human-computer interaction (HCI). And with HCI, we have usability, and usability testing. Many of us in UX have clients who are long-term, enthusiastic adopters of usability testing, and others who seem never to have thought of it. That’s exactly the same in survey methodology: some are very experienced, others are only just getting into it.

Introduction to usability testing at SAPOR

SAPOR is the Southern Association for Public Opinion Research, a chapter of AAPOR. The SAPOR annual conference is held in Raleigh, North Carolina, which happens to be where Emily Geisen works at RTI International. (RTI was originally Research Triangle Institute; the Research Triangle has one of its corners in Raleigh.) She contacted me and suggested that we co-present a workshop on usability testing. It was great fun to work with her, as it was obvious that she has a lot of first-hand experience of usability testing surveys as well as her extensive experience as a survey methodologist.

We had an enthusiastic group of attendees, who were researching a fascinating range of topics including: student opinions at universities, household expenditure, the care of prisoners, radio listening, and (clear winner for ‘most unusual topic’) elephant keepers. We had a great time, and I’d like to thank Emily, RTI and SAPOR for making it possible for me to be there.

Which brings me back to the short version of this post: you can find our slides here: Introduction to usability testing for survey research
