A survey that could be better: Radisson
05/30/2011
Have you ever had a survey that was sort of OK, but tripped you up with a few questions here and there?
My friend and colleague Gerry Gaffney of Information and Design tweeted about a survey he had a few issues with. It seemed like a good time to start talking about specific surveys and drawing some general lessons from them, particularly as I was able to persuade him to write this guest post. Thanks, Gerry, and here we go.
Are you reaching the right respondents?
Recently I stayed at a Radisson hotel in Sydney for a few nights.
After I left, I was invited by email to respond to a survey about my experience. As these things go, it wasn’t badly designed. I’ve listed a few specifics below, but it’s also worth making some general comments to consider when sending out this type of survey:
- Respondents are self-selecting.
Why do people respond to your survey? Are you going to over-sample those who were really happy, those who were really unhappy, or those who have time on their hands?
I don’t know how to offset this, other than trying to get as high a response rate as possible. The lower your response rate, the more likely it is that you’ve only got responses from people with an unusual point of view.

In this case, Radisson didn’t offer me any incentive to respond. Many busy people would automatically delete such invitations. As Caroline has pointed out previously on this blog, an incentive may help increase response. I confess that only my interest in forms design persuades me to respond to voluntary surveys from commercial organisations, but a $5 Amazon token would probably spark my acquisitiveness or loss-aversion bias.
- Respondents are busy.
Radisson promised that the survey would take “5 to 8 minutes”. A remarkable degree of precision, I thought, and wondered how it had been achieved.
- Respondents don’t care as much as you do.
This is a perennial problem, and one that UX people spend a great deal of time communicating. It’s often hard for teams to realise that in the real world “out there”, nobody cares very much about their product or service. This means that expecting deep thought and analysis from respondents is unrealistic.
Have you chosen the right values and labels for your ratings?
Radisson asks respondents to rate various elements on a 10-point scale. They’ve also added linguistic labels (“Excellent”, “Very Good” and so on).
Possibly the labels are there to help people interpret the numeric scale. If so, the numeric scale is inadequate on its own and should be redesigned.
The labels unfortunately present an unbalanced scale. At least three of them (“Excellent”, “Very Good”, “Good”) are unambiguously positive, and only one (“Poor”) is unambiguously negative. This lack of balance introduces a bias: respondents have at least six places to assign a positive response, and at most four to assign a negative one.
The lack of precision in the previous paragraph is a result of the ambiguous “Fair”. Some people may interpret this as a positive rating, some as a negative. For example, Elizabeth Buie in a tweet on this topic said that she interprets fair as “slightly negative”. This more or less aligns with my personal interpretation, although in other contexts “fair” means “reasonable” or “appropriate”. In any case, the apparent attempt to clarify the purpose of the numeric score has resulted in a muddied solution.
Have you offered a mid-point?
Many survey designers seem to hate letting respondents be neutral. I’ve often heard them say things like “we want them to make a choice”. The 10-point scale used by Radisson, for example, has no mid-point, so even if it is balanced it does not allow the respondent to be neutral.
In practice, when observing people filling in paper-based questionnaires, you will see them circle the two points on either side of the non-existent mid-point. If you demand that they make a single selection, they will choose one. However, what this achieves is to extract an incorrect response from someone who doesn’t believe that response is appropriate. It’s not exactly water-boarding, but it does amount to a forced confession whose value is tainted by the method of extraction.
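To make “balanced” concrete, here’s one way a five-point scale with symmetric labels and an explicit neutral mid-point might be represented as data. This is purely a sketch in TypeScript; the labels and structure are my own illustration, not anything from the Radisson survey.

```typescript
// Illustrative only: a balanced five-point scale with symmetric labels
// and an explicit neutral mid-point.
interface ScalePoint {
  value: number;
  label: string;
}

const balancedScale: ScalePoint[] = [
  { value: 1, label: "Very poor" },
  { value: 2, label: "Poor" },
  { value: 3, label: "Neither poor nor good" }, // the neutral mid-point
  { value: 4, label: "Good" },
  { value: 5, label: "Very good" },
];
```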
Have you allowed for “don’t know”, “don’t care”, “can’t remember”, and “not applicable”?
Radisson wanted to know whether I’d been asked to join some rewards program. I can’t remember, but there was no option to tell them so.
In some cases, they did allow for “N/A” as an answer. But “N/A” is jargon. It’s better to spell it out, offering one of “not applicable”, “not available”, “no answer” or whatever specific wording works with your question. Just make sure that you always allow for it.
Are you asking too many questions?
At some point respondents get tired of your survey. I would hazard a guess that this occurs at a point way, way earlier than most organisations realise. In the case of the Radisson survey, they have done two things to delay the onset of fatigue:
- Provided an up-front estimate of time to complete (the “5 to 8 minutes”, remember?)
- Included a progress indicator at the top right.
Progress indicators are great; they can extract additional effort from the reluctant. However, their power is limited.
In this case, after several screens, Radisson will probably find that respondents are taking the path of least resistance, either abandoning the survey entirely or just clicking down a vertical line with very little attention to the individual questions they’re answering.
I wonder whether any respondent really wants to rate both water pressure and drainage on a 10-point scale.
The preventative for fatigue is brevity.
Are you using standard UI design?
On one screen, Radisson used check boxes instead of option (radio) buttons. Strangely, they then went to the extra effort of creating an error state to catch respondents who chose two options that were supposed to be mutually exclusive.
The message should have read “Sorry. Please imagine that the check boxes are radio buttons”.
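The simpler fix is to use the standard control: radio buttons that share a name are mutually exclusive by definition, so the browser enforces single selection and no error state is needed. Here’s a minimal sketch in TypeScript (my own illustration; the function and field names are hypothetical, not Radisson’s code):

```typescript
// Render a mutually exclusive question with radio buttons rather than
// check boxes. Sharing one `name` makes the browser enforce a single
// selection, so no error trapping is required.
function renderExclusiveChoice(
  container: HTMLElement,
  questionName: string, // hypothetical field name
  options: string[],
): void {
  for (const option of options) {
    const label = document.createElement("label");
    const input = document.createElement("input");
    input.type = "radio";      // not "checkbox"
    input.name = questionName; // the shared name groups the buttons
    input.value = option;
    label.append(input, ` ${option}`);
    container.append(label, document.createElement("br"));
  }
}

// Usage: renderExclusiveChoice(form, "joinedRewards", ["Yes", "No", "Can't remember"]);
```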
Are you using phrases that your respondents understand in the same way?
In usability testing, one of the first things you find out is where wording is confusing or, as in this example, unintentionally amusing. Radisson asks whether I “took” their breakfast. Not being a thief, of course I didn’t.
Are you allowing enough space for the answers?
Finally, Radisson asks me if I experienced a problem, but if I don’t choose one of the pre-defined options, then I’m only allowed a paltry 30 characters to respond. Small, arbitrary limitations are rude and inconsiderate.
I’ve frequently had people justify such limits by saying “We don’t want them to send in ‘War and Peace’.” I’m not aware of any respondent who has actually attempted to do this, but it would certainly be reasonable to truncate anything after, say, two pages of text. But not 30 characters!
It’s fine to alert people that space is somewhat limited, but “words” rather than “characters” should be the measure. In English, a word averages around five letters, so you can divide your character total by five, knock a little off as a fudge factor, and list that. For example, if your field is restricted to 1024 characters, you can list this as “200 words”. Only forms geeks like Caroline and me would ever count them and castigate you for it.
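If you want that conversion handled automatically, a tiny helper along these lines would do; this is just a sketch of the arithmetic above, with the function name and rounding choice my own.

```typescript
// Convert a character limit into an advertised word count, rounding
// down to a tidy figure so respondents don't hit the cap mid-sentence.
const AVG_CHARS_PER_WORD = 5; // roughly five letters per English word

function advertisedWordLimit(charLimit: number): number {
  const words = charLimit / AVG_CHARS_PER_WORD;
  return Math.floor(words / 10) * 10; // round down to the nearest ten
}

console.log(advertisedWordLimit(1024)); // 200, matching the example above
```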
Conclusion: could do better
This Radisson survey wasn’t really terrible. It probably works quite well for the organisation. But a little more attention to detail, and a little usability testing, could make it better.
And that’s almost universally true.
Postscript from Caroline
Thanks very much, Gerry, for your discussion of this survey. If anyone else would like to talk about a particular survey they’ve come across, or would like me to talk about it for them, please get in touch.