{"id":184348,"date":"2011-05-31T16:03:31","date_gmt":"2011-05-31T16:03:31","guid":{"rendered":"https:\/\/staging.rm.gfolkdev.net\/?p=184348"},"modified":"2022-10-06T18:30:43","modified_gmt":"2022-10-06T18:30:43","slug":"survey-book-of-the-month-may-2011","status":"publish","type":"post","link":"https:\/\/rosenfeldmedia.com\/survey-book-of-the-month-may-2011\/","title":{"rendered":"Survey book of the month, May 2011"},"content":{"rendered":"

OK, this month it gets serious. My pick is:<\/p>\n

Survey Errors and Survey Costs<\/a> by Robert M. Groves (1989, reprinted 2004).
\n<\/p>\n

One of the most influential books on survey methodology<\/h4>\n

This book was named one of the 50 most influential books in
\nsurvey research by the American Association of Public Opinion Research.<\/p>\n

Its author, Robert M. Groves, is the Director of the U.S. Census Bureau. He was appointed by President Obama in April 2009, and you’ll see from his biography that he’s won a whole raft of awards for his important work in survey methodology, mainly from his previous role as a professor at the University of Michigan but also from a previous stint at the Census Bureau in the 1990s.<\/p>\n

Integrating concepts of errors across disciplines<\/h4>\n

The crucial point about this book is that Groves looks across a swathe of different disciplines, including survey methodology, econometrics, and psychometrics, to consider every part of the survey process, where errors might arise in it, and how those errors interplay with costs.<\/p>\n

To take one of the simplest examples: “sampling error”. Suppose you take a sample from a population and calculate something, such as the mean (arithmetic average). If you take a different sample, you’ll likely get a slightly different mean. These variations give a “sampling error”, and it’s rather easy to show that sampling error shrinks as your sample size increases.<\/p>\n
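
To make that concrete, here’s a minimal simulation sketch (my own illustration, not from the book, assuming Python with NumPy): it draws repeated samples of increasing size from a made-up population and watches the spread of the sample means shrink.<\/p>\n

<pre><code>
# Illustrative sketch only (not from the book): show that the sampling error
# of the mean shrinks as the sample size grows.
import numpy as np

rng = np.random.default_rng(0)
# A made-up 'population' of 100,000 satisfaction scores.
population = rng.normal(loc=50, scale=10, size=100_000)

for n in (25, 100, 400, 1600):
    # Draw 1,000 independent samples of size n and record each sample mean.
    means = [rng.choice(population, size=n, replace=False).mean()
             for _ in range(1000)]
    # The spread of those means is the sampling error; it falls roughly
    # in proportion to one over the square root of n.
    print(f'n = {n:5d}   sampling error of the mean ≈ {np.std(means):.2f}')
<\/code><\/pre>\n

Run it and the spread roughly halves each time the sample size quadruples – the familiar one-over-the-square-root-of-n behaviour.<\/p>\n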

But larger samples cost more, and they are also more prone to other types of error. For example, there is an increased possibility that some of the data will simply be written down incorrectly, or otherwise mangled during the measurement process – a type of ‘measurement error’.<\/p>\n

The four types of error: coverage, nonresponse, sampling and measurement<\/h4>\n

Groves discusses four types of error. I’ll describe them with some examples from our typical surveys today.<\/p>\n

    \n
  1. Coverage error<\/em>, the possibility that some parts of the population fail to be sampled at all
    \nExample: if your survey is online, then you’ll exclude everyone who doesn’t have internet access<\/li>\n
  2. Nonresponse error<\/em>, from the failure to collect data on all persons in the population
    \nExample: you send out your survey, but the only people who respond are those who are exceptionally grumpy about your product<\/li>\n
  3. Sampling error<\/em>, from the natural variability across your target population
    \nExample: some respondents like your product a lot, others are lukewarm, others hate it. Any sample will have some variability depending on how many of each group happen to be selected for this survey<\/li>\n
  4. Measurement error<\/em>, which Groves describes as arising “from inaccuracies in responses recorded on the survey instruments” but could equally be called “everything else”, such as:
    \n– asking the wrong questions
    \n– recording the answers incorrectly
    \n– asking questions that provoke inaccurate answers<\/li>\n<\/ol>\n


An example of survey error in practice<\/h4>\n

As it happens, I responded to a survey today that neatly exhibited all the different types of survey error. It was asking about the use of “online services in professional work”.<\/p>\n