Some of the Web’s Best Advice on Preventing Survey Bias
The Source Approach: Stattrek.com on Survey Bias From Non-Representative Sampling
As Stattrek says, “A good sample is representative.” In their article on sampling bias as a cause of survey bias, they discuss how the way a respondent sample is recruited can influence results. Many researchers using today’s DIY tools are willing to rely on the large (n=1000+) sample sizes those tools offer to make up for sampling bias. As Stattrek observes, this isn’t always sufficient. In their article, they outline three forms of survey bias caused by non-representative respondent samples:
- Undercoverage Bias: The sample misses some significant portion of the population and so fails to be representative. The converse, over-coverage of a particular group, is also a problem. In either case, the appropriate share of voice is skewed and so are the results.
- Nonresponse Bias: Depending on how the survey is worded, some populations of respondents may elect to drop out rather than complete the full survey. If a relevant population of would-be respondents is somehow silenced by the survey (whether intentionally or accidentally), the results can be skewed. This can occur for many reasons, so be sure to look for trends among respondents who did not complete the survey.
- Voluntary Response Bias: Stattrek cites the example of radio show audience call-in polls. These listeners are already self-selected. As online surveys and do-not-call lists grow in popularity, even the method you choose to conduct the survey may create bias. Are you getting too many technophiles? Are you over-sampling land-line users? Again, these are reasons to think carefully about where your respondents come from.
Today, the DIY technology is pretty good at helping users achieve representative samples. However, it is the responsibility of the researcher to ensure she is recruiting a random, representative sample of respondents.
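To make the sampling point concrete, here is a minimal Python sketch. The population mix, labels, and sizes are all invented for illustration; the point is simply that a true random sample tends to mirror the population’s composition, while a convenience sample carries no such guarantee.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

# Hypothetical population: 60% land-line users, 40% mobile-only users.
population = ["landline"] * 600 + ["mobile"] * 400

# A simple random sample tends to mirror the population's shares;
# a convenience sample (say, the first 200 callers to a radio show)
# would not give you this property.
sample = random.sample(population, 200)

landline_share = sample.count("landline") / len(sample)
print(f"Land-line share in sample: {landline_share:.2f} (population: 0.60)")
```

Run it a few times with different seeds and the sample share stays close to 0.60; that stability is exactly what self-selected or under-covered samples lack.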
The Structural Approach: FluidSurveys.net Describes How To Avoid Survey Bias and Maximize Response Rate Through Good Survey Structure:
- Screening Questions: The questions that determine whether the respondent is a member of the intended sample of the survey.
- “Easy-to-Answer” Questions: Typically, these are closed-ended questions that require little creative thought from the respondent.
- Difficult-to-Answer Questions: FluidSurveys describe these as “question[s] requir[ing] the respondents to use their memory and elaborate on their answers in their own words.”
- Socially Loaded Questions: FluidSurveys recommend that you leave potentially embarrassing or personal questions until the end of the survey in order to ensure the best response rate.
I cannot say enough good things about FluidSurveys University and how much help it can provide novice survey writers. However, I have two additional question types and pieces of advice to add.
First, I think it’s important to caveat this list with Purchase Intent questions. Not every survey is a concept test, but many surveys ask opinion questions about purchase likelihood or item/idea preference. In my experience, these questions always belong immediately after the Screening Questions. Typically, they follow the formula of “Easy-to-Answer” questions, which is good. The reason I suggest they lead the “Easy-to-Answer” section is that the more questions precede them, the more influence the survey writer has over the survey participant’s answer.
Second, at the end of the survey, there is room for non-essential demographic questions. These are questions that are not included in the screening section, but are helpful diagnostically when analyzing the data. For example, you might want to screen for consumers who have video streaming devices in their home. Only respondents who say “Yes” move on. However, once you get the data back, it might be helpful to split responses by male and female respondents or by income level. Those questions are often best left to the end, but may not seem obviously “Socially Loaded.”
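The diagnostic split described above is easy to sketch in code. The records, field names, and intent scores below are entirely hypothetical; the sketch just shows how end-of-survey demographics let you cut the same answer by any group after the fact.

```python
from collections import defaultdict

# Hypothetical completed responses: each record holds a purchase-intent
# score plus diagnostic demographics collected at the end of the survey.
responses = [
    {"gender": "female", "income": "high", "intent": 4},
    {"gender": "male",   "income": "low",  "intent": 2},
    {"gender": "female", "income": "low",  "intent": 5},
    {"gender": "male",   "income": "high", "intent": 3},
]

def split_by(records, key):
    """Mean purchase-intent score, grouped by any demographic field."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["intent"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(split_by(responses, "gender"))  # mean intent by gender
print(split_by(responses, "income"))  # same data, cut by income instead
```

Because the demographics were captured (rather than screened on), the same dataset supports as many cuts as you collected fields.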
The Detailed Approach: SurveyTown.com Helps You Diagnose Whether Your Questions are Worded in a Way That Creates Bias:
Even if you follow FluidSurveys’ advice and create a well-structured survey, the way you word your questions may also create survey bias. There are a number of ways to load the wording of questions to create survey bias. I am not recommending them. I am sharing them here so you know to AVOID them.
SurveyTown.com provides a nice overview of 10 different kinds of question formats to avoid in surveys. Some create outright survey bias. Others simply produce answers that are unhelpful. Here is a summary of their list:
- Leading Questions: Questions worded in a way that suggests a “right” answer to the survey respondent.
- Loaded Questions: Questions worded in a way that assumes something about the respondent and forces them to answer as if it were true.
- The Double-Barreled Question: SurveyTown describes these as a question that “forces your respondents to answer two questions at once.”
- The Absolute Question: Yes/No questions that talk in universals like “always,” “every,” and “never.”
- The Unclear Question: Questions that are worded imprecisely or don’t provide you a definitive answer to the spirit of your question.
- The Multiple Answer Question: This does not mean multiple choice. It means that the same answer possibility appears in multiple response options. This can be an issue when offering ranges as your response options.
- Questions That Don’t Allow Respondents to Opt-Out: In the article, SurveyTown calls this the “Prefer Not to Answer.” In short, especially if it is a “Socially Loaded” question, allow the respondent the option to “Prefer Not to Answer.” This is a better alternative to their lying or quitting the survey, either of which can create survey bias.
- The Incomplete Question: SurveyTown calls this “Include All Possible Answers.” Offering only a handful of possible choices among the responses creates survey bias by eliminating possible answers. A possible, though often unsatisfactory, way around this is to simply allow “Other” and “None of These” as response options.
- Questions With Inaccurate Scales: If you want truth and not survey bias, give respondents the full spectrum of options with your response scale.
- Survey Structure: Lastly, SurveyTown notes that poor survey structure can lead to survey bias. This is discussed above.
If any of these survey descriptions or summaries were unclear, you can see more thorough explanations with examples at SurveyTown.com in this article.
The Academic Approach: Researchgate.net Discusses Survey Bias from Position and Order of a Stimulus
The interesting thing about survey research is that it has created its own realm of meta-research. By this I mean that many academics have actually done studies on how to conduct Market Research studies. If you Google “survey bias” or “survey structure bias” there is an entire subsection dedicated to scholarly articles on the subject.
Among the most interesting articles I read to prepare this post was Danny Campbell and Seda Erdem’s Position Bias in Best-Worst Scaling Surveys: a Case Study on Trust in Institutions. If you are interested in reading the full article you can find it at Researchgate.net here.
Campbell and Erdem’s article is a good reminder that human beings use heuristics in making decisions. This is true in surveys, too. One of those heuristics is basing choice strategies on the order in which those choices appear. This position bias is a well-documented form of survey bias. Interestingly, Campbell and Erdem’s research found:
Our results show that around half of the consumers used position as a schematic cue when making choices. We find the position bias is particularly strong when consumers chose their most trustworthy institution compared to their least trustworthy institution. In light of our findings, we recommend researchers in the field to be aware of the possibility of position bias when designing best-worst scaling surveys.
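The standard defense against position bias is rotation: randomize the order of options per respondent so no choice is systematically first or last. Here is a minimal sketch, with an invented list of institutions loosely echoing the trust-in-institutions setting; it is an illustration of the randomization idea, not Campbell and Erdem’s actual design.

```python
import random

# Hypothetical best-worst scaling options; the labels are made up.
institutions = ["banks", "press", "courts", "parliament"]

def build_task(options, rng):
    """Return one respondent's task with the option order randomized."""
    shuffled = options[:]   # copy, so the master list stays untouched
    rng.shuffle(shuffled)
    return shuffled

rng = random.Random(7)      # seeded here only so the sketch is repeatable
tasks = [build_task(institutions, rng) for _ in range(3)]
for t in tasks:
    print(t)
```

Across many respondents, each institution appears in each position roughly equally often, so any residual position effect averages out rather than loading onto one option.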
Although this is not a comprehensive list of the factors causing survey bias, it should be enough to get you started. Using this article as a guide, you should be able to:
- Ensure you are recruiting a representative sample of the people you want to learn from.
- Ensure that the structure of your survey maximizes completes and isn’t a cause of survey bias.
- Review the questions you included to ensure they are not written in a way that creates survey bias.
- Ensure the order in which you present stimuli, options, and choices is done intentionally to mitigate positioning effects as a kind of survey bias.
As I said, this article and list are by no means complete. There are many other great resources on the web to help you diagnose and reduce survey bias.
If you have other thoughts or suggestions, feel free to leave them in the comments section below.