Some of the Web’s Best Advice on Preventing Survey Bias

With the advent of DIY survey programs like Survey Monkey and Google Surveys, market research is becoming more democratized.  Marketers, sales people, and amateur market researchers now have massive base sizes at their disposal to produce the data they need (or want).  Further, as market research expertise becomes more specialized, CPG Market Researchers are out-sourcing more and more survey design and programming to their vendors.  As survey research democratizes and the check-and-balance expertise of in-house Market Researchers gets out-sourced, businesses run the risk of fielding research that is riddled with survey bias.
Survey bias, or response bias, can be caused by many different factors.  These include who is recruited, the order in which the questions appear, the way the questions are worded, and even how the different choice options are presented.  If you are trying to be objective, unintended survey bias can compromise your responses.  If you are quality-checking a survey for (or from) someone else, you need to be able to recognize whether, and how, they could be biasing their respondents.
Therefore, if you are not a market research expert and you are planning to field your first survey, this article is for you.  Even if you ARE a market research expert and you have to do quality control on a survey, this article is also for you.  I have collected some of the web’s best advice and examples on how to ferret out and correct survey bias and summarized them here.

The Source Approach:  Stattrek.com on Survey Bias From Non-Representative Sampling

As Stattrek says, “A good sample is representative.”  In their article on sampling bias as a cause of survey bias, they discuss how the recruiting of the respondent sample can influence results.  Many researchers using today’s DIY tools are willing to rely on the large (n=1000+) sample sizes those tools offer to make up for sampling bias.  As Stattrek observes, this isn’t always sufficient.  Their article outlines three forms of survey bias caused by non-representative respondent samples:

  • Undercoverage Bias:  The sample is not representative because it misses some significant portion of the population.  The converse, over-coverage of a particular segment, is also a problem.  In either case, the appropriate share of voice is skewed and so are the results.
  • Nonresponse Bias:  Depending on how the survey is worded, some populations of respondents may elect to drop out rather than complete the full survey.  If a relevant population of would-be respondents is somehow silenced by the survey (either intentionally or accidentally), the results can be skewed.  This can occur for many reasons, so be sure to look for trends in your non-complete sample, as in the sketch after this list.
  • Voluntary Response Bias:  Stattrek cites the example of radio show audience call-in polls.  These listeners are already self-selected.  As online surveys and do-not-call lists grow in popularity, even the method that you choose to conduct the survey may create bias.  Are you getting too many technophiles?  Are you over-sampled with land-line users?  Again, these are reasons to think carefully about where your respondents come from.
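One way to look for those drop-out trends is to pull the raw export from your survey tool and compare completion rates across segments.  Here is a minimal sketch in Python, assuming you can export partial responses as a CSV; the column names (“age_group”, “completed”, “last_question_answered”) are hypothetical stand-ins for whatever your platform actually provides:

```python
import pandas as pd

# Hypothetical export: one row per respondent, including partials.
responses = pd.read_csv("survey_export.csv")

# Completion rate by segment: a segment completing at a much lower
# rate than the others is a warning sign for nonresponse bias.
completion_by_segment = responses.groupby("age_group")["completed"].mean()
print(completion_by_segment.sort_values())

# Where do people quit?  A spike at one question suggests that
# question is driving the drop-outs.
incomplete = responses[~responses["completed"].astype(bool)]
print(incomplete["last_question_answered"].value_counts().head(10))
```

If the drop-outs cluster around one question, rewording or relocating that question is usually cheaper than re-fielding the whole study.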

Today, the DIY technology is pretty good at helping users achieve representative samples.  However, it is the responsibility of the researcher to ensure she is recruiting a random, representative sample of respondents.
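If you have census or panel benchmarks for your target population, a quick goodness-of-fit check can tell you whether your sample’s mix is plausibly representative.  Below is a hedged sketch using SciPy; the population shares and group labels are illustrative assumptions, not real census figures:

```python
from scipy.stats import chisquare

# Assumed population shares (replace with real census/panel data).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
# Observed counts from your sample export (illustrative numbers).
sample_counts = {"18-34": 420, "35-54": 310, "55+": 270}

total = sum(sample_counts.values())
observed = [sample_counts[g] for g in population_share]
expected = [population_share[g] * total for g in population_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
# A small p-value means the sample's age mix differs significantly
# from the population: a red flag for undercoverage bias.
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
```

This is only a screen, not a fix; if the mix is off, re-weighting or re-recruiting is still on you.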

The Structural Approach:  FluidSurveys.net Describes How To Avoid Survey Bias and Maximize Response Rate Through Good Survey Structure

FluidSurveys University offers several excellent articles on how to properly design a survey.  This article on survey structure and its counterpart on survey bias are two of my favorites.  They succinctly describe how to categorize survey questions and provide suggestions on how to order them.
In summary, FluidSurveys describes four kinds of survey questions and recommends that they appear in this order in the survey (a small ordering sketch follows the list):
  1. Screening Questions:  The questions that determine whether the respondent is a member of the intended sample of the survey.
  2. “Easy-to-Answer” Questions:  Typically, these are close-ended questions that require little creative thought from the respondent.
  3. Difficult-to-Answer Questions:  FluidSurveys describes these as “question[s] requir[ing] the respondents to use their memory and elaborate on their answers in their own words.”
  4. Socially Loaded Questions:  FluidSurveys recommends that you leave potentially embarrassing or personal questions until the end of the survey in order to ensure the best response rate.
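To make that ordering concrete, here is a small sketch that encodes the recommended sequence as data and flags questions that appear out of order.  The section names and sample questions are hypothetical; the point is simply that the structure becomes checkable:

```python
# FluidSurveys' recommended ordering, expressed as an explicit sequence.
RECOMMENDED_ORDER = ["screening", "easy", "difficult", "socially_loaded"]

# A hypothetical survey: (section, question text) pairs.
survey = [
    ("screening", "Do you own a video streaming device?"),
    ("easy", "How often do you stream video in a typical week?"),
    ("difficult", "Describe the last time a streaming service disappointed you."),
    ("socially_loaded", "Which income bracket describes your household?"),
]

def check_order(survey, order=RECOMMENDED_ORDER):
    """Flag any question whose section appears earlier than it should."""
    last_rank = -1
    for section, question in survey:
        rank = order.index(section)
        if rank < last_rank:
            print(f"Out of order: {question!r} ({section})")
        last_rank = max(last_rank, rank)

check_order(survey)  # prints nothing when the structure is sound
```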

I cannot say enough good things about FluidSurveys University and how much help they can provide novice survey writers.  However, I have two additional question types and pieces of advice to add.

First, I think it’s important to caveat this list with Purchase Intent questions.  Not every survey is a concept test, but many surveys ask opinion questions about purchase likelihood or item/idea preference.  In my experience, these questions always belong immediately after the Screening Questions.  Typically, they follow the formula of “Easy-to-Answer” questions, which is good.  The reason I suggest they lead the “Easy-to-Answer” section is that the more questions that precede them, the more influence the survey writer has over the participant’s answer.

Second, at the end of the survey, there is room for non-essential demographic questions.  These are questions that are not included in the screening section but are helpful diagnostically when analyzing the data.  For example, you might want to screen for consumers who have video streaming devices in their home; only respondents who say “Yes” move on.  However, once you get the data back, it might be helpful to split responses by gender or by income level.  Those questions are often best left to the end, but may not seem obviously “Socially Loaded.”  A sketch of that kind of diagnostic split follows.
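For instance, assuming the results export as a CSV with hypothetical columns “gender”, “income_bracket”, and a 1-5 “purchase_intent” rating, the split is a one-liner per cut:

```python
import pandas as pd

# Hypothetical results export; adjust column names to your platform.
results = pd.read_csv("survey_results.csv")

# Mean purchase intent by gender, then by income bracket: big gaps
# between segments are exactly what these trailing questions reveal.
print(results.groupby("gender")["purchase_intent"].mean())
print(results.groupby("income_bracket")["purchase_intent"].agg(["mean", "count"]))
```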

The Detailed Approach:  SurveyTown.com Helps You Diagnose Whether Your Questions Are Worded in a Way That Creates Bias

Even if you follow FluidSurveys’ advice and create a well-structured survey, the way you word your questions may still create survey bias.  There are a number of ways to load the wording of questions to create survey bias.  I am not recommending them; I am sharing them here so you know to AVOID them.

SurveyTown.com provides a nice overview of 10 different kinds of question formats to avoid in surveys.  Some create outright survey bias; others simply produce answers that are unhelpful.  Here is a summary of their list, with a toy wording checker after it:

  1. Leading Questions:  Questions worded in a way that suggest a “right” answer to the survey respondent.
  2. Loaded Questions:  Questions worded in a way that assumes something about the respondent and forces them to answer as if it were true.
  3. The Double-Barreled Question:  SurveyTown describes these as a question that, “forces your respondents to answer two questions at once.”
  4. The Absolute Question:  Yes/No questions that talk in universals like “always,” “every,” and “never.”
  5. The Unclear Question:  Questions that are worded imprecisely or don’t provide you a definitive answer to the spirit of your question.
  6. The Multiple Answer Question:  This does not mean multiple choice.  It means that the same answer possibility appears in multiple response options.  This can be an issue when offering ranges as your response options; for example, “18-25” and “25-35” both contain 25.
  7. Questions That Don’t Allow Respondents to Opt-Out:  In the article, SurveyTown calls this “Prefer Not to Answer.”  In short, and especially if it is a “Socially Loaded” question, allow the respondent the option to “Prefer Not to Answer.”  This is a better alternative than their lying or quitting the survey, either of which can create survey bias.
  8. The Incomplete Question:  SurveyTown calls this “Include All Possible Answers.”  Offering only a handful of possible choices among the responses creates survey bias by eliminating possible answers.  A possible, though often unsatisfactory, way around this is to simply allow “Other” and “None of These” as response options.
  9. Questions With Inaccurate Scales:  If you want truth and not survey bias, give respondents the full spectrum of options with your response scale.
  10. Survey Structure:  Lastly, SurveyTown notes that poor survey structure can lead to survey bias.  This is discussed above.
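A few of these pitfalls are mechanical enough to screen for automatically.  Here is a toy linter, a sketch only, that flags absolute wording (#4), a possible double-barreled question (#3), and overlapping range options (#6).  The heuristics are mine, not SurveyTown’s, and a human still needs to read the survey:

```python
import re

ABSOLUTES = {"always", "never", "every", "all"}

def lint_question(text, options=None):
    """Return a list of warnings for a question and its response options."""
    warnings = []
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & ABSOLUTES:
        warnings.append(f"absolute wording: {sorted(words & ABSOLUTES)}")
    if " and " in text.lower():
        warnings.append("possible double-barreled question ('and')")
    if options:
        # Parse "low-high" range options and flag any pair that
        # overlaps or shares a boundary value.
        matches = (re.fullmatch(r"(\d+)\s*-\s*(\d+)", o) for o in options)
        ranges = [tuple(map(int, m.groups())) for m in matches if m]
        for (a1, b1), (a2, b2) in zip(ranges, ranges[1:]):
            if a2 <= b1:
                warnings.append(f"overlapping ranges: {a1}-{b1} and {a2}-{b2}")
    return warnings

print(lint_question("Do you always enjoy fast and affordable shipping?"))
print(lint_question("What is your age?", options=["18-25", "25-35", "35-50"]))
```

The second call flags both range pairs because 25 and 35 each appear in two options, exactly the Multiple Answer problem from item 6.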

If any of these survey descriptions or summaries were unclear, you can see more thorough explanations with examples at SurveyTown.com in this article.

The Academic Approach:  Researchgate.net Discusses Survey Bias from Position and Order of a Stimulus

The interesting thing about survey research is that it has created its own realm of meta-research.  By this I mean that many academics have actually done studies on how to conduct Market Research studies.  If you Google “survey bias” or “survey structure bias” there is an entire subsection dedicated to scholarly articles on the subject.

Among the most interesting articles I read to prepare this post was Danny Campbell and Seda Erdem’s Position Bias in Best-Worst Scaling Surveys: a Case Study on Trust in Institutions.  If you are interested in reading the full article you can find it at Researchgate.net here.

Campbell and Erdem’s article is a good reminder that human beings use heuristics in making decisions.  This is true in surveys too.  One of those heuristics is basing choice strategies on the order in which the choices appear.  This position bias is a well-documented form of survey bias.  Interestingly, Campbell and Erdem’s research found:

Our results show that around half of the consumers used position as a schematic cue when making choices. We find the position bias is particularly strong when consumers chose their most trustworthy institution compared to their least trustworthy institution. In light of our findings, we recommend researchers in the field to be aware of the possibility of position bias when designing best-worst scaling surveys.
Even if you are not using best-worst scaling in your DIY survey platform, you should be aware of position bias.  Tools like Google Surveys and Survey Monkey have the ability to change the order in which visual stimuli and question responses appear for different respondents.  Take advantage of this technology.  By intentionally balancing the order in which stimuli are exposed and the choices appear, you can neutralize some of the position bias in your survey.  A minimal sketch of what that randomization looks like follows.
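Under the hood, per-respondent rotation amounts to shuffling the option list independently for each respondent, so no option systematically benefits from appearing first.  A minimal sketch, with hypothetical institution names standing in for your actual choices:

```python
import random

OPTIONS = ["Institution A", "Institution B", "Institution C", "Institution D"]

def options_for_respondent(respondent_id):
    # Seeding with the respondent id keeps each person's order stable
    # across page reloads while still varying it across respondents.
    rng = random.Random(respondent_id)
    shuffled = OPTIONS[:]
    rng.shuffle(shuffled)
    return shuffled

for rid in (1, 2, 3):
    print(rid, options_for_respondent(rid))
```

The DIY platforms do this for you when you enable answer rotation; the sketch just shows why it neutralizes position bias on average.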

Conclusion:

Although this is not a comprehensive list of the factors causing survey bias, it should be enough to get you started.  Using this article as a guide, you should be able to:

  • Ensure you are recruiting a representative sample of the population you want to learn from.
  • Ensure that the structure of your survey maximizes completes and isn’t a cause of survey bias.
  • Review the questions you included to ensure they are not written in a way that creates survey bias.
  • Ensure the order in which you present stimuli, options, and choices is done intentionally to mitigate positioning effects as a kind of survey bias.

As I said, this article and list are by no means complete.  There are many other great resources on the web to help you diagnose and reduce survey bias.

If you have other thoughts or suggestions, feel free to leave them in the comments section below.
