KS&R Blogs


Blogger’s Disclosure: The following blog entry may be considered "old school" or even "outdated" by some – I beg to differ!

When I first started out in market research 20+ years ago, we regularly pre-tested quantitative surveys with a smaller set of respondents before starting data collection to ensure the questionnaire was well understood, engaging, and promoted thoughtful responses from participants. We subscribed to the adage "if you're going to do something, you might as well do it right". In our experience, pre-tests are a rarity in today's fast-paced, "I've got to have the answer yesterday" mentality, where budgets are tight and patience is thin.

Let's face it, pre-testing is not an enjoyable experience for most, since you're really looking for what's wrong with your survey (although what's right about it is just as important). I mean, would Picasso step away from what he thought was a completed painting and embrace the opportunity for other people to find flaws in it? Perhaps the analogy is a bit overdone, but researchers tend to see perfection rather than imperfections in their surveys, even though our job is to get at the "truth" through a crisp, well thought-out, and highly engaging survey. Dare we think our survey doesn't pass muster on any of these counts?

There is also a certain out-of-sight, out-of-mind mentality to survey research nowadays, particularly as we continue to move more and more to online data collection. We can't be in the minds of all the respondents as they complete a survey, but since it is presumably so well written (see "pre-testing isn't fun" above), we take a leap of faith that everything is "fine", and that respondents have complete clarity about what they're being asked and are highly engaged throughout the entire survey.

Finally, and perhaps more to the point, pre-testing is no longer in vogue (dare I say it's "uncool"). Who wants to take a stand and say we need to slow down (just a little) to make sure we've got this one right, because too much is at stake if it isn't? I've silenced an entire room taking that stand on various occasions over the last few years (for the record, also "not fun").

These are all fair points, or at least honest ones. And at various times in my career, I've certainly missed the opportunity to pre-test a survey when my brain and my gut told me to insist on doing so – we all have! However, and it's a BIG however, we as researchers need to bring pre-testing back into the mainstream as a standard practice, at least in the following situations:

  • For more complex survey instruments that include new and/or detailed information that is critical for respondents to understand well in order to provide thoughtful input (for example, new product testing to ensure all features and benefit attributes are well understood).
  • For longer questionnaires, where there is a question about whether respondents will stay engaged throughout the entire survey and/or a need to identify the point at which fatigue starts to set in.
  • For studies involving long lists of attributes, where there is an opportunity to pare them down to reduce respondent fatigue (e.g., attributes that respondents consider very similar, those that have little or no importance, etc.).
  • For choice-based studies, where the main objective is to provide as close to a real-world buyer's experience as possible in a research environment. If the choices we're presenting to respondents are not well understood or are overly complex, we're defeating the very purpose for which this design was chosen in the first place. Since these studies usually have to do with opportunity forecasting or product optimization, they also demand a level of precision that depends on well-understood choice sets.
  • Any study that is deemed particularly strategic, mission-critical, and/or is tied to opportunity for significant gain or loss depending on the decisions that are informed by the research. In other words, studies where the stakes are too high to risk faulty research results due to a poorly constructed survey.
  • Any study where the results will be deeply scrutinized and it's imperative to provide evidence that the survey itself was fully tested and (as appropriate) adjustments made to ensure it was well understood and engaging to respondents.

Although there are honest and practical reasons that pre-testing has fallen out of favor over the past several years, the truth is that the downside risk of not executing a well-thought-out pre-test in certain situations is too great to dismiss. If there isn't a little extra time and budget to pre-test, an even more honest question might be whether the research is worth doing in the first place.

Note: in a future blog, I'll provide suggestions for executing effective pre-tests when time and money are tight (as they usually are!).


Jim Kraus
Principal

Jim has Account Executive responsibilities for the firm's Technology and Financial Services practices. Jim has over 20 years of marketing experience, holding senior-level research positions at IBM, Prudential Financial, and Guardian Life Insurance prior to joining KS&R in 2006. Jim's focus over the past several years has been on global B2B markets, where he has deep knowledge of qualitative and quantitative research methodologies. Known as a consultative, results-oriented researcher, Jim is adept at applying proven methodologies in creative ways to improve his clients' business performance through the development of market insights that lead to more informed decisions in the areas of brand strength and performance, value proposition/message development, product/service development and forecasting, price optimization, buyer behavior, and client satisfaction/loyalty.