Nobody likes to make mistakes, especially when money and time are on the line. The feeling of paying for a survey, only to find that the questions you included were flawed, can be a gut-punch for any researcher.
The best way to prevent these issues from cropping up in your own research is to learn from others’ mistakes. To help guide your research, we have developed a list of the 10 most common issues our users experience during the survey question-writing process.
Questions that ask panelists to respond to two separate issues or topics with only one answer are known as double-barreled questions. The name is apt: you're firing two questions at the participant at once. Most often, when surveyors include double-barreled questions in a study, they aren't aware of the mistake they're making.
Where some researchers fall short is in proofreading their surveys. Double-barreled questions are easy to identify once you know what they are; simply review your questions before sending the survey out.
Ambiguous questions not only confuse your audience; the answers you receive will often be just as muddled.
These questions are unproductive and can cause panelists to doubt your professionalism. Questions must be direct and purposeful in order to be effective.
Open-ended questions are sometimes referred to as the "CEO's favorite question"; upper management loves to see customers vocalize their affinity for the company. The descriptive answers that open-ended questions enable are necessary for extracting certain kinds of insight from your panelists. Too many of them, however, will exhaust your participants and increase the probability that panelists exit your survey early.
Another issue with open-ended questions is literacy and grammar. If you are surveying a diverse group of panelists in a place like the United States, odds are good that some of them speak English as a second language. Others may simply not have the same proficiency with technology that you do. Either way, it is vital to make your survey accessible; if panelists are unable to adequately respond to a question, their answers are no longer accurate.
A leading question is any question that attempts to guide a participant's decision or push them toward a particular answer.
These types of questions may seem innocent on the surface. In reality, they influence panelists to answer in a particular way, and we should always keep them out of our surveys.
Many survey questions utilize some sort of rating scale in order to measure a panelist’s response. The Likert scale, for example, can be used to easily quantify user sentiments about any number of topics.
When you are deciding which rating scale to use for a question, test the pairing to make sure the two are compatible. If the answers make little to no sense, or the results are difficult to interpret, reconsider your pairing.
Mismatched questions like these are obscure and confusing, and the rating scale attached to them simply does not fit the answers.
When in doubt, go for a simple 5- or 7-point Likert scale. It is also crucial to explain to your audience the direction in which your scale moves (e.g., is a rating of 1 the best, or the worst?).
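To illustrate why scale direction matters when you analyze results, here is a minimal Python sketch (the ratings and the `likert_mean` helper are hypothetical, not part of any survey tool) that averages a 5-point Likert item and flips a reverse-worded item so that a higher score always means a more positive response:

```python
# Minimal sketch: scoring a hypothetical 5-point Likert question,
# assuming 1 = "Strongly disagree" and 5 = "Strongly agree".

def likert_mean(responses, points=5, reverse=False):
    """Average a list of Likert ratings, re-coding reverse-worded items."""
    if reverse:
        # A reverse-worded question (e.g. "The checkout was confusing")
        # must be flipped so higher always means more positive.
        responses = [points + 1 - r for r in responses]
    return sum(responses) / len(responses)

ratings = [5, 4, 4, 2, 5]  # hypothetical panelist answers

print(likert_mean(ratings))                # -> 4.0
print(likert_mean(ratings, reverse=True))  # -> 2.0
```

The same five answers produce a 4.0 or a 2.0 depending on which end of the scale is "best," which is exactly why the direction must be stated up front and applied consistently.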
When you are choosing which rating scale to implement in your survey questions, be consistent. Incorporating multiple sets of differing rating scales will confuse the panelists and make it harder for them to navigate your test.
Whether you choose 0 through 10 or 1 through 5, keep the format consistent throughout the entirety of your question set.
If you have to include more than eight response options in your answer set, you need to either re-frame your question or make it open-response.
When you’re creating an answer set, it is true that you want your answers to be comprehensive. Including too many answer choices can frustrate panelists and increase the time it takes them to complete your survey; on top of that, you will have a harder time trying to visually represent your data later on.
If you are having trouble fitting all your answers into one multiple-choice answer set, consider adding an "Other, please specify" option at the end of the set. Making use of an open-ended response inside an otherwise closed-answer set can spare you many headaches down the road.
Questions that are too general or unfocused often fail to produce any meaningful research data.
These questions are confusing and unhelpful. Ideally, each survey question should be designed to address one specific topic or concern. The more concise and focused your question can be, the better.
It is imperative that your answer sets include every choice a panelist might need to give. If respondents do not feel their choice is represented in your answer set, they have two options: mark an incorrect or partially correct answer, or exit the survey. Neither of these is good for your data.
When you're developing your answer set, test your answers with coworkers or team members beforehand to make sure they make sense. Also consider adding an "Other" or "Not Applicable" option to the set; that way, panelists always have a way to respond.
Failing to include an early exit option in a survey is one of the most commonly overlooked mistakes researchers make. When you're looking to survey an audience, it may seem reasonable to strive for a complete set of responses from each user; however, this simply isn't realistic.
There are many reasons a panelist may need to exit your survey before they've answered every question. An emergency may demand their immediate attention. They could be experiencing an internet outage. Or they may simply lose interest in your survey partway through and decide to leave. When this happens, you want to make sure you keep hold of the responses they've already given you.
If people exit your survey without submitting the answers they have already marked, you risk losing out on valuable data. Be sure to always give your participants an early out.
Helpfull is a living, growing platform built around the core principles of accessibility and usability. A community of thousands is ready and waiting to test your products, giving you the feedback you need to turn your project into a resounding success.
An intuitive user interface and the ability to gather hundreds of consumer responses in just minutes are only a few of the features that make Helpfull the ultimate tool for any artist, designer, marketer, or inquisitive spirit.
Our thousands of testers are standing by, ready to help you out. Create your account and get started today.