Surveys are a necessary evil for product teams that want to build great products (and for the people who want to use those products). They aren’t fun, but people can be persuaded to participate in them. That is, unless they’re suffering from survey fatigue.
In this article, I’ll explain what survey fatigue is, what causes it, and what you can do to prevent it, netting you more survey responses and better quality insights.
Survey fatigue is when a survey respondent gets sick of taking surveys. This can happen when a survey is confusing or too long, or simply when there have been too many of them:
Naturally, the result of survey fatigue is respondents not wanting to take surveys anymore, which prevents product teams from acquiring insights. In fact, 67 percent of respondents have abandoned a survey due to survey fatigue, and it’s safe to assume that many more would if they experienced it.
This suggests that we couldn’t even “churn and burn” respondents after they complete a survey, because many simply wouldn’t complete it, and constantly recruiting replacements would cost way too much anyway (especially if you wanted to benchmark the results fairly over time).
There are various causes of survey fatigue. The best way to truly understand them is to take surveys yourself and learn, as a respondent, what people like, tolerate, and don’t like about surveys (or forms in general), but we’ll cover the main causes here anyway.
If a survey is too long, respondents can become bored and not finish it, even if they’re supposed to receive an incentive at the end. Many times have I trudged through a twenty-minute-long slog of questions just to think, “You know what? This Amazon voucher just isn’t worth it.”
Only 9 percent of people will take the time to answer survey questions thoughtfully, with another 46 percent willing to participate only if the survey doesn’t take too much time. The remaining 45 percent aren’t willing to take surveys at all, so there’s no room to push respondents to answer more questions.
To prevent respondents from becoming bored, first reduce your survey to only the necessary questions. Fewer questions mean fewer answers to synthesize, too, so this approach is a win-win for everybody — “it’s better to have too much than too little” is a bad approach, especially if you want respondents to continue responding!
If you’re still left with too many questions, provide opportunities for respondents to quit the survey after answering the most important questions. You can’t stop respondents from quitting anyway, so you may as well be friendly about it to increase the chances of them participating in surveys in the future.
Also, don’t include dozens of pointless questions after the actual survey to purposely make respondents quit so you can avoid awarding them their incentive. You’ll risk losing them as a respondent and as a customer, and it’s just a crappy thing to do. It happens more often than you think.
Confusing surveys can be just as frustrating as long ones. Even if the survey makes perfect sense to you, at least confirm that a few team members understand what the questions are asking.
It also doesn’t hurt to make questions optional so that respondents can skip confusing questions; the alternative is forcing respondents to give a potentially inaccurate answer, which is worse than getting no answer at all. The context of a skipped question should be enough to infer that it didn’t make sense to them — consider this a red flag for you to review.
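Skipped questions are also easy to spot in aggregate. As a rough sketch (the response data shape here is entirely hypothetical), you could compute per-question skip rates and flag the outliers for review:

```python
# Hypothetical response data: each response maps question IDs to answers,
# with None meaning the respondent skipped that (optional) question.
responses = [
    {"q1": "Great", "q2": None, "q3": 4},
    {"q1": "Fine", "q2": None, "q3": 5},
    {"q1": "Good", "q2": "Maybe?", "q3": 3},
]

def skip_rates(responses):
    """Return the fraction of respondents who skipped each question."""
    totals, skips = {}, {}
    for response in responses:
        for question, answer in response.items():
            totals[question] = totals.get(question, 0) + 1
            if answer is None:
                skips[question] = skips.get(question, 0) + 1
    return {q: skips.get(q, 0) / totals[q] for q in totals}

def confusing_questions(responses, threshold=0.5):
    """Flag questions skipped by at least `threshold` of respondents."""
    return [q for q, rate in skip_rates(responses).items() if rate >= threshold]
```

Here, `confusing_questions(responses)` would flag `q2`, since two out of three respondents skipped it. The 50 percent threshold is arbitrary; pick whatever signals a red flag for your sample size.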
It might seem as if too many surveys isn’t really a problem. After all, people can just not respond to them, right? But actually, if you distribute too many surveys, they’ll start to lose their significance and respondents will feel as if they can’t keep up with them.
An odd phenomenon even for those who hate surveys is how annoyed they get when they can’t provide their true answer — nobody wants to be misrepresented.
Therefore, when applicable, let rating scales start from 0 (or a worded equivalent such as “I hate it”). Have you ever seen a review that said, “I’d give it 0 stars if I could”? False answers aren’t useful to anybody, so let respondents give “0” stars (or hearts, or whatever):
Similarly, with multiple choice questions and when applicable, let respondents choose an “Other” option to follow up with a personal response as opposed to one of the predetermined ones. While this can make quantitative data harder to synthesize, it’s better than having the wrong insights altogether:
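Synthesizing answers that include an “Other” option takes only a little extra work. As an illustrative sketch (the data shape is hypothetical), you can tally the predetermined choices quantitatively and set the free-text “Other” responses aside for qualitative review:

```python
from collections import Counter

# Hypothetical answers to a multiple-choice question; a tuple represents
# the "Other" choice paired with the respondent's free-text follow-up.
answers = ["Email", "In-app", "Email", ("Other", "Carrier pigeon"), "In-app"]

def tally(answers):
    """Count predetermined choices; collect "Other" text for manual review."""
    counts, other_texts = Counter(), []
    for answer in answers:
        if isinstance(answer, tuple) and answer[0] == "Other":
            counts["Other"] += 1
            other_texts.append(answer[1])
        else:
            counts[answer] += 1
    return counts, other_texts
```

The counts stay comparable across respondents, while the free-text answers become a small qualitative pile you can read through by hand.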
Don’t Make Me Think isn’t just a great UX book by Steve Krug; it’s a cry for help. Users (or respondents in this case, and obviously Steve Krug, too) just want to complete the task they set out to do, which means you need to provide a great form UX. For surveys specifically, don’t make respondents think.
Here are some great ways to reduce cognitive overload:
Avoid rating scale questions with an excessive number of options to choose from. Instead, provide the smallest number of choices possible to reduce analysis paralysis, a type of cognitive overload where having too many choices forces people to think more than they want or need to. As a rule of thumb, if you can’t think of a suitable label for a choice (e.g., 1 heart = “I hate it”), then you probably don’t need it:
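One way to enforce that rule of thumb (a purely illustrative sketch) is to derive the scale from its worded labels, so a choice without a suitable label simply can’t exist, and the scale naturally starts at 0:

```python
# Illustrative: build a rating scale from worded labels, starting at 0,
# so every choice has a label and unlabeled choices can't sneak in.
SCALE_LABELS = ["I hate it", "I dislike it", "It's okay", "I like it", "I love it"]

def scale_choices(labels):
    """Pair each numeric value (starting at 0) with its worded label."""
    return list(enumerate(labels))
```

If you find yourself unable to write a sixth label that’s meaningfully different from the others, that’s your cue that five choices are enough.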
In addition, be careful with the phrasing of your open-ended questions. While respondents might have a lot to say, it’s very common to suddenly draw a blank when asked, and this is because we’re naturally bad at recalling past events.
Therefore, when asking about past events, you must take the respondent back there first. For example, “What do you think of our website?” likely wouldn’t be as effective as, say, “Think about the last time you visited our website. What happened, and what did you like/dislike about it?”
Another reason the former is less effective is that respondents are being asked how they feel in the present tense, which probably isn’t much now that the experience is in the past. Help them recall; don’t make them think:
The most frustrating thing about surveys, in my opinion, is when they (seemingly) ask the same question over and over, leaving respondents confused as to how the questions actually differ (making them think!). You’ll see this a lot in surveys that are heavy on rating scale questions, because these are often mistaken for being effortless. This isn’t true — they still contribute to cognitive overload, especially when there are a lot of them. Here’s a made-up but very plausible (almost paraphrased) survey:
As you can see, the differences between these questions are barely noticeable. In particular, you’re left wondering what the difference is between checking out, buying, and paying. And since the current survey trend is to display only one question at a time, respondents are often left wondering whether they’ve already answered a question. I’m frustrated just thinking about this scenario, which can be caused by either a lack of clarity or giving in to the urge to ask as many questions as possible, thinking that more data is better.
You could change these questions to make more sense:
This would give you:
It’s unlikely that you’ll ever create a fun survey as I don’t think such a thing is possible, but you need those insights in order to create a good product. To get them, you’ll need to create surveys that are short, clear, and enable respondents to say exactly what they mean.
There are many ways to do that (as we explored in this article), but it all boils down to respecting your respondents’ time. If you do this, you’ll minimize survey fatigue, setting yourself up to acquire quality insights.
And of course, you can always boost your survey response rates by offering incentives, although this doesn’t really reduce survey fatigue; it just makes respondents more willing to trudge on.
If you have any questions you can ask them in the comment section below. Thanks for reading!
Header image source: IconScout