When conducting user research, respondents can either give inaccurate answers (response bias) or behave unnaturally (Hawthorne effect). Not only do these two skew insights, they eventually lead to flawed designs and suboptimal UX. So, to get accurate data and create better digital experiences, UX researchers must ensure participants answer accurately and behave as naturally as possible.
In this article, you’ll learn what response bias and the Hawthorne effect are, how they impact user research negatively, and what you can do to mitigate these negative effects.
Response bias causes respondents to give inaccurate questionnaire responses, often unintentionally. Understanding its types will help UX researchers design studies that minimize its effects:
Acquiescence bias is the tendency to automatically agree with statements.
For example, if a survey asks, “I prefer Component A to Component B [True/False],” participants might select “True” to be agreeable, even if they prefer Component B. Framing the question around preferring Component A doesn’t encourage them to think about why they might prefer Component B, so they settle for a good-enough answer instead of one they thought about carefully (this is called satisficing).
This tendency can obscure critical insights into user preferences for interface designs, feature usability, or layout decisions.
With courtesy bias, respondents believe that negative responses are impolite (especially towards a questioner, if there is one), causing them to respond untruthfully.
In UX, this could lead to underreporting of issues like confusing navigation or poor accessibility.
Demand characteristics cause participants who are aware they’re part of a research study to alter their responses or behavior to align with perceived expectations.
There are two possible motivations for doing this: wanting to be a “good” participant who helps the study succeed, and wanting to be judged favorably by the researcher.
For example, they might overly praise a prototype or persevere with a task they’d normally abandon. This skews data on natural user behavior.
With extreme response bias, a respondent typically opts for the most extreme response applicable (e.g., 1/5, 5/5, extremely unlikely, extremely likely). They might do this to prove that they’re participating or, conversely, because they’re too lazy to quantify their exact sentiment (again, satisficing). Another reason is to show that they’re passionate about the topic, which somewhat relates to social desirability bias in a ‘pick-me’ sort of way.
This can distort UX research metrics like satisfaction scores or task success ratings.
Social desirability bias causes respondents to withhold information that they believe makes them look bad, and to exaggerate or falsify information that they believe makes them look good. This can misrepresent data on how users perceive design aesthetics or usability.
Also known as the order effects bias, question order bias causes respondents to interpret questions differently depending on the order in which they’re asked.
For example, if you were to ask respondents about a product feature and then ask them to rate their experience of the product, respondents with question order bias might mistakenly rate their experience of the feature.
The ordering of multiple-choice options also matters, thanks to two subsets of question order bias: primacy bias (favoring the first options because they’re read first) and recency bias (favoring the last options because they’re freshest in memory).
The Hawthorne effect is when user/UX research participants behave differently because they know they’re being observed.
For example, during a usability test, users might spend extra effort completing tasks to avoid seeming incompetent. This can result in inflated success rates, unrealistic completion times, or inaccurate feedback on task difficulty.
Both response bias and the Hawthorne effect negatively impact the quality of user research data, but they manifest differently:
For example, if you were to ask users about your product, those with response bias would answer falsely or inaccurately. And if they were to behave unnaturally while you watched them use said product (e.g., during field research or a usability test) because they know they’re being observed, that behavior would be the Hawthorne effect.
So, when comparing response bias vs. the Hawthorne effect, the causes are actually the same: flawed human cognition, which everybody has (assuming that they’re human, of course). The outcomes are similar, too; both result in inaccurate, incomplete, and/or false data. The difference is that response bias impacts responses, while the Hawthorne effect impacts behavior.
To mitigate the impact of these response biases and the Hawthorne effect and ultimately yield quality data and useful insights from your user/UX research, you’ll need to do a few things. Let’s take a look at those now.
To combat courtesy bias, explain how negative feedback is useful, not impolite. Briefly explain what happens to negative feedback as it evolves from a problem into a solution, reframing it as an opportunity for improvement that they can spearhead rather than a criticism to swallow.
You could even provide examples of improvements that came about as a result of negative feedback.
Demand characteristics can cause respondents to share more, or persevere longer, than they normally would, resulting in feedback clutter and skewed UX benchmarking metrics. To prevent this, explain that quality beats quantity when it comes to feedback and that feedback is only useful when they share what they truly feel compelled to share. And if they’re testing a prototype, tell them to throw in the towel whenever they feel like it to ensure realistic UX benchmarking data.
For extreme responding, make it clear that mild or ambivalent responses are useful if that’s how they truly feel.
For respondents worried about seeming disagreeable, impolite, or unhelpful (which could lead them to alter their responses), anonymize surveys, interviews, and UX tests so that they feel more comfortable being honest.
In addition, make user tests and UX tests unmoderated, and of course, ensure that respondents are aware of all of this!
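If you collect responses with your own tooling, one way to anonymize them is to replace identifiers with keyed pseudonyms, so answers can still be linked across sessions without storing who gave them. Here’s a minimal sketch; the identifier format and the 12-character pseudonym length are illustrative assumptions:

```typescript
import { createHmac, randomBytes } from "node:crypto";

// One random key per study. Store it separately from the response
// data, or discard it entirely if you never need to re-link identities.
const studyKey = randomBytes(32);

// Replaces an identifier (e.g., an email) with a stable pseudonym,
// so the same respondent maps to the same ID across sessions.
function pseudonymize(identifier: string): string {
  return createHmac("sha256", studyKey)
    .update(identifier.trim().toLowerCase())
    .digest("hex")
    .slice(0, 12); // pseudonym length is arbitrary; shorten or lengthen as needed
}

console.log(pseudonymize("jane@example.com")); // output varies per study key
```

Many dedicated survey platforms offer an anonymous-responses setting that handles this for you, so a script like this is mainly for homegrown setups.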
You can’t stop extreme responders from attempting to sway a study as a result of demand characteristics. In fact, data anomalies are inevitable in user/UX research.
However, what you can do is secure a sample size large enough that these already-rare anomalies get lost in the noise. This should be easy to do, assuming that your surveys, interviews, and UX tests are unmoderated.
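On top of that, you can sanity-check the data afterward. Here’s a minimal sketch (assuming 1-5 Likert answers; the response shape and the 90% threshold are illustrative, not from any specific survey tool) that flags straight-liners and extreme responders for a closer look:

```typescript
// Minimal sketch for flagging suspicious survey responses.
interface SurveyResponse {
  respondentId: string;
  answers: number[]; // each answer on a 1-5 scale
}

function flagSuspicious(responses: SurveyResponse[]): string[] {
  return responses
    .filter(({ answers }) => {
      const extremes = answers.filter((a) => a === 1 || a === 5).length;
      const straightLined = answers.every((a) => a === answers[0]);
      // Flag straight-liners and respondents who answer at the extremes
      // more than 90% of the time (the 0.9 threshold is a guess to tune)
      return straightLined || extremes / answers.length > 0.9;
    })
    .map((r) => r.respondentId);
}

// Example with made-up data:
console.log(
  flagSuspicious([
    { respondentId: "r1", answers: [5, 5, 5, 5, 5] }, // straight-liner
    { respondentId: "r2", answers: [2, 4, 3, 5, 1] }, // looks organic
  ])
); // ["r1"]
```

Flagged respondents aren’t automatically invalid; treat them as candidates for review rather than rows to delete.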
Ensure that you secure extra respondents to account for any that don’t turn up, especially when conducting price sensitivity surveys, as respondents are more likely to lie when stating how much they’d be willing to pay.
Because of primacy bias and recency bias, the first and last options of multiple-choice questions will get some extra, biased attention. That’s unavoidable for any single respondent, but luckily, we can even the odds across respondents by randomizing the options.
Most survey tools, including Google Forms (which is free), enable you to randomize multiple-choice question options.
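And if you’re building a custom survey UI, shuffling the options yourself is straightforward. Here’s a minimal sketch using a Fisher-Yates shuffle (the option list is just an example):

```typescript
// Fisher-Yates shuffle: returns a copy of the options in random order,
// so primacy and recency effects average out across respondents.
function shuffleOptions<T>(options: T[]): T[] {
  const shuffled = [...options];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled;
}

// Example: each respondent sees the components in a different order
console.log(shuffleOptions(["Component A", "Component B", "Component C"]));
```

Shuffle once per respondent, not once per survey; the goal is for every option to spend roughly equal time in every position.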
As mentioned before, question order bias can cause respondents to misunderstand questions as they mistakenly infer context from previous questions. To combat this, be crystal clear about what you’re asking and, when necessary, about what you’re not asking.
If there’s still room for error, just change the question order.
Respondents can’t automatically agree with statements as a result of acquiescence bias if you don’t use statements. Ensure that you present choices neutrally as well, though — for example, “Do you prefer Component A or Component B?” is better than “Do you prefer Component A?” as the latter suggests that there’s a correct answer and doesn’t encourage respondents to think about the other option.
You can make use of this advice when trying to understand people’s preferences and sentiments about a variety of things, including features, logos, branding, marketing, aesthetics, and, to some extent, usability, accessibility, and other UX-related things.
To mitigate cases of extreme responding where respondents are too lazy to quantify their exact sentiment, provide the smallest scale of measurement possible. This could mean asking respondents to answer on a scale of 1-5 (or even 1-3 as long as it makes sense) instead of 1-10.
Fewer options to consider means a smaller cognitive load, making respondents less likely to rush their answers. This advice applies to all units of measurement.
Almost all of the advice for mitigating response bias also applies to mitigating the Hawthorne effect. However, there are a couple more things to keep in mind, especially when conducting field research. I say that because during this type of study, you’ll obviously be observing the participant, and they’ll be aware of it.
Firstly, be as invisible as possible. Better yet, only observe if no other type of research will suffice; diary studies, for instance, aren’t observational but can yield the same insights.
In addition, avoid ethnographic research and other types of research where participants are studied alongside other people, as this can also make participants behave unnaturally (they might become timid or competitive, to name just a couple of possibilities).
Despite response bias being a bias and the Hawthorne effect being an effect, they’re actually quite similar. Both impact the quality of user/UX research data in a negative way, causing businesses to reach the wrong conclusions and then, of course, waste resources by driving the wrong outcomes.
Response bias causes respondents to give inaccurate answers, and the Hawthorne effect causes participants to act in a way that they normally wouldn’t, but both result in participants providing false data.
Mitigation techniques include:
Reframing negative feedback as an opportunity for improvement rather than an impoliteness
Emphasizing quality over quantity of feedback
Anonymizing and unmoderating surveys, interviews, and UX tests
Securing a large sample size to dilute anomalies
Randomizing the order of multiple-choice options
Asking clear, neutral questions instead of statements
Using the smallest measurement scale that makes sense
Staying as invisible as possible during observational research
By using these techniques, you can mitigate both response bias and the Hawthorne effect, ensuring that you end up with accurate data and insights when conducting user and UX research.
This doesn’t mean that any type of moderated research is bad; it’s just more likely to cause response bias and the Hawthorne effect. There are many benefits to user interviews, moderated UX tests, and field studies, but they’re better suited to supplementary research.