It doesn’t matter how great your interviewing skills are and how amazing your research methods are if you recruit the wrong participants.
Talking to the wrong people is the worst thing that can happen during research. It can give you false confidence and lead you to believe insights that are the opposite of what your target group actually thinks.
That’s exactly what screener surveys are supposed to protect us from, assuming we do them right. More on that in this post.
A screener survey is a set of initial questions we ask potential research participants to decide whether they are a right fit for the research.
In most cases, they help us determine whether we are talking to our intended user persona.
There are different methods to screen participants: online surveys, phone calls, and even in-person screening.
Well-thought-out screener surveys are essential in the user research process.
Most importantly, they ensure high data quality. Screener surveys help us ensure we are talking to the right audience and help us further segment our results if we notice diversity in research outcomes.
They also help with efficiency immensely.
Discovering that you are talking to the wrong people during the research phase itself is quite expensive. You waste time and often money (e.g., for incentives).
Screeners are a filter that helps us divide a big group of people into well-defined sub-groups for our research.
What do you ask in a screener survey?
Short answer? Anything that helps you decide whether the person is a viable candidate for the research. Put all of it in when creating your screener survey.
The exact questions differ depending on the specific nature of your research but might include demographic, behavioral, attitudinal, knowledge, intent, and qualification questions.
Making your screener effective doesn’t have to be complicated — just follow a few best practices to get it right. With these tips, you’ll be well on your way to finding the perfect fit for your research.
Unless you are recruiting highly specialized people and have to ask about specific details, go for simple questions.
You want to limit any confusion that could lead to inaccurate answers. Plus, if the survey becomes too confusing, many people will leave without finishing it.
People are most likely not incentivized at the screener stage yet, so don’t demand too much mental work from them.
Five to ten questions is usually the sweet spot for a screener survey.
Fewer than five, and you might not have enough information to segment participants properly.
More than ten, and people will start dropping out or giving random answers.
This is just a guideline, though. If you truly need to ask fifteen or twenty questions to recruit a very specific audience, go for it.
Whenever possible, ask closed-ended questions. They are both easier for participants to fill in and easier for you to interpret.
You are not fishing for any new insights, so open-ended questions are mostly pointless.
The main disadvantage of closed-ended questions is the limited choice they give. You can’t predict all possible answers.
As a fallback, let people answer “Other” and later either analyze these answers yourself or disregard them completely.
It’s still a better option than inaccurate results.
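If you model your screener questions in code or in a survey tool’s configuration, the idea is straightforward. Here is a minimal TypeScript sketch, with hypothetical field names, of a closed-ended question that keeps an “Other” fallback:

```typescript
// Hypothetical shape for a closed-ended screener question with an "Other" fallback.
interface ScreenerQuestion {
  id: string;
  prompt: string;
  options: string[];   // predefined, closed-ended choices
  allowOther: boolean; // show a free-text "Other" field as a fallback
}

const mainReasonQuestion: ScreenerQuestion = {
  id: "main-reason",
  prompt: "What is the main reason you chose our product?",
  options: ["Price", "Features", "Recommendation", "Brand reputation"],
  allowOther: true, // "Other" answers can be reviewed manually or disregarded later
};
```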
The more important it is to recruit the right audience, the more control questions (e.g., asking the same question twice to check for consistency) you should include.
Still, having just one control question already reveals many spammers.
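One way you might operationalize control questions, sketched in TypeScript with made-up question IDs, is to compare each respondent’s answer to a question with their answer to its rephrased twin and flag any mismatch:

```typescript
// Hypothetical answer map: question ID -> selected option.
type Answers = Record<string, string>;

// Pairs of question IDs that ask the same thing in two different ways.
const controlPairs: Array<[string, string]> = [
  ["age-range", "age-range-control"],
  ["role", "role-control"],
];

// A respondent is consistent only if every question agrees with its control twin.
function isConsistent(answers: Answers): boolean {
  return controlPairs.every(
    ([original, control]) => answers[original] === answers[control]
  );
}

// Example: this respondent gave two different age ranges, so they get filtered out.
const respondent: Answers = {
  "age-range": "25-34",
  "age-range-control": "35-44",
  "role": "Product designer",
  "role-control": "Product designer",
};

console.log(isConsistent(respondent)); // false
```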
As with any other good research survey, test your questions for bias.
Read each of your questions and ask yourself: does this suggest a “correct” answer in any way? If yes, change it.
For example, “Do you prefer our product because it’s high quality?” suggests that high quality is the “correct” answer. “What is the main reason you chose our product?” is more neutral.
Test the screener for accuracy.
Find at least two people you know are part of your target group and two who are not, and send them the screener. Then, check if the screener results align with reality. You might need to adjust the screener to be more accurate if there’s a mismatch.
You’re off to a great start with the earlier best practices, but these advanced strategies can help you add the final touches for a truly efficient screener.
Consider adding conditional logic if you need a long screener to recruit a specific segment.
For example, if after a few initial questions you can already tell the surveyed person is not a fit, end the survey early.
Or, if your target segment consists of a few smaller sub-segments, move people through different questions based on their initial responses. This allows you to ask targeted questions without bombarding people with irrelevant ones.
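As a rough sketch of what such conditional logic could look like, here is a TypeScript example with hypothetical question IDs and branch names. It disqualifies people early or routes them to a sub-segment’s question set based on their first answers:

```typescript
// Decide, after the first few answers, whether to end early or which branch to show next.
// Question IDs and branch names here are made up for illustration.
function nextStep(
  answers: Record<string, string>
): "disqualify" | "freelancer-questions" | "in-house-questions" {
  // End the survey early if the person is clearly not a fit.
  if (answers["works-in-design"] === "No") {
    return "disqualify";
  }
  // Route sub-segments to their own, more specific question sets.
  return answers["employment-type"] === "Freelancer"
    ? "freelancer-questions"
    : "in-house-questions";
}

console.log(nextStep({ "works-in-design": "No" }));                                   // "disqualify"
console.log(nextStep({ "works-in-design": "Yes", "employment-type": "Freelancer" })); // "freelancer-questions"
```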
First of all, be transparent about how you will use the data and how long you will store it. State it explicitly if you plan to keep screener responses and contact respondents again in the future.
Ask for consent at the very start of the screener to set expectations from the beginning.
If your screener takes a lot of time and effort to fill in and you recruit a specific, pre-qualified audience, consider compensating people for going through the screener even if they don’t qualify.
This can help greatly with recruitment efforts. However, it works mostly for invitation-only screeners. You don’t want a mass audience to start filling in the screener at random just to get the compensation.
Whenever you ask demographic questions, include a broader range of responses and/or allow an “other” option.
For example, people identify with more genders than men and women, whether you like it or not.
Let me repeat myself: it doesn’t matter how great your interviewing skills are and how amazing your research methods are if you recruit the wrong participants.
Whether using an online survey, phone calls, or even real-life screening, talk to the right people before proceeding with your research.
Ask relevant demographic, behavioral, attitude, knowledge, intent, and qualification questions to ensure the potential participants fit your desired user profile.
You’ll end up with a good screener if you keep the questions simple, stick to roughly five to ten of them, prefer closed-ended formats, include control questions, and test for bias and accuracy.
For complex screeners, use conditional logic and consider compensating participants in the survey stage.
And in all cases, cover your legal and diversity basics.
Although developing a perfect screener might be time- and energy-consuming, it’s still cheaper than building the wrong product based on wrong insights.