There are a few things every product manager should do at least monthly and ideally weekly. One of them is user testing.
Although it’s often difficult to kickstart a continuous user testing process, it’s undoubtedly one of the highest-leverage habits a product manager can have.
In this article, you’ll learn what user tests are, how to properly conduct them, and how they can help you ship better products.
User testing is the act of testing your solutions, ideas, and assumptions directly with your end-users.
Because your users interact with a tangible prototype, user tests capture more specific insights and stronger evidence of what works and what doesn’t than user interviews do.
While interviews shine for idea exploration, user tests are the primary tool for idea validation.
There are two primary types of user tests: unmoderated and moderated ones. Let’s explore how these differ and when to use which.
Unmoderated tests are performed without the active engagement of the product team. In most cases, you design and set up a test on an online testing platform and send it out to users to complete on their own.
These tests are usually used to quickly test the solution with a large sample of users to achieve statistically significant results — something that would be infeasible to do in a moderated manner.
Use these tests to glean insights on things such as task completion rates, time on task, and feature discoverability.
Moderated tests are done with the active participation of the user researcher. Although the underlying idea is the same, moderated tests give you an opportunity to ask follow-up questions and dig deeper into why users chose specific actions and what their thought process was.
Although you can still capture quantitative data during the test, it’s harder to reach a sample big enough for statistically significant results.
Moderated tests work great early in the process, when you’re still exploring whether you’re on the right path toward your product goals.
You can simplify the whole user testing process into five steps. Let’s dive deeper into each of them.
The very first step is to clearly define what questions you are trying to answer.
Some examples of research questions include: Will the new referral feature be easily discoverable for our users? Would users be willing to refer their friends?
List out all the questions you want answers to and try to prioritize them. Then, look at the prioritized questions and ask yourself: are these more quantitative or qualitative questions?
If quantitative questions dominate, focus on unmoderated tests; if qualitative questions are most pressing, start with moderated ones. You can also run both types of tests at the same time to cover both quantitative and qualitative areas.
Regardless of how you prioritize your questions, if you have some qualitative questions on your mind, answer those first, as the answers can strongly impact your future direction.
Revisit your research questions for the experiment and try to find a task you can give a user that will answer each question.
For example, if you’re trying to figure out whether a new referral feature is easily discoverable, you could ask users to “go to the homepage and try to refer a friend.” The time it takes them to find the feature and the number of flows they try before finding the right one will tell you how discoverable it is.
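A task like this can be written down as a small structured definition before it goes into your testing tool. Here’s an illustrative sketch in Python; the field names are my own invention, not any platform’s schema:

```python
# Illustrative task definition; field names are hypothetical,
# not any testing platform's schema.
referral_discoverability_task = {
    "research_question": "Is the new referral feature easy to discover?",
    "instruction": "Go to the homepage and try to refer a friend.",
    "success_criterion": "User reaches the referral screen unaided",
    "metrics": ["time_to_find_seconds", "flows_tried_before_success"],
    "time_limit_seconds": 180,  # assumed cutoff for marking the task as failed
}
```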
If you’re testing qualitative questions, you can also pre-plan follow-up questions worth asking during the test. For example, if you want to discover whether users would be willing to refer their friends, you can ask them follow-up questions after they complete the task.
I’d recommend keeping the number of tasks below ten and the length of user tests below thirty minutes. Otherwise, you risk tiring out your users, leading to lower-quality insights. It’s better to run two smaller tests than one excessively long one.
If you are running a quantitative test, aim to get around a hundred responses — that’ll give you strong statistical data and level out any outliers.
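To see why a hundred responses is a sensible target, here’s a minimal Python sketch, using a simple normal approximation (an assumption for illustration, not something prescribed by any testing platform), that shows how the confidence interval around an observed completion rate tightens as the sample grows:

```python
import math

def completion_rate_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a completion rate (normal approximation)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# An observed 80% completion rate at different sample sizes:
for n in (20, 100, 400):
    low, high = completion_rate_ci(successes=int(0.8 * n), n=n)
    print(f"n={n:>3}: 95% CI for completion rate is [{low:.2f}, {high:.2f}]")
# n= 20: [0.62, 0.98]   n=100: [0.72, 0.88]   n=400: [0.76, 0.84]
```

At twenty responses, the interval is too wide to act on; at a hundred, it’s tight enough to compare designs with some confidence.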
For qualitative tests, similarly to user interviews, five participants seems to be the sweet spot. Beyond five sessions, insights tend to repeat themselves, and you’ll hit strong diminishing returns. It’s often better to start iterating on the prototype and then run a follow-up test.
Although the exact metrics depend heavily on the type of task you’re giving your users, there are three measurements worth tracking for every task: completion rate, time on task, and error rate.
Anything below 100 percent completion usually means a critical issue is blocking users from completing the task.
Time on task shows you which tasks take users the longest and could use some optimization.
Error rate shows whether users follow the user journey you intended for them. If the error rate is high, you either need to improve the clarity of the main flow or redesign it to better fit users’ expectations.
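If your testing tool lets you export raw per-session results, these three metrics are easy to compute yourself. The sketch below assumes a made-up export format with `completed`, `seconds_on_task`, and `wrong_paths` fields; substitute whatever fields your platform actually provides:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class TaskSession:
    completed: bool         # did the user finish the task?
    seconds_on_task: float  # time from task start to completion or abandon
    wrong_paths: int        # flows tried before finding the intended one

def summarize(sessions: list[TaskSession]) -> dict:
    n = len(sessions)
    return {
        # share of users who finished the task
        "completion_rate": sum(s.completed for s in sessions) / n,
        # median is less sensitive to a few very slow sessions than the mean
        "median_time_on_task_s": median(s.seconds_on_task for s in sessions),
        # share of sessions where the user strayed off the intended flow
        "error_rate": sum(s.wrong_paths > 0 for s in sessions) / n,
    }

sessions = [
    TaskSession(completed=True, seconds_on_task=42.0, wrong_paths=0),
    TaskSession(completed=True, seconds_on_task=95.0, wrong_paths=2),
    TaskSession(completed=False, seconds_on_task=180.0, wrong_paths=3),
]
print(summarize(sessions))
# completion_rate ≈ 0.67, median_time_on_task_s = 95.0, error_rate ≈ 0.67
```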
The last step is to decide what to do next. It’s very context-dependent, but in most cases, it comes down to four choices:
User tests are a great complement to user interviews, which, hopefully, you run weekly.
From a product management perspective, user tests help you in three ways: they let you perfect your user experience, validate ideas before committing to expensive A/B tests, and supercharge your user interviews.
Things like time on task, completion rate, or error rate might seem insignificant in the grand scheme of things. However, these changes tend to add up and build a holistic user experience. One optimization won’t make a dent in the metric, but twenty of them made over time will significantly impact the user experience, which usually translates to higher retention and engagement.
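As a quick back-of-the-envelope check on that claim, assume each optimization lifts a funnel metric by around one percent (an illustrative figure, not a benchmark); twenty of them compound to roughly a 22 percent improvement:

```python
# Twenty small, independent improvements compound:
per_optimization_lift = 0.01   # assumed 1% lift each, purely illustrative
n_optimizations = 20
overall_lift = (1 + per_optimization_lift) ** n_optimizations - 1
print(f"overall lift: {overall_lift:.0%}")  # overall lift: 22%
```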
Before committing to an expensive production A/B test, you can test your candidate variants on a prototype during user tests.
Although it doesn’t give you as strong a validation as a production A/B test, a user test can be set up in a day, while a proper split test might take months to develop. It’s often better to run twenty user tests than one production A/B test.
Don’t get me wrong. User tests shouldn’t be your final or only validation technique. Nothing beats an A/B test, especially for sensitive changes. Yet, user tests can help you speed up your discovery efforts by providing insights early.
You can design a moderated user test in a way that resembles a “user interview on steroids,” where most of the focus is put on qualitative explorations and asking open-ended questions intertwined with occasional user tasks.
This way, you can get results similar to user interviews, but the extra tangibility coming from experiencing the actual prototype helps anchor users to the context of your product.
On the one hand, it limits the exploratory nature of the interview and the breadth of insights you get, since users will narrow their thinking toward what they just saw. On the other hand, it’ll help you get deeper and more insightful remarks on the product itself.
This type of user interaction is an excellent addition to standard user interviews.
Although no product manager questions the value of user tests, they’re one of the easiest practices to neglect.
There can be a lot of friction in defining, monitoring, and then analyzing specific tasks and the metrics tied to them. The best way to get past this friction is to establish a routine. For example, my teams often run bi-weekly user tests: every second week we define our most pressing research questions, spend the first week designing the test, and spend the second week analyzing it and drawing conclusions.
The effort is still there, but now it’s harder to neglect: we have recurring planning and analysis meetings in our calendars. The routine also helps us iterate on our testing process, making each cycle smoother and less time-consuming to set up.
Whether it’s weekly, bi-weekly, monthly, or even quarterly, just start with some sort of routine and see how it works for you. User testing is one of those “high effort, high reward” activities you should strive to make space for in your agenda.