In the latest quarterly update at the fictitious company ClothezForU, Rebecca, the VP of product, learns about the abysmal online sales numbers after a recent redesign of the product display page.
She sends off an email to Tim, the product manager, to find out what happened. Tim immediately sends a Slack message to Deena, the data analyst, to pull all of the latest analytics to see what went wrong. Deena provides an extensive report including click-through rates, audience segmentations, funnel path charts, and the results of a recent A/B test.
Tim sets up a Zoom call with the entire product team to share the findings and determine a course of action. Raj, the director of engineering, has always felt that the new design was faulty because the product images were too high resolution and hurt load time. Sally, the creative director, was certain that the photography didn't represent enough diversity and didn't resonate with the target audience. Keisha, the director of content, suggested that the product descriptions were too verbose and missing key information. Marvin, the UI designer, proclaimed that if they just made the add to cart button bigger, users would notice it more easily.
This scenario with the team at ClothezForU represents challenges that every product organization faces when making decisions that will affect the user. Like many data-driven organizations, there is a heavy reliance on measurable, quantitative data tied to specific product KPIs. Unfortunately, these measurements only provide one aspect of the experience and don’t help to understand it from the user’s perspective.
Every design choice impacts the decisions and perceptions that your users have. User research allows us to test hypotheses during both product development and post-launch. Research can also serve as a solid foundation for understanding the needs of the target persona and ensure that they are top of mind when deciding not only how to design the experience but what to consider building in the first place.
In reality, product design is both an art and a science, relying on an interpretation of user intent (aka “the why”) and actions (aka “the what”) based on objective and subjective data.
Behavioral data refers to information about the actions that users take while interacting with a product or service.
In the context of UX design, behavioral data can be used to understand how users interact with a product and what they do within it. This can include things like how long they spend on a particular page, what buttons they click on, and how they navigate through the product.
By measuring these behaviors, designers and researchers can get a sense of what is and isn’t working well for users, and make informed decisions about how to improve the product.
Measuring the “what” in user experience refers to understanding what users do within a product, rather than how they feel about it or what they think of it. While attitudinal data (such as surveys or interviews) can provide valuable insights into users’ thoughts and feelings, behavioral data can provide a more objective measure of how users actually interact with the product.
By analyzing behavioral data, designers and researchers can identify problems and opportunities for improvement in the product, and make changes to enhance the user experience.
The team at ClothezForU already understood the value of using quantitative behavioral data to evaluate their users’ experience. Capturing what the user is doing through behavioral data is an objective measurement that is not open to interpretation (i.e., they either clicked or they didn’t click).
Product teams are held to KPIs based on end results, including the actions that the user takes (e.g., makes a purchase, signs up for an account, clicks on an advertisement). Performance metrics can also indicate positive or negative trends, such as changes in site traffic, bounce rates, and time spent on a page.
Analytics can provide a clear baseline for measuring how users perform key tasks, such as making a purchase or signing up for an account.
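As a minimal sketch of what such a baseline might look like in practice, the snippet below computes funnel conversion rates from a raw event log. The event names, log format, and numbers are all hypothetical, not taken from any specific analytics platform:

```python
# Hypothetical event log: (session_id, event_name) pairs exported from analytics
events = [
    ("s1", "view_product"), ("s1", "add_to_cart"), ("s1", "purchase"),
    ("s2", "view_product"),
    ("s3", "view_product"), ("s3", "add_to_cart"),
]

def funnel_rates(events, steps):
    """Share of sessions reaching each funnel step, relative to the first step."""
    sessions = {step: set() for step in steps}
    for session_id, event in events:
        if event in sessions:
            sessions[event].add(session_id)
    base = len(sessions[steps[0]]) or 1  # avoid division by zero on empty logs
    return {step: len(ids) / base for step, ids in sessions.items()}

rates = funnel_rates(events, ["view_product", "add_to_cart", "purchase"])
print(rates)  # e.g., 2 of 3 viewing sessions add to cart, 1 of 3 purchases
```

Tracked over time, a drop in one of these ratios after a redesign is exactly the kind of signal that prompted Rebecca's email.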
User research methods such as A/B or multivariate testing can help to understand the cause and effect of certain design decisions. For example, Deena, the data analyst at ClothezForU, worked with Marvin, the UI designer, to determine if changing the size of the add to cart button would have any significant impact on the number of times a user would click on this button.
The power of these metrics is in volume. Behavioral data collected from analytics platforms aggregate thousands if not millions of user actions to paint a clear picture of what users are doing with sample sizes that are usually statistically significant.
In these situations, A/B tests can clearly demonstrate that changes are having an actual impact on user actions. The fundamental problem with these tests is that they cannot determine with certainty what is actually driving those actions.
This uncertainty is magnified by the number of changes that are made between versions. In the situation faced by ClothezForU, their team made numerous changes to the page as part of a redesign. How would they know which changes truly influenced user behavior and by how much?
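To make the "statistically significant" part concrete: one common way to check whether a click-rate lift from a single change (such as Marvin's bigger button) is real rather than noise is a two-proportion z-test. The sketch below uses entirely hypothetical click counts:

```python
import math

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: does variant B's click rate differ from A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: original vs. larger add to cart button
p_a, p_b, z, p = two_proportion_z_test(1200, 40000, 1320, 40000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

A p-value below 0.05 says the lift is unlikely to be chance, but — as the article notes — it says nothing about why users clicked more, and it can't untangle the effect of one change from the dozens shipped in the same redesign.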
Behavioral data is often thought of as exclusively quantitative, but it doesn't have to be. Qualitative insights can be drawn from behavioral data that describes the actions a user is performing and, in some cases, even what they are seeing.
Many analytics platforms now offer the ability to capture recordings of each user session. These recordings can be replayed to see what interactions took place, including a screen capture of the page, mouse movements and clicks, scrolling behavior, and text entry. Product teams can then analyze these behaviors to gain insight into what the user was doing as they completed a task.
Session replays are like sitting next to users and observing them as they interact with your product, but without the ability to ask them questions about their experience. This powerful capability allows us to identify behavior patterns that are common across users, which can then be probed more deeply to understand the why behind those actions.
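One behavior pattern commonly mined from replay data is the "rage click" — the same element clicked repeatedly in a short burst, a frequent sign of frustration. The sketch below is a simplified, hypothetical detector; real platforms use their own heuristics and event formats:

```python
def find_rage_clicks(events, window_ms=1000, min_clicks=3):
    """Flag targets clicked min_clicks or more times within window_ms.
    `events` is a list of (timestamp_ms, event_name, target) tuples."""
    clicks = [(t, target) for t, name, target in events if name == "click"]
    flagged = set()
    for i, (start, target) in enumerate(clicks):
        burst = [t for t, tgt in clicks[i:]
                 if tgt == target and t - start <= window_ms]
        if len(burst) >= min_clicks:
            flagged.add(target)
    return flagged

# Hypothetical replay stream: the user hammers an unresponsive button
replay = [
    (100, "click", "add_to_cart"),
    (400, "click", "add_to_cart"),
    (750, "click", "add_to_cart"),
    (5000, "scroll", "page"),
    (9000, "click", "size_guide"),
]
print(find_rage_clicks(replay))  # {'add_to_cart'}
```

A spike in rage clicks on the add to cart button would have given the ClothezForU team a much sharper starting point than opinions traded on a Zoom call.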
More advanced user research methods such as eye tracking allow us to see exactly what the user looks at as they interact with a product. Similar to session replays, eye tracking can record the user's screen and actions with a real-time overlay of their eye gaze. Understanding both what the user does and what they look at can be a very insightful combination, revealing behaviors that clicks and scrolls alone can't.
The primary method for qualitative eye tracking analysis is to watch the real-time eye movement recordings and manually code the observed behaviors. Eye tracking is also capable of quantitative data insights based on Areas of Interest (AOIs) where specific regions of the page are defined to determine how frequently and how long users looked at these areas.
Behavioral data from eye tracking is limited to providing insights based on what the user was seeing and doing. For example, eye tracking can’t tell us why a user spent a long time looking at an image: is it because they liked the image, were confused by it or found it repulsive?
Attitudinal data refers to information about users’ thoughts, feelings, and opinions about a product or service.
In the context of UX design, attitudinal data can be used to understand why users feel the way they do about a product, and what drives their behaviors and decisions. This can be obtained through methods such as surveys, interviews, and focus groups in which users are asked about their attitudes toward the product, as well as their preferences, needs, and motivations.
Measuring the “why” in user experience refers to understanding the underlying reasons behind users’ actions and behaviors. While behavioral data (such as log data or usability testing results) can provide valuable insights into what users do within a product, attitudinal data can help designers and researchers understand why they do it.
By analyzing attitudinal data, designers and researchers can gain a deeper understanding of users’ motivations, needs, and preferences, and use this information to design products that better meet their needs and expectations.
Over lunch, Deena vents to Rochelle, the team's user researcher, about the pressure Tim is putting on the team to fix the product display page layout. Rochelle suggests that they conduct a usability test to understand the experience from the user's perspective.
User experience designers and researchers often use qualitative methods such as usability testing to understand their users’ perspectives. This form of research collects both behavioral and attitudinal data informed by tasks that the user attempts to complete.
During these studies, participants are encouraged to think aloud so researchers can better understand what they are thinking as they interact with the product. Usability testing helps answer questions about why users behave the way they do.
Usability tests help us gain a deeper understanding of the user’s mental model to optimize how they interact with a product. This form of research is typically done in real-time with a facilitator conducting the session in order to probe based on what they are observing and hearing from the study participants.
The limitations of this approach are that sample sizes tend to be relatively small and sessions are subject to the observer effect. Usability testing results are not a reliable way to learn whether users will behave the same way when interacting with the product on their own.
Understanding a user’s level of, and reasons for, satisfaction with an experience is an important part of capturing the why. Satisfaction data is most insightful when it is closely tied to a user’s most recent, narrowly focused behavior.
At ClothezForU, the Voice of the Customer (VOC) team has implemented a satisfaction survey that is triggered immediately after a customer completes a purchase. The metrics from the survey are closely watched and are regularly reported to the executive team during their quarterly reports.
This type of post-experience survey is helpful to understand the user’s perspective on the overall purchase process but does not provide an understanding of which specific aspects contributed to their rating.
Another type of feedback survey can be triggered by very specific user behaviors. For example, the ClothezForU team could have presented a short survey to users who are viewing the product display page after a set period or immediately after they clicked on an action to expand more details about the product.
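The trigger logic described above can be sketched as a small state machine. Everything here — the class name, event names, and thresholds — is a hypothetical illustration, not the API of any survey tool:

```python
DWELL_THRESHOLD_S = 30                       # show after 30 seconds on the page
TRIGGER_EVENTS = {"expand_product_details"}  # or right after this action

class SurveyTrigger:
    """Decides when to show a short in-page feedback survey.
    Fires at most once per page view."""

    def __init__(self):
        self.page_entered_at = None
        self.fired = False

    def on_page_enter(self, now):
        self.page_entered_at = now
        self.fired = False

    def on_event(self, event_name, now):
        if self.fired or self.page_entered_at is None:
            return False
        dwell_reached = now - self.page_entered_at >= DWELL_THRESHOLD_S
        if event_name in TRIGGER_EVENTS or dwell_reached:
            self.fired = True
            return True
        return False

trigger = SurveyTrigger()
trigger.on_page_enter(now=0)
print(trigger.on_event("scroll", now=5))                  # False: too early
print(trigger.on_event("expand_product_details", now=8))  # True: trigger event
print(trigger.on_event("expand_product_details", now=9))  # False: already shown
```

Tying the survey to a specific, just-completed behavior is what makes the resulting attitudinal data interpretable — the user is rating the thing they just did, not the whole visit.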
There are many methods for collecting attitudinal data at scale, spanning both qualitative and quantitative approaches.
It is important to remember that in contrast to behavioral data, attitudinal data is highly subjective and subject to numerous biases. Response sentiment can also be heavily weighted depending on whether the feedback request is solicited (e.g., an intercept survey) or unsolicited (e.g., a user clicks on a Give Feedback link).
The goal of understanding the what and why is to help identify the root causes of the actions that the user takes. Combining these strategies allows teams to make informed decisions based on triangulated data (not team member opinions).
Imagine how differently the team at ClothezForU might have approached the redesign if they had worked together to understand both the what and the why before making a design decision.
In order to take a user-centered design approach to product strategy, teams should become familiar with the many methods in the user research tool kit that support behavioral and attitudinal data collection.
Each method has strengths and weaknesses, and there is no one right approach to utilizing them. To get started with an overview of the most common research methods used in UX, I recommend reading Measuring the User Experience by Bill Albert and Tom Tullis, and for a deeper dive into quantitative metrics, Quantifying the User Experience by Jeff Sauro and James Lewis.