Making the right decision is hard
In the latest quarterly update at the fictitious company ClothezForU, Rebecca, the VP of product, learns about the abysmal online sales numbers after a recent redesign of the product display page.
She sends off an email to Tim, the product manager, to find out what happened. Tim immediately sends a Slack message to Deena, the data analyst, to pull all of the latest analytics to see what went wrong. Deena provides an extensive report including click-through rates, audience segmentations, funnel path charts, and the results of a recent A/B test.
Tim sets up a Zoom call with the entire product team to share the findings and determine a course of action. Raj, the director of engineering, had always felt that the new design was faulty because the product images were too high resolution and impacted load time. Sally, the creative director, was certain that the photography did not represent enough diversity and didn't resonate with the target audience. Keisha, the director of content, suggested that the product descriptions were too verbose and were missing key information. Marvin, the UI designer, proclaimed that if they just made the add to cart button bigger, users would more easily notice it.
This scenario with the team at ClothezForU represents challenges that every product organization faces when making decisions that will affect the user. Like many data-driven organizations, ClothezForU relies heavily on measurable, quantitative data tied to specific product KPIs. Unfortunately, these measurements capture only one aspect of the experience and don't help teams understand it from the user's perspective.
Every design choice impacts the decisions and perceptions of your users. User research allows us to test hypotheses during both product development and post-launch. Research can also serve as a solid foundation for understanding the needs of the target persona, ensuring that those needs are top of mind when deciding not only how to design the experience but what to build in the first place.
In reality, product design is both an art and a science, relying on an interpretation of user intent (aka “the why”) and actions (aka “the what”) based on objective and subjective data.
- Behavioral data: Understanding the “what”
- Not all behavioral metrics are quantitative
- Attitudinal data: Understanding the “why”
- The power of attitudinal and behavioral data combined
Behavioral data: Understanding the “what”
What is behavioral data?
Behavioral data refers to information about the actions that users take while interacting with a product or service.
In the context of UX design, behavioral data can be used to understand how users interact with a product and what they do within it. This can include things like how long they spend on a particular page, what buttons they click on, and how they navigate through the product.
By measuring these behaviors, designers and researchers can get a sense of what is and isn’t working well for users, and make informed decisions about how to improve the product.
Measuring the “what” in user experience refers to understanding what users do within a product, rather than how they feel about it or what they think of it. While attitudinal data (such as surveys or interviews) can provide valuable insights into users’ thoughts and feelings, behavioral data can provide a more objective measure of how users actually interact with the product.
By analyzing behavioral data, designers and researchers can identify problems and opportunities for improvement in the product, and make changes to enhance the user experience.
How can a UX specialist use behavioral data?
The team at ClothezForU already understood the value of using quantitative behavioral data to evaluate their users’ experience. Capturing what the user is doing through behavioral data is an objective measurement that is not open to interpretation (i.e., they either clicked or they didn’t click).
Product teams are held to KPIs based on end results, including the actions that the user takes (e.g., makes a purchase, signs up for an account, clicks on an advertisement, etc.). Performance metrics can also indicate positive or negative trends such as changes in site traffic, bounce rates, and time spent on a page.
Analytics can provide a clear baseline for measuring how users perform certain tasks, such as:
- Are they making mistakes such as triggering an error message?
- Are they taking a less-than-optimal path to complete their task?
- Is it taking them a lot longer than expected to reach their goal?
- Do they rely on instructional elements such as tooltips, walkthroughs, or video tutorials?
User research methods such as A/B or multivariate testing can help to understand the cause and effect of certain design decisions. For example, Deena, the data analyst at ClothezForU, worked with Marvin, the UI designer, to determine if changing the size of the add to cart button would have any significant impact on the number of times a user would click on this button.
The power of these metrics lies in volume. Behavioral data collected from analytics platforms aggregates thousands, if not millions, of user actions to paint a clear picture of what users are doing, with sample sizes that are usually statistically significant.
In these situations, A/B tests can clearly demonstrate that changes are having an actual impact on user actions. The fundamental problem with these tests is that they cannot determine with certainty what is actually driving those actions.
This uncertainty is magnified by the number of changes that are made between versions. In the situation faced by ClothezForU, their team made numerous changes to the page as part of a redesign. How would they know which changes truly influenced user behavior and by how much?
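To make the A/B comparison concrete, here is a minimal sketch of how a team like Deena and Marvin's might check whether a difference in click-through rates is statistically meaningful. The click counts are hypothetical, and this uses a standard two-proportion z-test rather than any specific analytics platform's method:

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a, visitors_a, clicks_b, visitors_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    # Pooled proportion under the null hypothesis (no real difference)
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical counts: original vs. enlarged add to cart button
rate_a, rate_b, p = two_proportion_z_test(320, 10_000, 380, 10_000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p = {p:.3f}")
```

A p-value below 0.05 would suggest the larger button genuinely changed click behavior, but, as noted above, it still wouldn't tell the team *why* users clicked more.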
Not all behavioral metrics are quantitative
Behavioral data is often thought of as exclusively quantitative, but it doesn't have to be. Qualitative insights can also be drawn from behavioral data that describes the actions a user performs and, in some cases, what they are seeing.
Observing individual users with session recordings
Many analytics platforms now offer the ability to capture recordings from each user session. These recordings can be replayed to see what interactions took place including a screen capture of the page, mouse movements and clicks, scrolling behavior, and text entry. Product teams can then analyze these behaviors to gain insight into what the user was doing as they were completing a task such as:
- Where does their mouse cursor go as they explore the page?
- Do they tend to quickly scroll down the page or do they spend most of their time on certain content?
- Do they appear to change their actions in response to what is happening on the screen?
- What words are they typing into text fields? In what situations do they revise what they have typed in?
- How do they respond to any errors while completing a task?
Session replays are like sitting next to individual users and observing as they interact with your product, but without the ability to ask them questions about their experience. This capability allows us to identify trending behaviors across individual users, which can then be probed more deeply to understand the why behind those actions.
Using eye-tracking technology to see what the user sees
More advanced user research methods such as eye tracking allow us to see exactly what the user looks at as they interact with a product. Similar to session replays, eye tracking can record the user's screen and actions with a real-time overlay of their eye gaze. Understanding what the user does and what they look at can be a very insightful combination. Eye tracking can help us to identify user behaviors such as:
- What initially draws their attention and does it appear to influence their actions?
- What path do their eyes follow as they interact with the product?
- Do they appear to be visually engaged with certain areas of the page? When do they lose attention?
- When do they scan, skim, or actually read the content?
- What elements of the page go completely unnoticed?
The primary method for qualitative eye tracking analysis is to watch the real-time eye movement recordings and manually code the observed behaviors. Eye tracking can also yield quantitative insights based on Areas of Interest (AOIs), where specific regions of the page are defined to determine how frequently and how long users looked at those areas.
Behavioral data from eye tracking is limited to insights about what the user was seeing and doing. For example, eye tracking can't tell us why a user spent a long time looking at an image: is it because they liked the image, were confused by it, or found it repulsive?
Attitudinal data: Understanding the “why”
What is attitudinal data?
Attitudinal data refers to information about users’ thoughts, feelings, and opinions about a product or service.
In the context of UX design, attitudinal data can be used to understand why users feel the way they do about a product, and what drives their behaviors and decisions. This can be obtained through methods such as surveys, interviews, and focus groups in which users are asked about their attitudes toward the product, as well as their preferences, needs, and motivations.
Measuring the “why” in user experience refers to understanding the underlying reasons behind users’ actions and behaviors. While behavioral data (such as log data or usability testing results) can provide valuable insights into what users do within a product, attitudinal data can help designers and researchers understand why they do it.
By analyzing attitudinal data, designers and researchers can gain a deeper understanding of users’ motivations, needs, and preferences, and use this information to design products that better meet their needs and expectations.
Usability testing to understand the user’s experience
Over lunch, Deena vents to Rochelle, the team's user researcher, about the pressure Tim is putting on the team to fix the product display page layout. Rochelle suggests that they conduct a usability test to understand the experience from the user's perspective.
User experience designers and researchers often use qualitative methods such as usability testing to understand their users’ perspectives. This form of research collects both behavioral and attitudinal data informed by tasks that the user attempts to complete.
During these studies, participants are encouraged to think aloud to better understand what they are thinking as they interact with the product. Usability testing can help to answer questions such as:
- What are the user’s wants and needs?
- Why do they take (or not take) certain actions?
- What do they like or dislike?
- What do they find confusing?
Usability tests help us gain a deeper understanding of the user's mental model to optimize how they interact with a product. This form of research is typically conducted in real time, with a facilitator probing based on what they observe and hear from the study participants.
The limitations of this approach are that these studies tend to have relatively small sample sizes and are subject to the observer effect. Usability testing results are not the best way to learn whether users will actually replicate their actions when interacting with the product on their own.
Measuring satisfaction with surveys
Understanding a user's level of and reasons for satisfaction with an experience is an important part of capturing the why. Data on satisfaction is most insightful when it is closely tied to a user's most recent, narrowly focused behavior.
At ClothezForU, the Voice of the Customer (VOC) team has implemented a satisfaction survey that is triggered immediately after a customer completes a purchase. The metrics from the survey are closely watched and are regularly reported to the executive team during their quarterly reports.
This type of post-experience survey is helpful to understand the user’s perspective on the overall purchase process but does not provide an understanding of which specific aspects contributed to their rating.
Another type of feedback survey can be triggered by very specific user behaviors. For example, the ClothezForU team could have presented a short survey to users who are viewing the product display page after a set period or immediately after they clicked on an action to expand more details about the product.
There are many other methods for collecting attitudinal data, both qualitative and quantitative, at larger scale, including:
- In-app/website feedback forms
- Post-release surveys
- Social media groups/product forum posts
- Support cases/call logs/client emails
- NPS surveys
- Standardized usability surveys (e.g., SUS, QUIS)
It is important to remember that, in contrast to behavioral data, attitudinal data is highly subjective and prone to numerous biases. Response sentiment can also be heavily weighted depending on whether the feedback request is solicited (e.g., an intercept survey) or unsolicited (e.g., a user clicks on a Give Feedback link).
The power of attitudinal and behavioral data combined
The goal of understanding the what and why is to help identify the root causes of the actions that the user takes. Combining these strategies allows teams to make informed decisions based on triangulated data (not team member opinions).
Here is a scenario of how the ClothezForU team could have worked together to understand both the what and the why to make an informed design decision:
- The product team noticed a significant decline in online sales after redesigning the product display page.
- Team members who collect attitudinal and behavioral data met to build a holistic understanding of the users' experience to share with the larger product team.
- The analytics team found that only 15% of users clicked to enlarge the product image. Users that clicked to enlarge were 75% more likely to add the item to their shopping cart than those that did not enlarge the image.
- Findings from a usability test found that most participants did not realize that the product image was clickable. There weren’t any visual affordances that indicated they could interact with it. Participants were expecting a hover effect or an icon to indicate that they could click to zoom in.
- The Voice of the Customer (VOC) team’s product display page intercept survey results found that respondents want to see a close-up image of the product to determine its texture, quality, and unique features. They also want to see the product from different perspectives and to be able to rotate the image 360 degrees.
- The user experience designer worked with a user researcher to conduct additional rounds of usability testing to evaluate and iterate on several new design concepts with users.
- The team finalized the design after receiving consistent positive user feedback during the design and research process.
- The product team launched a revised product page and worked with the VOC team to introduce a short targeted survey that triggered for 10% of the users who interacted with the new product image viewing feature.
- The analytics team measured how users engaged with the feature and reported on how often that interactivity resulted in the product being added to the shopping cart.
- The product team provided the complete story to their executives about how they were able to a) identify the reasons behind the poor sales performance, b) involve users to understand their needs and validate a new design strategy, and c) demonstrate user behaviors and reactions after launching the new design.
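The analytics finding in the scenario above (15% of users enlarged the image, and those users were 75% more likely to add the item to their cart) can be reproduced from raw session data with a simple segmentation. The session records and counts below are illustrative, chosen to match the article's figures:

```python
def segment_rates(sessions):
    """Compare add to cart conversion for users who did vs. didn't enlarge the image."""
    groups = {True: [0, 0], False: [0, 0]}  # enlarged -> [added, total]
    for s in sessions:
        groups[s["enlarged"]][1] += 1
        if s["added_to_cart"]:
            groups[s["enlarged"]][0] += 1
    rate = {k: added / total for k, (added, total) in groups.items()}
    enlarge_rate = groups[True][1] / (groups[True][1] + groups[False][1])
    lift = rate[True] / rate[False] - 1  # relative lift from enlarging
    return enlarge_rate, rate, lift

# Illustrative data: 1,500 of 10,000 sessions enlarged the image; those
# sessions convert at 14% vs. 8% for the rest — a 75% relative lift
sessions = (
    [{"enlarged": True, "added_to_cart": True}] * 210
    + [{"enlarged": True, "added_to_cart": False}] * 1290
    + [{"enlarged": False, "added_to_cart": True}] * 680
    + [{"enlarged": False, "added_to_cart": False}] * 7820
)
enlarge_rate, rates, lift = segment_rates(sessions)
print(f"enlarged: {enlarge_rate:.0%}, lift: {lift:.0%}")
# prints "enlarged: 15%, lift: 75%"
```

Segmenting conversion by a single behavior like this is what surfaced the opportunity; the usability test and VOC survey then supplied the why.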
In order to take a user-centered design approach to product strategy, teams should become familiar with the many methods in the user research tool kit that support behavioral and attitudinal data collection.
Each method has strengths and weaknesses, and there is no one right approach to utilizing them. To get started with an overview of the most common research methods used in UX, I recommend reading Measuring the User Experience by Bill Albert and Tom Tullis, and for a deeper dive into quantitative metrics, Quantifying the User Experience by Jeff Sauro and James Lewis.