Chris Holland is Director of User Experience at Emory Healthcare. He began his career in semiconductor engineering at GlobalFoundries before transitioning to EMR quality control at Mass General Brigham (MGB). From there, Chris spent nearly 10 years working in implementation testing, project management, and usability program management at MGB before moving to Emory Healthcare.
In our conversation, Chris talks about how his team’s goal is to be a UX cultural epicenter: when they can’t work on something directly, they develop a local model so project teams can render their own evaluations or conduct their own user research. He discusses how he encourages his team to teach each other, as well as the rest of the organization, how to consider and engage users effectively at every stage. Chris also shares the importance of understanding the multiple layers of impact UX changes can have.
We aim to deliver a world-class user experience for our healthcare workers. That means we need to understand and shape our strategy around the needs of over 100 different user roles across our healthcare network of over 70 specialties — and that’s just our internal staff. Of course, there’s also the patient side, which we consider alongside dedicated patient experience teams.
It’s such a diverse user base with so many different workflow and technology needs. We need to inform the overall strategy because UX lies within all parts of the digital process — whether it’s through our system’s design, support, or training. There are so many opportunities to affect user experience that, ultimately, it’s the responsibility of everyone involved. Our role is more about centralizing understanding, reporting out, and helping teams use analytics and user research to improve UX. That’s very much the goal and vision of our department.
Yes — we want to learn from other organizations. UX is a little unique in healthcare. Just because we’re in healthcare doesn’t mean we get a pass on holding ourselves accountable to developing the optimal user experience and making that a differentiating asset for Emory. We want those who work here to have the best technology experience possible. To accomplish that, we need to learn from other organizations, whether they’re in healthcare or sit outside of the industry. I want our team to be held to the same standards to which other service delivery organizations are held.
Our goal is that user-centric ideologies and best practices don’t stop at the boundaries of our team. We want to drive and permeate a UX culture across Emory just as much as we want to render the work ourselves. Also, we want to make sure that everyone holds themselves accountable for how their work impacts the user experience. Whether you’re a project manager, developer, trainer, or even leader, we want to ensure you have the tools, optics, and methodologies to account for the user at any stage.
I like to emphasize that my team teaches as well as works. I want them to know that they’re not just responsible for measuring and driving the user experience, but for instilling a culture of user experience in the teams with whom we work. With that, I look at the teams we work with as genuine partners. We want to enable all teams to do what’s best for our users — whether that’s providing analytics tools to surface user behaviors and pain points, teaching other teams how to engage users effectively through feedback channels, or hosting workshops and events. That’s all within the scope of our team.
Recently, we put on a UX summit where we had attendees from various teams, such as clinical informatics, application developers, and researchers. We even had major outside contributors from other technology industries come and share their practices. The goal was to make sure that this knowledge base becomes second nature across our Emory network.
It’s been received really well. We’ve had some early successes where teams that have partnered with us understand how they can apply user-centric methodologies or evaluation capabilities early in their designs. That way, upon release, we help them see and learn from the impact of their work. That has gotten other teams excited to see that, wow, the scope and impact of their work don’t just stop at deployment! Rather, let’s take time to measure and learn from the effects it has across the user base post-deployment.
Overall, this has created a very positive culture. We’re now consulted for many different projects that impact our staff’s wellbeing. We get knocks on our door when teams ideate on how best to deploy something for our users or what kind of solution design makes sense for them. They look to us for help to make sure they get things right on the first try. That natural advocacy, in and of itself, has been a huge success.
With any team, there’s a balancing act of figuring out any immediate work that needs to get done vs. the structure that needs to be put in place. We developed criteria to understand the measurable scale of what’s being released and the impact it’s going to have, and that helps us understand and prioritize the work that most directly affects our user experiences.
Also, when we can’t consult or directly work on something ourselves, we develop a local model so that project teams can render their own evaluations or do their own user research. It comes back to that hub-and-spoke model where we want to be a UX cultural epicenter. The idea is to empower teams to do this work, especially when we can’t directly take it on ourselves.
Of course, some time budgeting happens around the immediate evaluation work vs. what we carve out and maintain for program development. This means looking at our analytical platforms and how we can improve them, or how we can expand our user research toolkit.
Lastly, we emphasize being innovative and deploying lean methodologies to keep up with a fast-paced environment. We want to be cutting-edge. I very much believe that no matter when we’ve studied or practiced, the UX field is going to keep growing with or without us. To keep up, we run internal professional development cycles where I empower my staff to go out every quarter and research anything of interest related to usability — as long as they teach it to the team after and we make it available as a workshop or asset for the greater Emory network.
With UX, it is a challenge. Other departments might be able to show a more refined ROI, whereas it’s sometimes hard to attach a discrete KPI to preventative UX work. I believe a successful UX department should always be able to tell you what your good experiences are — whether it’s by role, workflow, or technology. You should always be able to go to the UX department and understand what’s working well.
In the same vein, they can tell you about your suboptimal experiences and areas for improvement. The most important thing a UX team can do — even more than identify where things aren’t working — is advise teams on how to improve them. If you’re able to give informed feedback on what doesn’t work and why, then you’re equipping your teams to actually solve what matters most to your users.
We tackle this in two different contexts based on the service lines that we have. The first is doing evaluations by product or workflow. We regularly meet with project teams to render user-centric methods early on. That way, we make sure that we’re building things optimally, applying best practices for how humans and computer systems interact. We also work on developing evaluation KPIs and criteria post-release so we can measure the value or intended effect of an effort.
The second is measuring the experience by user base. We want to understand, regardless of whether there’s new work entering the system, how our Emory users are experiencing the current state. We developed a novel way of iteratively getting a pulse on our user base’s digital satisfaction. We use the electronic health record, which is essentially the digital front door of where they go to work, to get a pulse metric on how they are doing digitally and what we can do to improve it. This simple question is released every day to a fraction of our user base throughout the year for rolling feedback. With a 30 percent response rate, we can establish a strong baseline of digital satisfaction ratings and segment it by sites, roles, clinical specialties, and more.
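As a rough sketch of how a rolling pulse like this might be sampled and aggregated (every name, field, and number below is a hypothetical illustration, not Emory’s actual implementation):

```python
import random
from collections import defaultdict

# Hypothetical user directory; fields mirror the segments mentioned above
users = [
    {"id": i,
     "site": random.choice(["Site A", "Site B", "Site C"]),
     "role": random.choice(["RN", "MD", "Scheduler"])}
    for i in range(10_000)
]

DAILY_FRACTION = 1 / 365  # spread the single question across the year

def todays_sample(population, fraction=DAILY_FRACTION):
    """Pick today's slice of the user base to receive the one-question pulse."""
    k = max(1, int(len(population) * fraction))
    return random.sample(population, k)

def rolling_baseline(responses):
    """Aggregate 1-5 satisfaction scores by segment for the rolling baseline."""
    by_segment = defaultdict(list)
    for r in responses:
        by_segment[("site", r["site"])].append(r["score"])
        by_segment[("role", r["role"])].append(r["score"])
    return {seg: round(sum(s) / len(s), 2) for seg, s in by_segment.items()}

# Simulate roughly the 30 percent response rate mentioned above
responses = [{**u, "score": random.randint(1, 5)}
             for u in todays_sample(users) if random.random() < 0.30]
print(rolling_baseline(responses))
```

One appeal of this design is that spreading the question across the year means no individual user is surveyed often, which plausibly helps sustain that response rate.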
Each project has its own runway and scope. It all gets shaped to the work and when it needs the insights most. For projects that maybe have a smaller runway, we might develop something that’s a bit more facile — it gets the team immediate insights that they need to drive a successful effort forward. Projects that have a longer runway and more degrees of freedom in design, on the other hand, can be scoped into a larger effort. We’ll run workshops and go through exercises to make sure we design things right. We’ll even go a layer deeper with user research and host independently proctored user interviews, for example.
Healthcare is a melting pot of different workflows, technologies, and user bases. We treat each effort uniquely, though we have developed a standardized evaluation method that we bring into each one. There are a lot of experts across the clinical community, and when we do a deep dive, we partner with the relevant subject matter experts to understand the workflows and direct user impacts.
It also helps us understand the environment and see some of the secondary impacts. It’s not always the perceived benefits and challenges that we need to validate — sometimes, secondary and tertiary impacts end up in scope as well. One example of this is AI messaging. We were brought into this effort because, originally, the scope focused on increasing efficiency for users. Rather than having to draft messages from scratch, AI could help start a message back to a patient, and the provider could just review and lightly edit it before sending.
While we’re seeing marginal gains when it comes to efficiency, one user base that was never brought up during the initial scoping was the patients. The quality of the AI-drafted messages they receive may have a profound impact on them. There could definitely be a quality benefit, with patients receiving messages that carry additional detail to better assist them.
As mentioned, we have a rolling user-based satisfaction program where we iteratively collect how folks are feeling about the technology. We hold a weekly review session to look at every single score and comment that comes in. And when I say we, it’s not just the UX team, it’s also leaders and analysts at Emory from informatics, development teams, training teams, and support teams.
This group gets to listen to the voices of the users that we serve. We’re empowered to review and respond to each piece of feedback that comes in. To date, we’ve sent over 1,500 responses back to our users. We make sure that anyone who engages with this program hears back.
We’ve found that ultimately, survey fatigue isn’t about the survey itself, but the impression of what’s actually going to be done with it. We want to develop a closed-loop system and channel these experiences accordingly. If it’s a positive experience, we want to acknowledge and thank the user for that feedback. We also let the teams responsible know to keep doing what they’re doing. For the experiences that have a bit more “turbulence,” we want to direct that to its appropriate endpoint. That might look like opening tickets, 1:1 training, or submitting enhancement ideas on the respondent’s behalf.
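A minimal sketch of what that closed-loop routing might look like, assuming a 1-5 score and a free-text comment (the endpoints and keyword triggers here are illustrative placeholders, not Emory’s actual logic):

```python
def route_feedback(score: int, comment: str) -> str:
    """Route one pulse response to a closed-loop endpoint.

    Positive scores get acknowledged; rougher experiences are
    directed to a support, training, or enhancement channel.
    """
    text = comment.lower()
    if score >= 4:
        return "thank the user and flag the owning team's good work"
    if any(word in text for word in ("slow", "error", "crash")):
        return "open a support ticket on the respondent's behalf"
    if any(word in text for word in ("confusing", "how do i", "training")):
        return "offer 1:1 training"
    return "submit an enhancement idea on the respondent's behalf"

print(route_feedback(2, "The login screen is slow every morning"))
# -> open a support ticket on the respondent's behalf
```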
A broader instance of this occurred recently when we saw an uptick in sentiment around our system performance. Comments mentioned systems were moving a little slower, and our providers needed to log in and out of our system to get it to work, so all of those little seconds add up. It’s not necessarily something they log a ticket for, but they expressed it as being difficult.
So, we charted the trend and showed our infrastructure and network teams that we had reason to believe something was going on. It actually correlated with an event in the system, and they responded accordingly. This iterative feedback captures a channel that doesn’t always surface within an organization. A lot of times, organizations just look to their ticket queues or leadership councils to decide what to do next. Here, the voice of the user regularly captures a layer of friction that doesn’t always make it to a standard endpoint.
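For illustration, here’s one simple way a dip like that could be flagged from daily pulse scores (the window size, threshold, and numbers are arbitrary assumptions, not our production logic):

```python
from statistics import mean, stdev

def flag_dips(daily_scores, window=14, threshold=2.0):
    """Flag days whose average pulse score falls more than `threshold`
    standard deviations below the trailing `window`-day baseline."""
    flagged = []
    for i in range(window, len(daily_scores)):
        baseline = daily_scores[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and daily_scores[i] < mu - threshold * sigma:
            flagged.append(i)
    return flagged

# A stable run of daily averages with a performance-driven dip at the end
scores = [4.2, 4.1, 4.3, 4.2, 4.0, 4.2, 4.1, 4.3, 4.2, 4.1,
          4.2, 4.3, 4.1, 4.2, 4.1, 4.1, 3.2]
print(flag_dips(scores))  # -> [16], the day of the dip
```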
AI is at an interesting point that many industries, healthcare included, are still exploring. Teams are seeing where it works well and where it doesn’t. AI is not something that’s a proven commodity in all instances. We’re seeing AI assistance deployed on our phones, websites, and call trees — all to varying degrees of effectiveness.
We have the same phenomenon going on in healthcare now as well. There are certain generative AI uses that have clear-cut value, such as using AI to help with clinical documentation, which helps providers get through their notes and on with their lives. Then there are unknown areas for AI’s ROI, such as helping summarize large swaths of patient data in meaningful ways.
Of course, there’s a risk that comes with that application as well. Does it over-summarize, or does it create meaningful enough output that users will keep coming back? Or is it sort of a novelty? Is it embedded at a usable, practical point in the workflow? This is especially important because AI costs are usually per prompt or token, so we need to see if these are meaningful exchanges. Emory has a unique challenge ahead in developing the experience ROI to understand when these technologies go from being a neat commodity to tools that are practically useful.
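As a back-of-the-envelope example of that experience ROI math (every price and count below is a made-up assumption, purely to show the shape of the calculation):

```python
# Every figure here is a hypothetical assumption, not a real vendor quote
PRICE_PER_1K_TOKENS = 0.01           # dollars, hypothetical rate
TOKENS_PER_SUMMARY = 2_000           # prompt plus completion
SUMMARIES_PER_PROVIDER_PER_DAY = 25
PROVIDERS = 3_000
MINUTES_SAVED_PER_SUMMARY = 1.5      # only counts if the output is actually used

daily_cost = (
    (PRICE_PER_1K_TOKENS / 1_000)
    * TOKENS_PER_SUMMARY
    * SUMMARIES_PER_PROVIDER_PER_DAY
    * PROVIDERS
)
daily_hours_saved = (
    MINUTES_SAVED_PER_SUMMARY * SUMMARIES_PER_PROVIDER_PER_DAY * PROVIDERS / 60
)

print(f"Daily token spend: ${daily_cost:,.0f}")                                # $1,500
print(f"Cost per provider-hour saved: ${daily_cost / daily_hours_saved:.2f}")  # $0.80
```

The sensitivity is the interesting part: if only half the summaries are genuinely used, the token spend stays the same while the hours saved halve and the ratio doubles. That’s the gap between a neat commodity and a practically useful tool.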