Thach Nguyen is Senior Director of Product Management — STEPS at Stewart Title. He started his career as a lead business analyst at First American before becoming a product manager at Cloudvirga, an intelligent mortgage automation platform. Thach then rejoined First American as a senior product manager before transitioning to Rocket Companies. Before his current role at Stewart Title, he served as Director of Product Management at NotaryCam and Senior Director of Product Management — Sequoia at First American.
In our conversation, Thach talks about the importance of candid moments and human error in the age of AI. He shares the elements of empathy that technology can’t replace, as well as how he fosters and promotes trust in product teams.
I think it’s most impactful when it comes to teams learning how to use AI, at least from my experience in these early stages. It gives you the ability to understand product management 101. You can ask AI to do almost anything for you, from research to helping you iron out a process. AI is also really good at surfacing trends in data. It can predict behavior based on the information that you feed it. Backlog grooming is one tangible example.
With that said, I don’t think AI will ever be able to replace the insights that come from bumping into somebody in the hallway and chatting, or hopping on a quick Slack call. You can’t replace that because those interactions happen on the spur of the moment, and the ideas that surface in those candid moments can’t be replicated.
There’s also the human judgment part of product management that still requires you to read the room and understand what the people you’re talking to are feeling. The same goes for resolving stakeholder tension or making trade-offs when you don’t yet have all the information you need.
That’s a good question. With anything related to technology, when you get used to getting the answer that you need the first time, you don’t really question the method. Let’s say you use a specific AI for something, and the answers that it gives you haven’t led you astray yet. But at the same time, you don’t know if you could have done something differently or faster. In that respect, I try to remind myself and my team that at the end of the day, AI gives you the “what.”
It’s all data-driven — AI responses reflect what you feed it, and it answers based on your prompt. So, how do you know that the prompt you gave AI was the right one to get the best answer? It’s all trial and error. AI gives us the “what,” but humans bring the empathy for the problem statement we’re trying to solve. It’s an extra level of understanding that we don’t have a shortcut for.
I’m lucky because I work in home title insurance, and we’re trying hard to use machine learning to drive our process and build systems. However, it’s also weird because we’re trying to use data to drive decisions, but humans are the ones who work the process. They’re doing the underwriting and evaluating land to insure it. That process is still kind of treated as an art.
Anybody who has worked in this industry for a while knows that there are a lot of nuances that you can’t get from AI. In our space, I feel lucky that I don’t have to push for that extra level of empathy because it’s already ingrained in what we do. Any office you speak to in this industry will tell you upfront that they operate uniquely and that we can’t standardize what they’re doing. It’s easy for us to add that extra level of human touch, because it can’t just be all data-driven.
From my experience, another tangible example of AI incorporation is a chatbot or automated help guide. Again, it’s based on feeding data into the machine. If you have good user manuals with clear instructions, you can have AI sit at the front lines of the help desk instead of a person.
This was happening at Rocket Mortgage, and we’re thinking about it here at Stewart as well. Any complex insurance system warrants a lot of user questions, especially if you’re trying to build a brand new system from scratch. We welcome AI in that way for sure, but it usually starts with some kind of automated help — and it’s only useful if it’s well built. I’ve also seen it executed poorly, and that’s even more frustrating.
One other big learning was that if you’re going to put a chatbot in front of someone, you need to give users the ability to exit and talk to a real person.
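The pattern described here — answer from existing help content, but always let the user escalate to a real person — can be sketched in a few lines. This is a minimal illustration, not the actual Rocket Mortgage or Stewart system; the help articles, escalation phrases, and `answer` function are all hypothetical.

```python
# Hypothetical sketch of an escalation-aware help bot: answers come only
# from curated user-manual content, and the user can always exit to a human.

HELP_ARTICLES = {
    "reset password": "Go to Settings > Security and choose 'Reset password'.",
    "order status": "Open the Orders tab to see live status for each policy order.",
}

ESCALATION_PHRASES = {"agent", "human", "real person", "representative"}

def answer(question: str) -> str:
    q = question.lower()
    # Always give the user a way out to a real person.
    if any(phrase in q for phrase in ESCALATION_PHRASES):
        return "Connecting you to a support agent..."
    # Answer only from the manual; hand off rather than guess.
    for topic, article in HELP_ARTICLES.items():
        if all(word in q for word in topic.split()):
            return article
    return "I couldn't find that in the help guide. Connecting you to a support agent..."
```

The key design choice, matching the point above, is that unknown questions route to a person by default instead of letting the bot improvise.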
We definitely live in a metrics-driven world. In leading a team, I really try to lead by example. Whether it’s a 1:1 with a teammate or an entire team check-in, I want the emotional intelligence part of managing to matter more than ever, especially as we become more reliant on artificial intelligence.
We have AI tools that complement the jobs we’re trying to do, so the expectation will soon be that we can do more with fewer people. However, AI won’t be able to tell you where blockers are, whether someone is having trouble understanding data, or whether they’re going through burnout. I try to remind the team of the things that AI can’t tell us, so we keep them in mind ourselves.
Further, teams thrive when they can trust each other and the people who are hearing them out. Without actually trusting each other as a team, it’s hard to trust the process itself.
It’s definitely difficult to coach from that perspective. It’s harder to find tangible examples of interactions where you can give coaching or feedback when everyone is fully remote. Unless it’s on a group call, one-on-ones will happen that you won’t hear about, from a feedback perspective, for a long time.
Even being hybrid one or two days a week, you often overhear conversations that you’re not directly a part of. Or someone can pull you aside to loop you in on something, whereas when you’re remote, you’re expected to go from one meeting to the next in a very transactional way. Obviously, there can be 1:1s that are very casual, but usually, meetings are focused on getting a specific answer or having a specific conversation.
This past week, we had a two-day onsite with the product team and some engineering leaders, and I realized that a consistent challenge is always going to be alignment on direction, scope of work, and approach to updating leadership. I can’t see a world where that goes away completely. It’s hard because this is alignment on multiple levels. I support eight product managers, and we don’t always align on what we should be doing or how we should approach something. That makes it even more difficult when other people outside of the team are asking for our feedback on our game plan.
The core challenge will always be getting everybody to march in the same direction. Sometimes, alignment from a product management perspective just means having people agree to disagree and move forward. With anything related to building teams with multiple personalities, it’s never going to be a one-and-done. You have to build that muscle over time.
It’s impossible to do it on your own. It’s funny because with AI, the sentiment is that one person can now do it all. People say that if you take a coding class and know the basics, you can use AI to build an application for yourself. But I think that really depends on how complex the project is.
I’ve only worked in industries where anything that we’ve built touches so many different things, and that’s not even thinking about what’s outside of the business that we’re building it for. The title production system that we’re building has a long list of integrations with external third-party vendors and partners. By the time we get to an agreement on how to build the system, you still have to make sure that the data flowing back and forth through the external systems is clear. There’s no chance that any one person can do all of this. The systems are so large that they usually need one product manager to own each specific domain.
It’s always been a difficult conversation to have, even before machine learning and AI came along. There will always be some teammates who feel that their work will speak for itself and that relationship building can come secondary. I find that the struggle will always be there, and I believe that it’s a narrow view to hold.
But when you build a good working relationship and it’s genuine, people will hear you ask the wrong question but still give you the right answer. It’s something you can’t do without, but again, it has to be genuine. People can read through you otherwise. AI might not be able to read through all your pleases and thank-yous, but people can see if you’re viewing things as transactional. We’re human beings, and our true selves will come out eventually.