Eugene Mandel is Senior Director of Product Management – AI at RingCentral, a provider of AI-driven cloud business communications, contact center, video, and hybrid event solutions. He has been working in technology and AI for more than a decade and co-founded multiple companies — jaxtr, MustExist, and Qualaroo.
Eugene also served as a principal data engineer at Jawbone and Lead Data Scientist, ML Products at Directly. Before his current role at RingCentral, he worked in leadership at Great Expectations, an open source project for data testing, documentation, and profiling, and Loris, a natural language AI platform.
In our conversation, Eugene talks about the importance of “show, don’t tell” with AI products and how it’s crucial to enable user visibility into AI applications. He shares how the role of the product manager is evolving with advancements in technology, as well as the skills PMs should have to prepare for this future.
The labels that are most helpful for distinguishing traditional software from new AI-based software are “deterministic” versus “probabilistic.” That’s the aspect of the difference that drives the most challenges.
To give a silly, simplified example, suppose that every time you pressed a blue button, a light came on. Now, when you press the same blue button, sometimes the light comes on and sometimes it doesn’t. Product people transitioning from managing deterministic products to probabilistic ones usually struggle with exactly this issue: the technology is not guaranteed to behave like legacy systems.
A long time ago, I worked at a company where we were introducing machine learning. And every time a classifier put out a wrong prediction, our QA wanted to file a Jira ticket. It took me some time to understand why, but, of course, QA saw that the software did something wrong, so they wanted to file a bug.
Going deeper is usually the answer. When it comes to AI and machine learning, this means digging into the data and what users want. When you’re a PM, it’s often a challenge to balance your time because it’s claimed by all kinds of things, including teams and stakeholders. I’ve always found that there are three things that I never regret spending time on: talking to users and customers, looking at data, and talking to engineers.
I’ve found that even if you’re spending a lot of time on these tasks, it’s still never enough. Looking at the data is specifically crucial because, while machine learning and AI models can be somewhat like a black box, the good ones respond to training data very well. So, to a degree, writing specs might transition into curating training data sets and running evaluations.
With probabilistic AI and ML-based software specifically, expectations about the predictability of development cycles have changed. Say you’re an experienced product manager working with a team of engineers. When you develop a feature that changes the UI here or a backend component there, you can usually predict fairly accurately how long it will take. But because AI-based development is so new, you often don’t know, and it’s hard to say that to your CEO or stakeholders.
In the past, I’d run two tracks. For every project and initiative we started working on, we thought hard about which universe it belonged in: more well-known or more experimental. If the answer was “well-known,” I could behave like a normal, traditional product manager and avoid answering “I don’t know” to every question.
However, when working on a riskier, experimental project, I’d clearly label it as a research project. The project management of it would then become different as well. Suddenly, the deliverables aren’t code; they’re experiments. If a data scientist asked, “Can we do this?” I could work with them to break the question into a sequence of smaller ones, each taking no longer than a week to answer. A week is the maximum you should spend in a commercial environment; if a week passed without an answer, we treated the hypothesis as disproved.
Then, the range of uncertainty would start to narrow down, and at some point, the research project graduates into a more traditional software development one.
Talking about it doesn’t help, that’s for sure! This is a classic “show, don’t tell” scenario. Whichever way you work with marketing on describing best practices, what the company does, and how your data is great, well, users don’t care. So, how do you gain confidence in anything around you in the world? You enable visibility. The same applies to AI-based features.
In conversational intelligence, which is the type of product I work on right now, AI outputs some type of judgment. For example, it will say that in this particular call center conversation, the agent could have done better by explaining the refund policy, and it will specifically call out seconds 15-19 of the conversation. In other words, the system can’t just deliver a judgment without showing why.
My second best practice is enabling feedback, but you have to be careful about this. Feedback is usually helpful, but there’s a pitfall: if you enable feedback and the user doesn’t believe that the feedback matters, you’ve actually dug yourself a deeper hole. It’s important to respond to feedback and make it known that the system responds to feedback.
A long time ago, I was developing an automated system that gave an initial answer in a customer service environment. Someone would type a question, and the system would respond. First, it would state that it was an automated system, so it was clear it wasn’t trying to impersonate a human. After that message, the user could indicate either “Yes, I like it” or “No, I want to speak to a human.” The “speak to a human” option was a nice technique. Users don’t like to work for you, so a bare thumbs up or thumbs down that triggers no visible action rarely gets clicked. But when giving feedback becomes part of normal usage, it’s much more trusted and better received.
A lot of this seems obvious now, but at the time, this was all new, and we were figuring out best practices that now seem like common sense.
This is where observability comes in. It’s an entire loop — a probabilistic system makes a prediction, and we internally know how it got to that prediction. The user has a good way to react to that prediction if the feedback is collected, either implicitly or explicitly. Let’s say the system generates a draft of an email, and the user sends it as-is or with very few edits. You can say, “Well, I guess they liked it, so it worked, right?”
On the other hand, if they completely rewrite the email, you can take it as negative feedback, even if they never explicitly indicated that they didn’t like the answer. The feedback is stored, and nowadays, storage is cheap. It’s important to have this type of context saved because then you can have a system that processes the feedback, both automatically and with humans in the loop. You understand why, you make changes, and you rinse and repeat.
If I have to work to get the data, if it’s difficult to collect feedback, or if I have to get approvals, the whole thing fails. Done right, this loop is a very robust system for collecting feedback: the system logs and reviews feedback, reacts, incorporates changes into the next round of training or prompting, and so on.
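The implicit-feedback signal described here, where an email sent nearly as-is counts as positive and a heavy rewrite counts as negative, can be sketched in a few lines. This is a minimal illustration, not any product’s actual implementation; the similarity measure and the 0.8 threshold are assumptions chosen for the example.

```python
from difflib import SequenceMatcher

def label_implicit_feedback(draft: str, sent: str, threshold: float = 0.8) -> str:
    """Infer feedback by comparing the AI-generated draft to what the user sent.

    Sent nearly as-is -> implicit positive; heavily rewritten -> implicit negative.
    The 0.8 threshold is an illustrative assumption to be tuned on real data.
    """
    similarity = SequenceMatcher(None, draft, sent).ratio()
    return "positive" if similarity >= threshold else "negative"

# Store the signal together with its context so it can feed the next round
# of training or prompting.
draft = "Thanks for reaching out. Your refund will arrive in 3-5 days."
sent = "Thanks for reaching out. Your refund will arrive in 3-5 business days."
record = {"draft": draft, "sent": sent,
          "feedback": label_implicit_feedback(draft, sent)}
print(record["feedback"])  # a near-verbatim send is labeled "positive"
```

In practice, the threshold and the similarity metric would be calibrated against explicit feedback where it exists, since the right cutoff depends on how much users normally edit.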
I might be controversial in saying this, but I strongly believe that the role of the product manager is changing drastically. Sometimes, people believe that you can be a kind of general manager whose skill is product management. For example, today I work for a company that makes chairs, and tomorrow I work for a company that makes cell phones. I could say it doesn’t matter because I’m a product manager, and it’s all the same. But that’s completely wrong.
Product management changes from company to company. Sometimes it’s mostly about understanding what customers want and linking the market to your engineers; in other companies, product management acts more like project management. In my opinion, the project management part is going away: it’s dying. Why? Because conveying ETAs and task status between teams is highly automatable, and it’s being automated. However, understanding what customers want, linking it to the capabilities of systems and technologies, and working with engineers will always stay.
With that said, PMs have to go deep. There is no such thing as saying, “I write the spec for how it should work, but I don’t know how it works underneath.” No. PMs and engineers are becoming much closer to the same person. Actual software engineering is becoming slightly easier with AI tools, so PMs can get much more dangerous at implementing things.
Well, PM teams tend to work in triads of product, design, and engineering. Your role is to move a project or initiative forward together with these actors, but the lines between the roles aren’t impenetrable either. Good engineers pride themselves on being product engineers: people who don’t just say, “You tell me what to write, and I’ll write it,” but who understand customers and how to make the experience better.
UX designers think about customers and sometimes encroach on product people. They know that they need to be able to touch something to understand how it works, which is where they overlap. Is it possible that those roles will completely merge? I think it depends. Nikhyl Singhal, a PM career coach, commented that there’s a complete bifurcation between big tech and small tech. It’s almost like being good at one doesn’t make you hireable for the other. I think this is really true — at some point, there will likely be a billion-dollar company created by one person.
Yes. You have to get skills from other areas, but in a lot of cases, it’s not going to be just one person. I’d rather still work in a small group of two, three, or four people where we know a lot about what each other does while still maintaining our strength.
In any skill, there are multiple levels. For engineers, getting data from a database, displaying it, and implementing CRUD actions is easy. You could say that’s not difficult, but optimizing a scalability problem or solving a problem in a distributed system becomes much, much more specialized. Not everyone can just go do that.