Roman Gun is Vice President, Product at Zeta Global, a cloud-based marketing technology company that focuses on multichannel marketing tools. Before joining Zeta Global, Roman spent several years at FEVO, a software company that makes cloud-based enterprise software for social commerce.
In our conversation, Roman discusses his approach to formulating processes around the team’s goals, capabilities, and resources, rather than the other way around. He also talks about unique challenges associated with building generative AI-based products, such as optimizing for speed, and Zeta’s approach to organizing product activities around self-contained “pods” with their own dedicated personnel, goals, and KPIs.
I am the VP of product at Zeta Global. I’m specifically the head of the intelligence layer, which is all things analytics, forecasting, modeling, GenAI, recommendation, personalization, and MLOps.
My career in product started with a piece of propaganda. I was working on a resume with a designer friend of mine a long time ago and I didn’t want a typical resume. It ended up being a propaganda poster. I did this personal branding campaign based on propaganda posters throughout the ages, and eventually, a VC who was funding a new startup caught wind of that. They were in my network and I aligned with the creative direction they wanted to take the startup in.
I had a network of design friends I was doing that with, so I brought them over to join the company. Because I was working with them, it was natural that we started working on the product design together. Then, I turned into the person who started communicating with engineering on how to implement things and what our goals were. That cascaded into communicating timelines to stakeholders. That one-man product show started developing.
I think saying that I avoid processes is a little strong. If you’ve ever had change management or a consultancy come into a company, there’s this concept of right-sizing the company. I really enjoy right-sizing a process — it’s one thing to come in and insert a process, but another thing to instill a process that makes things efficient and works for the team.
I’m a little wary of people who come in and start a process before they understand the problem they’re trying to solve or who they’re trying to solve it with. I never start with a process; I start with what I’m trying to do and who I’m trying to do it with. Based on that, we can figure out what our process looks like.
When I say I avoid processes, I mean that I avoid being dogmatic or starting with a process. To me, that stands in the way of actually solving customer pain points and trying to get the outcome you want.
Right now, I’m responsible for five different pods. Pods are self-contained ecosystems that have their own PM, set of engineers, designers, KPIs, and mission statement. So you have to make it work for the individuals that are solving the problems at hand.
For instance, for our generative AI pod, the technology was so new that we just wanted to jump in. The point wasn’t the process; the point was for all of us to huddle together and try to solve problems in real-time using the technology that’s there and figuring out an optimal user experience. For us, the process was to meet every day to knock these things out and see what we have today, what needs tweaking, what can move into production, and what we can communicate to stakeholders.
We were one of the first companies that actually had an in-platform, GenAI, agent-based ecosystem that users could use. Because we moved so quickly and were able to rapidly prototype MVPs, we were able to continuously show value to stakeholders at the top. We had to move with very serious agility, whereas for a different pod, like forecasting, the process might be slower. They have a lot of deep technical work that needs to align with other services, product lines, and layers.
Our product is called ZOE, or Zeta Opportunity Engine. We essentially made an ecosystem that creates agents that have specific jobs to be done. Say we have an agent that is conditioned to be really good at building reports, building campaigns, or answering knowledge base questions. We’re taking our ecosystem and breaking it down into digital SMEs.
These digital SMEs are really good at, say, analytics. And they have access to our analytics API. Then, you’re training that agent and telling it that it’s a world-class SQL analyst who works for X type of company. You show it what a great report looks like. It has access to API endpoints, and you can feed it common types of questions that people ask. Then, it can go off to the races and start generating answers. This ecosystem is plugged into our platform, so it’s accessible from anywhere.
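To make the digital SME idea concrete, here is a minimal sketch of what conditioning an agent like this could look like: a role-specific system prompt, a few example exchanges showing what a great answer looks like, and a list of API endpoints the agent is allowed to call. The class, field names, and endpoints below are hypothetical illustrations, not Zeta’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalSME:
    """A role-specific agent: conditioned prompt, few-shot examples, allowed tools."""
    name: str
    system_prompt: str
    example_exchanges: list = field(default_factory=list)   # what a great answer looks like
    allowed_endpoints: list = field(default_factory=list)   # APIs this agent may call

# Hypothetical analytics SME, conditioned as a SQL analyst with access to an analytics API
analytics_sme = DigitalSME(
    name="analytics",
    system_prompt=(
        "You are a world-class SQL analyst working for an enterprise marketing "
        "platform. Answer reporting questions concisely and call the analytics "
        "API instead of guessing at numbers."
    ),
    example_exchanges=[
        {"user": "How did campaign X perform last week?",
         "assistant": "Open rate: 24.1%, CTR: 3.2%, conversions: 412."},
    ],
    allowed_endpoints=["/analytics/reports", "/analytics/query"],
)

def build_messages(sme: DigitalSME, question: str) -> list:
    """Assemble the pre-prompted message list sent to whatever model backs the agent."""
    messages = [{"role": "system", "content": sme.system_prompt}]
    for ex in sme.example_exchanges:
        messages.append({"role": "user", "content": ex["user"]})
        messages.append({"role": "assistant", "content": ex["assistant"]})
    messages.append({"role": "user", "content": question})
    return messages
```

In a setup like this, `build_messages` would feed whichever model backs the agent, and the allowed endpoints would be enforced by the surrounding tool-calling layer.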
At its core, an LLM is a chatbot, and sometimes you don’t want things to be chatty. For instance, in a reporting use case, you just want to know a percentage. There are some challenges around doing enough pre-prompting to make responses more succinct when you need that and more verbose when you need that.
For knowledge base questions, for example, you don’t need an ion cannon — a peashooter is good enough and it’ll get you the results way quicker. In this instance, we used a home-grown, fine-tuned model, not an LLM. That’s really the challenge of it — optimizing for speed.
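As an illustration of that speed tradeoff, a simple router can send narrow knowledge base questions to a small, fast model and reserve the heavier LLM for open-ended work. The model names and the `classify()` heuristic below are placeholders, not Zeta’s production logic.

```python
# Placeholder router: small fine-tuned model for knowledge base lookups,
# full LLM only when the request genuinely needs it. All names are illustrative.

def classify(request: str) -> str:
    """Crude stand-in for intent detection (in practice, a trained classifier)."""
    kb_markers = ("how do i", "what is", "where can i find")
    return "knowledge_base" if request.lower().startswith(kb_markers) else "general"

def call_model(model_name: str, prompt: str, style: str) -> str:
    """Stub standing in for whichever model-serving API backs each agent."""
    prefix = "Answer in one short sentence. " if style == "succinct" else ""
    return f"[{model_name}] {prefix}{prompt}"

def route(request: str) -> str:
    if classify(request) == "knowledge_base":
        # Small home-grown model: much lower latency, good enough for lookups
        return call_model("kb-small-finetuned", request, style="succinct")
    # Heavier LLM for open-ended work like report building or campaign drafting
    return call_model("general-llm", request, style="verbose")
```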
At Zeta, we have three active layers and one invisible layer. We have the experiences layer, which is most of the things around campaign syndication, marketing automation, and those types of experiences. We have the data layer, which is about ingesting, normalizing, standardizing data, and data governance. And then we have the intelligence layer, which is the layer I’ve been talking about.
The five pods are analytics, forecasting, GenAI, MLOps, and opportunity insights. The forecasting pod is similar to analytics, but its job looks to the future, whereas analytics looks at history. GenAI, as I discussed with ZOE, also does things like standard subject line generation. MLOps is how we have a centralized intelligence store — it’s a feature and model store. Finally, we have the opportunity insights pod, which is really about working with our data cloud team and distributing the great insights and third-party data that we have.
In terms of each pod, the largest one we have has 15 people, and I think the smallest one we have is around seven or eight. It’s a combination of engineers, designers, and PMs, and then they’re loosely coupled to a product marketer as well.
Part of it is starting from a workback plan: defining what we want ZOE to be able to achieve overall, because otherwise you might find yourself in a position where you’ve architected yourself into a corner. We want to be broad and ambitious about defining where we want to go, and then have a workback plan with milestones. One of the things we’re doing is creating an app plugin ecosystem so that other teams in the company can start building into ZOE.
We use internal forcing functions and external forcing functions as milestones, and then figure out the value we can add during that time. As we started figuring that out and mapping it to a job to be done, we evaluated which things are actually critical for getting the job done and which are advanced features that only a fraction of users use and could be scrapped.
Then, as we got closer to milestone dates, we’d put on what I call our Rick Rubin hats. He’s a music producer and a big part of what he does is not add, but reduce. He gets to the core and the truth of things. You have to put your reductionist hat on and think, “At this point, how do I make this achieve the critical job to be done, even if it takes away some bells and whistles?” Then, when you hit that milestone, you can sneak some bells and whistles in.
We have standardized deployment and GA dates across the company. Within that, it’s a workback. We’re going to have certain pods where the goal is to have something they can ship every week, even if it’s not usable for the user. They just know that the services are complete. We have other pods that are a little more comfortable deploying things only when there’s a whole end-to-end experience. There’s some variation between all those pods; everything falls somewhere in that range.
Within that, you have your frontend tasks, backend tasks, QA sign-off on all of these things, design sign-off, and product sign-off. It has to go through those trenches. For some pods, it’s a little bit more formalized — they have a sit-down and go/no-go for each specific epic. For other pods, there’s a little bit more leniency. Then there are other pods where the PM and the EM are the ones that ultimately decide whether that is going to go out or not. It all has to meet basic QA standards and things of that nature, but the go/no-go call on that is distributed based on how those pods feel comfortable operating.
It’s about being able to map to completing something, right? Are you making sure that that end user is getting exactly what they wanted? Because an end user doesn’t care about your process. One of the reasons we moved toward this pod structure and made things horizontal is so we can start with what we are trying to achieve. What are we doing and why are we doing it? The how comes last.
The engineers, designers, and product people need to understand why, because if you don’t understand why, it’s hard for everything else to align. We start with understanding why we’re doing it and what exactly we’re doing. Then it’s the how, which creates the when. Based on that, you can create something really big and shiny, something small, or something that’s probably the reality of the situation — something in the middle.
For us, there are at least two different customers. From an enterprise-grade platform perspective, you have to think about the person who’s paying for the service. They have certain processes that, whether right or wrong, we need to be able to support. They have legacy systems and we have to support their process as a part of the journey. A lot of that is really diving into the weeds, untangling data, and figuring out why things happened.
The other user is the actual end user who has to experience what our customer is trying to do. How we try to bridge this gap is by showing that these are the results of doing things the current way, and here, based on our forecasting, our intelligence, our competitive landscaping, etc., is what the results could be. Let’s give something new a try.
In 2024, I think we need to think about a few things. One is what happens when this tool that seems mythical, like generative AI, actually becomes distributed. It becomes just the core part of your technology stack. What’s going to be great is how you map it to a job to be done, and how you utilize all the various tools that exist out there. I think it’s going to be more a case of stringing things together than having an individual capability. That’s one of the things that people don’t talk about enough with automation — how you can use it to walk people up the ladder.
Beyond that, in 2024, I think everyone should be thinking with a “what if” mindset. When I walk around nowadays, I think, “What if instead of all these various telephone poles, we had distributed computers on these poles?” I’m always thinking about what the future looks like. Maybe not the 2024 future but a 2034 future. I think the art of the possible is really important to think about.