Anjali Gurnani is a strategy and operations executive with more than 15 years of experience in tech. She’s worked for two hyper-growth unicorns: ActiveCampaign, a Cloud 100 company, and Uptake, one of the first AI/ML ops platforms for predictive analytics. In both of these roles, Anjali mastered the delicate arts of finding and keeping product-market fit, bridging the voice of the customer with the product vision, and setting clear goals for product and engineering teams.
In our conversation, Anjali discusses the shift in emphasis on R&D as companies move away from growing at all costs toward growing efficiently. She talks about the challenges of adjusting R&D investments, such as balancing the company portfolio for resource allocation and setting expectations around the time it takes to realize business value. Anjali also shares her view on important KPIs and how R&D ROI is becoming a valuable metric for organizations.
I’ll always be an entrepreneur at heart. Once you’ve gone through that experience, I think you’re wired to constantly look for ways to bring innovation and creativity to problem-solving and be very customer-focused.
I’ve played roles where I was a customer myself and then worked in the industry that built those products afterward. Being able to switch between those roles as not only a customer but also as a problem solver is probably the most fun part of being in tech.
That’s a really important issue that I think a lot of SaaS companies are looking closely at right now. Especially as companies are moving away from “growth at all costs” to more efficient growth, they’re looking very closely at what’s happening on the R&D side.
In product companies, all value is created from R&D. People are really starting to broaden their understanding of how all of these pieces work together beyond just product development — including cybersecurity, regulatory compliance, tech support, and IT operations.
There are a few things that make adjusting R&D investments challenging. One is that, when you're looking at R&D investment, mid-cycle pivots can take significant projects far off track. Making a change to what you've allocated within a quarter may not show impact until several quarters later.
The other piece of this is balancing the portfolio approach to resource allocation and investment allocation. I've found that there are pretty much four primary areas that all work can be bucketed into.
Yeah, saving to reinvest is the principle that most companies really need to embody more. If some of those investments go toward a target customer segment that isn't your ICP, that creates a missed opportunity to innovate for the segment where you have actual product-market fit.
Innovation is hard to accelerate beyond an MVP or POC; it takes sustained investment. A year later, you may see market signals that you missed the right opportunity with the right target segment, and catching up on that missed innovation can take you another year. Whoever made that decision the right way obviously gets the competitive advantage.
In the case of SaaS, as you get to a stable customer base, the velocity of feature requests increases exponentially. Speed to market is a critical piece, but that has big dependencies on the underlying architecture of the product.
You may see an opportunity (let's use AI as an example) where customers are ready for something more personalized. The general trend in software is toward delivering more personalized recommendations on what to do in the product or how to apply it to a specific domain.
There is a lot that can be done to innovate on UI/UX to address customer friction around usability, specifically to make features more straightforward and more relevant to a user's goal. But you can't do that if your data and infrastructure are not clean. If there is no data integrity, or no tooling in place to ingest and process data at speed, that translates into big performance issues or incorrect recommendations coming from the AI.
Most companies that haven’t been making continuous investments toward platform modernization are going to find it very difficult to bridge those gaps and quickly put out features that take advantage of AI capabilities. They’re going to have to address both sides of it: the frontend and the backend. And the backend is dependent on the quality of the data itself.
It depends on how agile the organization is. Especially for growth companies, speed and agility are two of the most important principles to operate against. It’s important not to be too prescriptive with the methodology because one size doesn’t fit all. I tend to favor more autonomous decisions around the methodology within organizations.
But, to counter that, what's really important to pay attention to is less the methodology itself than risk management. Are there clear milestones that are aligned with business timelines? What is the team doing to surface blockers? Do they have a safe space for surfacing blockers, with collaboration around how to address them?
Some of these things are within a team’s control, and some are not. Leaders can help teams evaluate this through well-organized retros. The other piece is how good they are at forecasting and how predictable and repeatable their forecasting is. These are things that agile methodologies can’t always address.
It’s data-driven decision-making, but there is such a thing as analysis paralysis, especially because there are so many signals to process. I think it’s really important to establish a culture of data-driven decision-making and try to stick to one true north metric.
A lot of products are not instrumented to provide rich usage signals. Without that, it can become really difficult to form a hypothesis around what to do next.
Financial discipline is also super important — not enough product teams think about the cost side of things. I think it’s really crucial to at least understand where investments have been effective and to collaborate across not just engineering, but other business teams to see what works and what doesn’t. It’s important to understand this from front-line teams (sales, support, and success). There are people enablement costs to sell, service, and support a product that go beyond engineering.
And outcome measurement. Outcome measurement is kind of a new muscle for a lot of product teams, primarily because many things can’t be attributed directly to a specific feature or release. That’s why it’s very important to establish what needle we’re trying to move and how we’ll know if it’s moving before something’s even committed to the roadmap. And if we don’t establish that, what will we be looking at to develop a stronger point of view around it?
It's very different from product to product. But oftentimes there are usage metrics that depend on certain user behaviors. You might know a top-level usage metric, like how many times a page was visited in the product and how long it took users to complete a task. Those are examples where, hopefully, people have instrumented the product to be able to measure that, but you may not know things like how many times users completed the task successfully.
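To make that concrete, here's a minimal sketch of that kind of task-level instrumentation in TypeScript. The `analytics` client and the event names are hypothetical stand-ins for whatever analytics tool you actually use:

```typescript
// Hypothetical stand-in for a real analytics SDK's track() call.
type EventProps = Record<string, string | number | boolean>;

const analytics = {
  track(event: string, props: EventProps): void {
    console.log(event, props); // a real client would send this to your analytics backend
  },
};

// Instrument a task end to end, so you can measure not just that a page
// was visited, but how long the task took and whether it finished successfully.
function startTask(taskName: string) {
  const startedAt = Date.now();
  analytics.track("task_started", { task: taskName });

  return {
    complete(success: boolean): void {
      analytics.track("task_completed", {
        task: taskName,
        success, // lets you count successful completions, not just attempts
        durationMs: Date.now() - startedAt,
      });
    },
  };
}

// Usage: wrap the user flow you care about.
const task = startTask("create_campaign");
// ...user works through the flow...
task.complete(true);
```

Pairing a start event with a completion event that carries a success flag is what lets you answer the "how many times did they complete it successfully" question that page-visit counts alone can't.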
Some should be universal no matter what company you're at, especially for product teams: net promoter score (NPS), ARR growth, average revenue per account (ARPA), churn, number of customer-reported issues, and revenue at risk. And this is where internal dashboards are really important, so you can follow how those things are trending.
The newest one that I think most companies are starting to implement is R&D ROI: looking at last year's R&D investment relative to the revenue it generated, and dividing the two to know what your ROI on R&D was.
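As a rough sketch of that calculation, with hypothetical figures (companies differ on whether the numerator is total revenue, new ARR, or gross margin):

```typescript
// Hypothetical figures for illustration only.
const revenueAttributedToRnD = 30_000_000; // e.g., new ARR generated over the year
const rndInvestment = 12_000_000; // last year's R&D spend

// Dividing revenue by R&D investment gives a simple ROI multiple.
const rndRoi = revenueAttributedToRnD / rndInvestment;
console.log(`R&D ROI: ${rndRoi.toFixed(1)}x`); // "R&D ROI: 2.5x"
```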
The last piece that I think product people should at least be aware of is developer productivity KPIs, which are primarily based on DORA metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service. For product leaders, what matters most is a general sense of team velocity and whether deliveries are on time or delayed.
Less so KPIs and more conversation and engagement with customers. I look at it in four primary ways.
The first is having a customer advisory board. A lot of work goes into the prep for customer advisory boards, but not enough goes into extracting insights from the meetings and communicating them to the rest of the organization. Again, it's about closing the feedback loop. Each product leader is directly responsible for doing that.
The second is red accounts and support escalations. Customer advisory boards are great for talking about future potential; they help you anticipate what customers can mature into. Red accounts and support escalations, on the other hand, tell you what is not meeting expectations with current features, and they're about improving your reactivity to issues that are not going well. That is a very important qualitative feedback channel.
Thirdly, community discussion is extremely important. People are not necessarily talking to you directly, but your users typically are talking somewhere. Say you're selling into the enterprise, but your customer is the CMO. That CMO is in some community forum, and there's probably a lot being said there about needs, related problems, and, in many cases, which products are good and which are not working so well.
And then the last is partner engagement. Partners can provide a lot of qualitative information about what they see in the market and what they're hearing from their own customers, and there are incentives you can create to make sure that feedback loop stays strong.
I would say customer advisory boards are still a nascent concept. The best customer advisory boards should be changing pretty often.
I've been a customer of many products, and as a customer you expect that, by being part of the board, you'll have more influence over some of your own needs. It's really important to separate that expectation from participation in the customer advisory board.
I would say it all starts with the type of roadmap you build. I have found that the most effective type is a goal-oriented roadmap complemented by a very clear release schedule. Oftentimes, people will try to smash the two together, and then the roadmap looks more like a release schedule when it really should be more directional and thematic, and a lot more outcome-oriented.
Another important thing is discipline and rigor in the bets process. A lot of companies try new things with their bets process every year simply because a new year is about to start. That's just folks being reactive to whatever the performance might have been in the current year.
I think it’s really important to be thinking about what the right bets process is for the business throughout the year and continuously improving on it. Limit innovation bets to no more than 30 percent and have very clear gates for how to incrementally increase investment toward those bets.
The last piece is being very clear that every bet has cross-functional dependencies. That's especially true of product bets, which have both technical dependencies and go-to-market readiness dependencies.
Sales and success teams have a view on priority customer needs, but they may not realize that those needs require a major change to existing sales processes or new expertise that they don't have. It's important for product leaders to avoid releasing those kinds of new capabilities prematurely, especially when internal enablement needs more time to complete.