Mike Korenugin is Director of Product at SE Ranking, an SEO and analytics platform. He started his career as a product manager at Astorex before transitioning to IM Action, an online marketing agency, where he was later promoted to Chief Operating Officer. Prior to joining SE Ranking, Mike held product roles at Jobjet and PandaDoc.
In our conversation, Mike talks about the importance of not over-relying on judgment and instead adopting a data-informed approach — using data alongside prior experience, competitive analysis, strategy, and customer problems — to make decisions. He also discusses how he promotes this data-informed culture across teams throughout the organization.
Here at SE Ranking, we preach being data-informed, which means we root our decisions in data as much as possible, but we’re ready to embrace ambiguity and follow a vision when the data isn’t there. It’s the way we work. I follow a straightforward approach in my daily routine. First, I craft a strategy that outlines our direction and clearly states what we do and don’t do. The goal here is to make our most important output metrics painfully evident to the team.
Second, we spend time with our product and data teams building our metric tree, a.k.a. our driver tree. How do we go from an output metric to input metrics that we can actually influence and measure in the moment?
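To make the idea concrete, a driver tree can be sketched as a chain of input metrics that multiply up into the output metric. This is a rough illustration only — the metric names and numbers below are invented, not SE Ranking’s actual tree:

```python
# Hypothetical driver tree: decompose an output metric (new MRR) into
# input metrics the team can influence directly. All numbers are made up.

def new_mrr(visitors, signup_rate, activation_rate, paid_conversion, arpu):
    """New MRR = visitors x sign-up rate x activation rate x paid conversion x ARPU."""
    return visitors * signup_rate * activation_rate * paid_conversion * arpu

baseline = new_mrr(50_000, 0.08, 0.40, 0.05, 49.0)
# Moving a single input metric shows its leverage on the output:
improved = new_mrr(50_000, 0.08, 0.45, 0.05, 49.0)  # +5 pp activation
lift = improved / baseline - 1
print(f"baseline new MRR: ${baseline:,.0f}, lift from activation: {lift:.1%}")
```

The value of the exercise is exactly this decomposition: a team can’t move “new MRR” directly, but it can move activation, and the tree shows how far that change propagates.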
Third, I ask my team to focus on select outputs. Output metrics are generally some of our revenue components, like new monthly recurring revenue (MRR) from an add-on product or expansion revenue (e.g., customers switching from lower to higher subscription plan tiers). Then we pick a feature set as a target, where usage correlates with improvements in retention and monthly active users (MAU). That’s how we pick the tools to work on.
Fourth, we give the team the necessary analytical tools to support them, both from the behavioral data side and the customer insights side. Lastly, I give them time to do homework and dive deeper into customer problems. I want them to lean into lower-level data signals so they feel empowered to come back with solutions and priorities. At that point, we can correct the strategy as needed and co-create a roadmap together.
I am constantly working on extending this culture beyond just product and engineering. I’m fortunate to collaborate with many teams across the company, like product marketing, demand gen, data, sales, customer success, support, and more. They all play a key role in embracing this mindset because more often than not, changes in output metrics don’t hinge on product alone.
For example, with product marketing and success, we essentially co-plan improvements in expansion and reactivation. When we launched a new add-on for social media marketing last year, we built funnels and materials and then evaluated how our automated messaging affected MAU compared to human-enabled outreach.
Typically, our OKRs are structured so that we share input metrics. Some are shared with the product marketing team, some are shared with the success team, and so on.
For lower-level self-serve analysis, we use a tool for behavioral data analysis. Within it, we have all kinds of information, such as firmographics, attribution, back office data, etc. These dashboards are available to all the teams collaborating on a given metric.
We collaborate closely with demand gen and sales, especially around new product launches, and work with these teams to connect our product hypothesis with likely outbound audiences. We build funnels from total audience to engagement rates, sign-up rates, and activation win rates.
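A funnel like this is just a chain of stage-to-stage conversion rates. Here is a minimal sketch with invented stage names and counts (not our real numbers):

```python
# Hypothetical outbound funnel: stage counts are invented for illustration.
stages = [
    ("total audience", 120_000),
    ("engaged",         18_000),
    ("signed up",        2_400),
    ("activated",          720),
]

# Step conversion: each stage relative to the previous one.
for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.1%}")

# Overall activation win rate relative to the total audience.
overall = stages[-1][1] / stages[0][1]
print(f"overall: {overall:.2%}")
```

Laying the funnel out this way makes it obvious which step is the bottleneck, which is where the product hypothesis and the outbound audience choice get tested.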
At SE Ranking, we approach new products almost like a mini-business. We have a cross-functional team with an account executive, product marketer, and product manager who all work closely on generating the required data and supporting materials. Demand gen runs experiments and tests different clusters of interests in both paid and organic channels, and then reviews each to optimize conversion rates. If there’s enough opportunity to scale one of these channels, we work on the activation conversion for a smaller subset within it.
Right now, every month, we receive around 4,000 pieces of feedback from our customers. These signals are also spread thin across quite a few different channels, such as the lost-deal reasons report in HubSpot or Gong call summaries from the sales and success teams.
There are also in-app feedback form responses as well as surveys we run proactively. This feedback is further complicated by the fact that SE Ranking is a multilingual platform, so we receive feedback in a number of languages (English, German, Spanish, French, Dutch, Japanese, Portuguese, and more). Synthesizing all of this used to be impossible. Now, we push all the feedback into a single connected system and enrich it with customer and behavioral cohort data. This enables us to easily understand what different cohorts are telling us across channels.
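The core of that pipeline — pooling feedback from every channel into one store and then slicing it by cohort rather than by channel — can be sketched in a few lines. The channel names, cohorts, and records below are invented placeholders:

```python
# Sketch of merging feedback from several channels and grouping it by
# behavioral cohort. All records here are hypothetical examples.
from collections import Counter

feedback = [
    {"channel": "hubspot_lost_deals", "cohort": "smb_trial",   "topic": "pricing"},
    {"channel": "gong_calls",         "cohort": "agency_paid", "topic": "reporting"},
    {"channel": "in_app_form",        "cohort": "smb_trial",   "topic": "pricing"},
    {"channel": "survey",             "cohort": "agency_paid", "topic": "reporting"},
    {"channel": "in_app_form",        "cohort": "smb_trial",   "topic": "onboarding"},
]

# Count topics per cohort, regardless of which channel they came from.
by_cohort = {}
for item in feedback:
    by_cohort.setdefault(item["cohort"], Counter())[item["topic"]] += 1

for cohort, topics in by_cohort.items():
    print(cohort, topics.most_common(1))
```

Grouping by cohort instead of channel is what lets a recurring theme surface even when each individual channel only sees it once or twice.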
Right now, this is our primary tool for success and supporting the product. When we discuss reasons for churn, for example, we’re now able to eliminate any kind of recency bias. We can look at trends and understand if there are patterns or one-off anomalies. We’ve been very happy with the results.
Any data point on its own is meaningless without judgment from the person who looks at it. I strongly encourage my team to bring their experience, understanding of competitors, our strategy, and knowledge of our customers’ problems to the data.
As an abstract example, if someone comes to you and says, “Hey, our product activation is 3 percent,” does that bring any insight on its own? No. But, let’s say you know it’s a freemium product, you’ve spoken with a few customers, and you understand how much space there is for improvement compared to industry best practices. In that case, you can tell if this number is good or bad in context and see opportunities to change it.
To me, the balance between data and judgment comes fairly naturally. Behavioral data is often a critical input, but it’s only one of many we rely on. There are a lot of strategic bets that we invest in anyway, even if current usage data says otherwise. For example, in SEO marketing technology, we’re going to continue preparing solutions for Google AI overviews and LLMs, even though referral traffic from LLMs currently makes up less than 1 percent of internet traffic. But if we don’t invest there, we might die out in three years — there’s a strong rationale for these higher-level decisions.
Sure. On a tactical level, it happens all the time. Right now, we’re testing some changes to our product’s navigation, including the menu layout and how to advance to the next steps. We tested a few solutions, but none worked the way we thought they would. We ultimately picked the one that we thought would give us more opportunities to add other products in the future. That was the rationale we used in an instance where we didn’t have enough data.
It’s a painful truth, but startups competing in established spaces like ours often need to rebalance investments across areas. It can turn into complete chaos for product teams when they need to switch focus and can’t realistically own results.
My dream approach has two parts. First, we cut out parts of the product that we don’t believe add much value and tie the remaining ownership areas to cross-functional teams. Second, we allow these teams to operate the backlog however they see fit within the scope of the overall strategy. I’d only interfere for three reasons: to merge teams’ efforts, to point out inconsistencies with strategy, or to make sure the team is using data and customer understanding to drive decisions.
Reality does not always allow for this, though. Right now, we’re in a transitional state at SE Ranking. I promised the team that focus areas are fixed for at least six months. At the start of each year, we co-create a high-level year-long roadmap, set the metrics and related areas of investment, and assign the owner. This gives the team a cushion — time to prepare and deeply research the customer and their problems ahead of time. That way, even if we have to switch areas of focus, they already know when that is most likely going to happen.
Oftentimes, with this level of ownership where people are responsible for output metrics, different people may be working on the same things at the same time. Or they may be working on something independently that they should have been working on together from the beginning. For us, the solution is very simple. To provide ultimate transparency, the entire team is part of the strategy creation.
We follow a W process — we gather as an executive team once a year, discuss the overall topics that we would like to invest in, and then the team takes that as an input and enriches it with their understanding of what areas to pursue or not. Finally, we discuss it again as an executive team.
This helps create a good level of understanding for everybody regarding where we’re headed and why we’re doing what we’re doing. Everybody feels like they are part of the process and like they truly contributed to what we’re going to work on throughout the year. We do quarterly planning on top of that, as well as bi-weekly check-ins with the team to discuss our OKRs and our progress toward them.
The general idea is consistent. Whenever possible, we want to use quantitative data and complement it with qualitative data, logic, and our narrative. Typically, smaller companies have fewer customers to work with, so they often have less quantitative data than they’d like. Instead of behavioral information, they’ll need to rely on qualitative data from lost deals, customer interviews, and market research.
Behavioral data becomes somewhat of a true input once you have at least 1,000 monthly active users, and past 5,000, you can start running A/B tests. That’s how I think about it. When you’re small, you rely on qualitative data, and as you grow, behavioral data becomes more important.
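A quick back-of-the-envelope calculation shows why a few thousand active users is roughly the floor for A/B testing. The sketch below uses a standard rule-of-thumb sample-size formula for comparing two proportions (normal approximation, 95 percent confidence, 80 percent power); the baseline and lift figures are illustrative, not from the interview:

```python
import math

# Rule-of-thumb sample size per variant for a two-proportion A/B test
# (normal approximation, two-sided 5% alpha, 80% power). Baseline and
# lift values below are invented for illustration.
def sample_size_per_variant(p_base, relative_lift):
    z_alpha, z_beta = 1.96, 0.84
    p2 = p_base * (1 + relative_lift)
    p_bar = (p_base + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p_base) ** 2)

# Detecting a 20% relative lift on a 5% baseline conversion takes
# thousands of users per variant:
print(sample_size_per_variant(0.05, 0.20))
```

With a low baseline rate and a modest lift, each variant needs several thousand users, which is why behavioral data only becomes a reliable input at meaningful MAU scale.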
In my mind, the success of a data-informed approach depends on how the executives of the company make decisions. If they tend to rely on their vision and gut feeling, being data-driven or data-informed is not going to work. There are a lot of companies that are vision-driven and still grow substantially, but that growth becomes harder to sustain as the company scales because its funnel loses the homogeneity of an early product. As a company scales, the types of customers or use cases become more varied and nuanced.
I can share an example. Eight years ago, I worked at Jobjet, an SMB-focused, B2B SaaS company. Decisions based on data are only as good as the quality of data, and back then, I had the freedom to implement data tracking as I saw fit. Unfortunately, I didn’t have the skills to turn it into a success. I was collecting data on activation and retention and built some funnels, but I simply couldn’t connect the data inputs with the customer understanding and the market. It failed.
Another example is when I worked at PandaDoc. We were constantly going back and forth: we would set a North Star and build a strategy from it into a metric tree, and then a quarter later, we would say, “You know what? There’s still vision. Let’s just switch back to it and make decisions based on how we see the market evolving.” The company is successful and found its way to rely on data, but the back and forth did create a fair amount of turbulence for the team.
These examples illustrate why supplementing data with judgment is so important — you can’t just rely on visions or gut feelings. I’ve been at SE Ranking for nine months, and it’s been great to see our approach to being data-informed working so well. Early on, we set some ambitious goals, and this initially led to some frustration because the team simply lacked the tools and the ability to hit them. But then we course-corrected, adapted, introduced new instruments, and now we’re seeing incredible results.