Jim Naylor is the Global Director of Product – Xvantage at Ingram Micro. He began his career in software engineering at AOL, working on the localization and web browsing teams, before leading software configuration management for flagship products including AOL, AIM, Netscape, and Winamp. At Edmunds, Jim became the company’s first agile-trained project manager and later served as Director of Product for Used Vehicles and APIs. At Carvana, he built the mobile shopping experience that allowed customers to purchase a used car entirely from their phone in under five minutes. Jim went on to lead product, UX, and product operations at Allergan Aesthetics (an AbbVie company), where he helped create the Allē loyalty program.
In our conversation, Jim talks about how he views documentation as a company’s IP, and how his teams should use it as a source of truth for the product and the value it delivers. He shares the importance of good data hygiene and classification methods, as well as how good documentation creates and promotes accountability throughout the business.
Early in my career, I was in an engineering role where I was responsible for overseeing AOL’s software intellectual property. This was before tools like GitHub existed, so my team managed the repos and infrastructure that enabled developers to version, build, and ship software reliably.
Every product build — whether it was AOL, AIM, Netscape, or Winamp — flowed through my team. I served as the lead build engineer for AOL 9.0, and through that work, I developed a deep appreciation for the discipline of traceability. Documentation wasn’t just a formality — it was essential to knowing exactly how our software was assembled, ensuring we could reproduce any version of any product at any time. That mindset is something I try to instill in my teams today: your documentation is your source of truth, your continuity plan, and your strategic advantage.
Fast forward to now, I coach my product teams to treat their documentation the same way engineers treat their code — as intellectual property for the business. It’s the source of truth for how the product is supposed to work and deliver value, and I personally believe, just like code, that product documentation should be versioned and have some form of lineage. It should mature over time and adjust to the needs of your users and the business.
I’ve gone into several companies where people just create PRDs from scratch. As your product evolves, you should be amending and building on the existing documentation. A lot of the tooling that we have available to us, such as Confluence, provides that versioning and lineage, and that continuity ensures a shared understanding, reduces ambiguity, and helps with accountability.
I try to position writing as a leadership behavior. Clear documentation is how your ideas scale with you and the company over time, and I embrace it as strategic leverage. My boss at AOL once told me that the faster you can automate yourself out of your role, the faster you’ll find yourself in a better job.
I have found that building a reputation as someone who communicates instructions clearly leads to greater trust, more autonomy, and, ultimately, bigger opportunities, because you become known by your organization as someone who doesn’t just ship features, but builds systems that other people can grow upon.
I think the best way is to read and understand what already exists. It helps you see where to inject change instead of starting from scratch and potentially rewriting 80 percent of what’s already there. This helps engineers, too, because they’re deeply invested in the way they work. If you’ve got a bunch of disconnected documents, the team can easily become disengaged from the product.
With the tooling in this space, I can hold my team accountable. After I update something, I can see who has read it, and if my lead engineer or lead designer hasn’t yet, I can nudge them to check it out. I find that really helpful.
Most PRDs help you understand the why, the what, and the how in a pseudocode fashion. I’m a big advocate of writing acceptance criteria in the Gherkin format, where you fill in the mad lib: “Given [this], when [that happens], then [this outcome].” Sometimes, I see PMs treat that as a box to check. They put one broad criterion out there but miss an incredible scaling opportunity, creating more work for their team and QA while making their intentions less clear.
The tactic here is that when you put all of these into a table, you’re effectively writing the pseudocode for your engineering team. Having that simplified logic makes it easy for your engineers to say, “Cool, I know what kind of conditional statements or constructs I want to use to code those outcomes.” You’ve effectively written the test cases for QA as well. Even from an interaction design perspective, these acceptance criteria tables help your design team call out opportunities to make things simpler. By doing that, you’ve actually reduced overhead expenses across all these disciplines.
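As a minimal sketch of that tactic, the table of given/when/then rows can be represented as structured data that engineers read as pseudocode and QA reads as test cases. The scenario wording below is hypothetical, not from the interview:

```python
# Hypothetical acceptance criteria expressed as structured given/when/then
# rows -- one table that serves engineering, QA, and design at once.
ACCEPTANCE_CRITERIA = [
    {
        "given": "a signed-in user with an empty cart",
        "when": "they add an in-stock item",
        "then": "the cart badge shows a count of 1",
    },
    {
        "given": "a signed-in user with an empty cart",
        "when": "they add an out-of-stock item",
        "then": "an 'out of stock' message is shown and the cart stays empty",
    },
]

def to_gherkin(criterion: dict) -> str:
    """Render one table row in the Given/When/Then format."""
    return (
        f"Given {criterion['given']}\n"
        f"When {criterion['when']}\n"
        f"Then {criterion['then']}"
    )

for c in ACCEPTANCE_CRITERIA:
    print(to_gherkin(c))
    print()
```

Each row maps cleanly onto a conditional for engineering and a test case for QA, which is the scaling effect described above.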
It feels like more work, but the quality is much higher coming out the other side, with way less churn and fewer meetings in the middle. I’ve had to work with a lot of distributed teams in my current role, and I can see cycle times reduce based on how well my team is getting those acceptance criteria grids teed up.
The intention is to make it readable, easily modifiable, and figure out how to structure all of the logic. Being able to see the delta from the previous version of that document makes it so much easier for new team members to digest everything.
When you have two teams with the same goal in mind that are operating separately from one another, it’s kind of like owning a car but outsourcing your dashboard. If you can’t trust or understand what the speedometer is actually calculating against, you’re frankly driving blind. I try to coach my teams into understanding, if not owning, the schema.
Getting tracking requirements into your acceptance criteria is one of the first things I like to coach. It’s not just about the features and the functionality — it’s also about how we’re supposed to grab the data. From there, we can indicate that in the given-when-then format. In that, you can pull in other teams, and everyone rallies around the same document.
It’s also important to capture the data in a way that makes sense. In some cases, it’s grabbing something from the frontend, and in others, it’s the backend. In a previous role, I saw my engineering team tag events for different PMs in multiple ways. That led to chaos at the dashboard level because we couldn’t confirm consistent approaches to how things were being measured.
The product team needs to understand, for example, when somebody logs in. The frontend method only tracks user intent, whereas the backend method logs that it actually happened. Even if you can’t technically own the schema, you need to own consistent tactics for capturing the data. Otherwise, you could be setting your team up to capture something incorrectly, which can hurt your product and the visibility of what’s happening in your ecosystem.
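To make the intent-versus-actual distinction concrete, here is a small sketch with made-up event names: a frontend event records that a user tried to log in, while the backend event records that the login actually happened, and counting either one alone tells a different story:

```python
# Hypothetical event stream: the frontend signal captures intent, the
# backend signal captures what actually happened.
events = [
    {"name": "Login Submitted", "source": "frontend", "user": "u1"},  # intent
    {"name": "Login Completed", "source": "backend", "user": "u1"},   # confirmed
    {"name": "Login Submitted", "source": "frontend", "user": "u2"},  # intent, failed
]

def count(events: list, name: str) -> int:
    """Count occurrences of a given event name."""
    return sum(1 for e in events if e["name"] == name)

attempted = count(events, "Login Submitted")
succeeded = count(events, "Login Completed")
print(attempted, succeeded)  # 2 1
```

If two PMs tag “login” against different signals, one dashboard reports 2 and the other reports 1 for the same behavior, which is exactly the chaos described above.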
At the discovery level, if you’re in a healthy place with your data, you can use it to ask smarter questions about what you already know, and more importantly, what you don’t know. You can, with confidence, explore which problems (reframed as opportunities) are more valuable to the business and take inventory of the signals that you need to capture to define and measure success.
If you don’t have the correct signals, you can at least revisit the schema and say, “We’re not capturing this event that happens in our ecosystem,” and then partner with your data team to track the event before you make your opportunity A versus opportunity B decision.
Often, I see organizations say, “I don’t know which one’s a bigger opportunity, so I’m just going to place a bet and run in that direction,” or “I’m not allowed to place that bet, so I need to ask my data team to go figure that out for me.” But because the data team probably operates in a consultative structure, they’re dealing with a thousand tickets, and you’re just one ticket waiting in the queue. When the schema is better understood and set up so knowledge is democratized, people can go in on their own and get answers to their questions much more quickly.
In my previous role, we used Segment as a customer data platform (CDP), and I took inventory of everything we had there. We were running into messy metrics: some people were reporting revenue directly, while others were deriving the revenue metric through their own calculations. The two were really far off, which led us to question which method was actually correct, if either was.
I presented the two dashboards side by side, as well as how different people were calculating the metrics. This showed how easy it is to misrepresent numbers, which is really important. Just from a narrative perspective, I have found that if we can’t trust the metrics that are being presented to us, why should leadership trust our roadmap decisions? That pained me — I was running a product team and wanted to ensure that there was a lot of trust. We needed to tell a good, concrete data story about why we wanted to peel away from the existing roadmap to go address another opportunity.
The great thing is that you don’t need to rip out the old to get the new to work. It’s good to keep the old in place so that you can make sure that you’re at some level of parity before you start implementing the new.
In my previous role, my goal was to simplify as much as possible. There were a lot of redundancies and duplicate events under different names. We took roughly 700 events and 1,500 properties and cut them down to fewer than 100 events and 250 properties, and this distillation became great for everyone in the organization. Asking somebody to learn 100 things about the product is way easier than asking them to learn 700 things.
Also, we used the object-action naming convention (strongly defined and typed, like in software development), which results in clear and consistent event names. For example, the verb “completed” has a very distinct definition, and whenever something is named, that definition would show up. In this case, “completed” meant that the action finished or was committed to a database, and if that’s not what was actually happening, we needed to go into the verb dictionary and find a more suitable term. We couldn’t be loose about this because it can easily create confusion.
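A verb dictionary like that can be enforced mechanically. The sketch below is illustrative, with a hypothetical dictionary and naming check, not the actual schema from the interview:

```python
# Hypothetical verb dictionary: each verb in an object-action event name
# has one strict definition, so "Completed" can never quietly mean "Submitted".
VERB_DICTIONARY = {
    "Completed": "the action finished and was committed to the database",
    "Submitted": "the user sent the request; the outcome is not yet known",
    "Viewed": "the object was rendered on screen",
}

def validate_event_name(name: str) -> str:
    """Enforce the '<Object> <Verb>' convention against the dictionary.

    Returns the verb's definition, or raises if the name doesn't conform.
    """
    object_, _, verb = name.rpartition(" ")
    if not object_ or verb not in VERB_DICTIONARY:
        raise ValueError(f"'{name}' is not '<Object> <Verb>' with a dictionary verb")
    return VERB_DICTIONARY[verb]

print(validate_event_name("Checkout Completed"))
# validate_event_name("Checkout Finished") would raise ValueError,
# forcing the author back to the verb dictionary for a suitable term.
```

Running a check like this at tracking-plan review time is one way to keep a team from being loose with event names.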
Having all of that in our schema gave people the confidence that they needed to see what percentage of our checkouts completed yesterday included t-shirts, for example. They could go into this short, 100-item list of events, find “checkout,” then “completed,” understand the difference between completed and submitted, and get the answer to their question on their own without needing to be a data science person. Frankly, it also enabled marketing to play with audience creation really easily, which opened a door to many new opportunities.
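That kind of self-serve question can stay very simple once the event list is small and consistently named. A minimal sketch with invented checkout data:

```python
# Hypothetical self-serve query: what share of completed checkouts
# included a t-shirt? "Submitted" events are deliberately excluded,
# per the verb dictionary's distinction between submitted and completed.
checkouts = [
    {"event": "Checkout Completed", "items": ["t-shirt", "mug"]},
    {"event": "Checkout Completed", "items": ["poster"]},
    {"event": "Checkout Submitted", "items": ["t-shirt"]},  # not completed
]

completed = [c for c in checkouts if c["event"] == "Checkout Completed"]
with_tshirt = [c for c in completed if "t-shirt" in c["items"]]
share = len(with_tshirt) / len(completed)
print(f"{share:.0%}")  # 50%
```

No data science background is needed to read or adjust a query like this, which is the point of distilling the schema down.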