Whitney Coggeshall is the Director of Product Management at CFA Institute, a global not-for-profit organization that provides finance education for investment professionals. She began her career as an educational researcher and psychometrician, specializing in advancing assessment design. Over time, her passion for learning innovation and strategic problem-solving led her to pivot into product management. With experience spanning research, analytics, and product development, Whitney brings a unique blend of data-driven decision-making and customer-centric thinking to her work.
In our conversation, Whitney talks about how she leverages low-lift, low-risk methods to gauge genuine product interest from users and use those insights to drive decisions. She shares how her background in research has informed her approach to product management, as well as how she incorporates evidence-based iteration and continuous discovery in her work.
I am a data nerd and I come from a research background. My approach to decision-making revolves around what I’ll call evidence-based iteration, where I collect data efficiently and try to validate the most crucial assumptions before I move forward. The goal is to either fail fast or succeed quickly. A lot of people try to leave research out of the process because it can add to the timeline, so my challenge has always been how to get that base information without slowing down the process.
The process that I use has a few steps. The first is getting to the core of what needs to be known. This sounds simple, but it’s actually the hardest part. I specialize in new product development, so I typically have hundreds of questions I’d like to have the answers to, but I usually start with just one or two crucial assumptions to validate before I decide to move forward. The rest of those answers can come over time.
First and foremost, I ask, do I already have the data that I need? You'd be surprised how often someone in the organization already has it. It could exist from sales talking to folks, from surveys we've already done, or from conversations in the market. If you're hearing a need from those sources, you might be able to skip data collection altogether.
The second part of the process is devising a mini data collection to get the answers I still need. The key here is to do the most lightweight work as fast as possible. I do things like A/B tests and smoke tests, and I use a lot of AI to help prepare things quickly. The third step is recruiting people. If you’re in a big organization, you hopefully have a research operations team that you can leverage. If not, you’ll have to get a little creative and try to utilize contacts that you already have. Your sample doesn’t need to be perfect — you just want to reduce risk.
The last part of the process is doing something with that data to help inform your decision. I use AI to help analyze the data that I’ve collected. Of course, AI isn’t perfect, and you want to keep that data analysis as lightweight as possible — don’t overcomplicate it. A good rule of thumb is to spend no more than one business day pulling your insights together. Be sure to frame your insights to help inform the question that you’re trying to answer.
Absolutely. As a product manager, it’s crucial to stay connected with the different divisions in your organization. The call center is a great example because they are the primary folks hearing complaints. If you want to know what the biggest issues people are having with the product are, it might be worthwhile to have a conversation with your call center representatives.
They usually document these things so you can go through them and see what you notice. Same with sales. The call center can give you feedback from people who’ve already engaged with the product, but sales can give you insight into why some people aren’t buying the product. Usually, if you hear the same answer over and over, you can feel reasonably confident that you’re missing a big feature.
With that said, I want to caution against building out features for one person or group of people. If you pick a problem to solve, it should be widespread enough that you know it isn't exclusive to a single person or group.
That depends on the question you’re trying to answer and how risk-averse you’re trying to be. For example, if you say, “This project’s going to cost $10 million, so I need to be sure that there’s a market need for it,” you’re going to want to collect data from a lot of places and different stakeholders. On the other hand, if you say, “I think this is a cool feature to add. We can be a bit more risky in this situation,” then you can get away with a much smaller sample pool.
Most projects fall somewhere between those two extremes. I like to say that this process isn't a recipe; it's a general framework. With that in mind, make sure you're thinking about it critically. What counts as enough data depends on the scenario and on weighing the pros and cons of more participants versus fewer.
I primarily work in new product development. A lot of the core questions that I tackle revolve around whether a product has a viable place in the market. One of the biggest challenges, particularly in education, is that people often say they want a product, but that doesn’t necessarily mean they would actually purchase it or even engage with it. When you ask, “Would you like this?” they often say, “Yes, I would,” because it’s a socially desirable answer.
When I am trying to think of low-lift, low-risk methods, I also need to make sure that I'm actually getting usable data. For questions like, "Is this product viable in the marketplace?" I typically don't use traditional interview or survey methods because they often aren't the best way to gauge real demand. Instead, I might use quick, low-risk experiments like smoke tests.
For example, I might send a branded email to a target audience with a concise description of the product. The email will provide an option to sign up for updates if they’re interested. This method is very effective because signing up typically requires a small but real commitment. As a consumer, I don’t want to get a bunch of emails about something I’m not interested in, so signing up is a strong indicator of genuine interest.
I think people tend to overuse interviews and surveys because they're quick. But the smoke test is also fast. It's cost-effective and virtually risk-free as long as you don't overpromise in the email language. I typically draft all of these in less than a day. The process involves writing the email, setting up the signup form, and working with marketing to get the branded email out the door. We can collect results quickly, and they're typically very easy to analyze. If there's strong engagement, that's a good signal that the concept is worth deeper exploration.
You definitely want to follow marketing campaign best practices. I always work with our marketing department because they can offer advice on optimizing the subject line and copy. Also, I think you have to tap your audience appropriately.
Problems can arise if you’re trying to tap a brand-new market where you don’t have contacts. This is where the recruitment piece comes in and can considerably add to your timeline. I always try to leverage the contacts that I have as much as possible and remember that the sample doesn’t have to be perfect.
The key outcomes are typically email open rate and form signup rate. The best thing about these metrics is that they're easily trackable within marketing platforms. It's also important to analyze them together rather than in isolation. As I mentioned, this effort should be lightweight. I use marketing software that lets me watch my metrics in real time, which is awesome.
In terms of looking at the metrics together, if you have a low email open rate, you're definitely going to have a low signup rate. This could indicate that the problem isn't the product idea itself but that the email isn't attractive. Maybe it has a weak subject line or poor targeting, or your audience is experiencing email fatigue. In this case, you may not have usable data, and you may need to repeat the exercise with a new email or a different target audience.
On the other hand, if you have a high open rate but a low signup rate, this suggests that the concept isn't resonating with your audience. People were curious enough to open the email, but they didn't take the next step. The happy path is a high open rate and a high signup rate; this is a strong signal of genuine market interest in the product. The challenge is defining what high and low mean for these indicators, so I compare against industry and internal marketing benchmarks to see what a typical rate looks like.
One last thing to remember is that, with these metrics, a high signup rate might still sound like a small number to most people. For example, a form signup rate of 3–4 percent is typically a strong signal of demand. Having that comparative data is important to ensure you're making an informed decision rather than relying on an arbitrary threshold.
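To illustrate reading these two metrics together rather than in isolation, here is a minimal sketch in Python. The benchmark values, the function name, and the choice to measure signup rate against emails sent (rather than against opens) are assumptions made for the example, not figures from the interview:

```python
# Hypothetical smoke-test analysis. Benchmark defaults and example numbers
# are illustrative only; real thresholds should come from your industry
# and internal marketing benchmarks.

def interpret_smoke_test(sent, opened, signups,
                         open_benchmark=0.20, signup_benchmark=0.03):
    """Classify smoke-test results against benchmark rates.

    Rates are computed per email sent (an assumption for this sketch).
    Returns (open_rate, signup_rate, verdict).
    """
    open_rate = opened / sent
    signup_rate = signups / sent
    if open_rate < open_benchmark:
        # Low opens mask everything else: fix the email before judging demand
        verdict = "weak email: improve subject line or targeting, then rerun"
    elif signup_rate < signup_benchmark:
        # People looked but didn't commit
        verdict = "concept not resonating: opens without signups"
    else:
        # High opens and high signups: the happy path
        verdict = "genuine interest: concept worth deeper exploration"
    return open_rate, signup_rate, verdict

open_rate, signup_rate, verdict = interpret_smoke_test(5000, 1500, 180)
print(f"open {open_rate:.1%}, signup {signup_rate:.1%}: {verdict}")
```

The point of structuring it this way is that a low signup rate is only meaningful once the open rate clears its benchmark; otherwise you are debugging the email, not the product idea.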
If you don’t come from a research background, you should try to become besties with your data insights group! Since I do have that background, I like to do things myself, but that’s my preference. In general, product managers typically leverage their data insights group. I recommend having a strong connection to them because they can be key in your process.
Sometimes, people might try to tell the data team what they’re doing, but they unintentionally keep it too general. The better they understand what you’re trying to do and why, the more they can help you collect data and give you the most useful and accurate results. They can be thought partners with you and help ensure that the results they’re giving you are suitable for what you’re trying to do.
In addition to the quantitative methods I talked about, I do a lot of qualitative end-user interviews. These are extremely valuable. Let’s say you uncover an insight where a customer dislikes a particular aspect of a prototype or product. I don’t think that you should just immediately jump into giving the customer what they want or what they say they want. You need to take that step back to fully understand that feedback before making changes.
This is where having a deep conversation with them about expectations is crucial. Ask, “What were your pain points? How did the product fall short?” More often than not, I find my initial interpretation of the issue was incorrect. Rather than requiring a full redesign or a completely separate feature launch, you can usually solve the problem with a small targeted tweak, and this will better align with user needs.
I also like to do continuous discovery. Every month, I interview six to eight customers. This helps me stay closely connected to market needs. These interviews can vary in focus — sometimes they’re specific to a new initiative I’m considering, and sometimes I let them share what’s top of mind for them. This helps solve my recruitment problem a little bit. I always have some people on deck that I’m about to talk to where I can say, “Hey, I’m hearing this. Is this a problem you’re having, too? Can you tell me more about that?” It’s great to have resources like that in the market to validate things.
I create a calendar and send out an email at the beginning of the month that says, “I’ve got some free time; book some time and help me learn about your certificate experience.” I let them just take time on my calendar. This is a very repeatable process that I can do month-to-month. I usually send it out to a couple hundred people at the beginning of the month, and I get a handful who book the time. Then, I block off 15 minutes before their interview and try to think about what questions I need answered right away and what questions I can leave for them to answer more broadly.
A key aspect is thinking about the user experience of the interviewee. They are doing you a favor by talking to you, so you need to keep that in mind. I like to tell them that they don’t need to prepare anything in advance and can book time whenever they want. I also try to be really flexible if people need to reschedule. By keeping it lightweight on my schedule and theirs, this is an easy way to handle the important task of staying in touch with the market.