Someone once told me that product management is like being a firefighter who also happens to build houses. You spend time making something beautiful, then suddenly everything is on fire and you have to switch gears completely. Not to mention the days when you're building new rooms while others burn, running between the two with a hammer in one hand and a bucket of water in the other.
No matter how polished your roadmap is or how well you anticipate risks, a crisis will happen: a critical bug will take down your main feature, a security breach will expose user data, a viral complaint will spiral into a PR nightmare, etc.
When disaster strikes, every second counts, and your response determines whether you emerge stronger or watch your product’s (and your) reputation crumble.
The problem is that most PMs aren’t prepared for full-blown crisis scenarios until they’re already in the middle of one. By then, the damage is unfolding in real time. That’s why you need a clear, repeatable plan for managing high-stakes incidents before they escalate into disasters.
Let me walk you through what every product person needs to handle product crises effectively, minimize damage, and maintain user trust. The advice I’m about to share comes from years of figuring this out the hard way.
Responding quickly requires you to be able to identify common crises when they first emerge so that you can immediately start implementing a solution. Some of the biggest examples include:
The first is a critical outage. A core feature stops working, preventing users from accessing your product or its key capabilities.
In this case, the key priorities include triaging and rolling back the faulty update if possible, communicating the scope of impact, and providing a workaround if a fix isn’t immediate.
You should be able to prevent at least some of these issues by improving test coverage and setting up better alerting to catch problems before they reach production. Catching and fixing a bug before release is orders of magnitude cheaper than fixing it after it hits production.
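To make that concrete, here's a minimal sketch of the kind of scheduled alerting check that can flag a spiking error rate before users start filing tickets. The metrics endpoint, webhook URL, response shape, and 5 percent threshold are all placeholder assumptions; swap in whatever your stack actually exposes:

```python
import requests  # third-party library: pip install requests

# Hypothetical endpoints; swap in your own metrics API and Slack webhook.
METRICS_URL = "https://metrics.example.com/api/error-rate"
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"
ERROR_RATE_THRESHOLD = 0.05  # alert when more than 5% of requests fail

def check_error_rate() -> None:
    # Fetch the current error rate. Assumes the endpoint returns
    # JSON shaped like {"error_rate": 0.012}.
    resp = requests.get(METRICS_URL, timeout=10)
    resp.raise_for_status()
    error_rate = resp.json()["error_rate"]

    if error_rate > ERROR_RATE_THRESHOLD:
        # Page the team before users start filing tickets.
        message = (
            f"Error rate at {error_rate:.1%}, above the "
            f"{ERROR_RATE_THRESHOLD:.0%} threshold. Investigate now."
        )
        requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=10)

if __name__ == "__main__":
    check_error_rate()  # run on a schedule, e.g., every minute via cron
```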
The second is a data breach. This type of crisis happens when user data is exposed through a security vulnerability.
When it happens, you and your team need to contain the breach immediately: shut down affected systems, engage the security, legal, and compliance teams, and notify affected users and regulators within the legally required timeframes.
It goes without saying that you also need to patch the vulnerability before it can be exploited again.
If you don’t want to deal with this type of crisis, conduct security audits, enforce stricter access controls, and implement automated monitoring for anomalies.
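As a rough illustration of what "automated monitoring for anomalies" might look like, here's a small sketch that flags a metric (say, failed logins per hour) when it jumps far above its recent baseline. The three-sigma rule and the sample numbers are illustrative assumptions, not a recommendation:

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, sigma: float = 3.0) -> bool:
    """Flag a reading that sits more than `sigma` standard deviations
    above the recent baseline (for example, a sudden spike in failed
    logins or bulk data exports that may signal a breach in progress)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    return current > baseline + sigma * max(spread, 1.0)

# Failed logins per hour over the past day, then a sudden spike:
recent_failed_logins = [12, 9, 15, 11, 8, 14, 10, 13]
print(is_anomalous(recent_failed_logins, 240))  # True, so page security
```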
Finally, your day as a product manager can be ruined by a viral complaint that spreads misinformation about (or sheds light on a real issue in) your product.
In that case, you need to assess whether the criticism is valid, acknowledge concerns publicly even while internal discussions are ongoing, and correct false claims with facts without being confrontational. A public dispute with a user pours gasoline on the fire, even if you're (technically) right.
To keep a random post from spiraling, invest in a dedicated social listening team and pre-approved response templates for common scenarios.
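Those templates can be as simple as a reviewed set of strings that on-call staff fill in, so nobody improvises wording under pressure. The scenario names and copy below are made up for illustration:

```python
# Pre-approved holding responses, reviewed by legal and comms ahead of
# time, so the on-call person never has to improvise under pressure.
RESPONSE_TEMPLATES = {
    "outage": (
        "We're aware of an issue affecting {feature}. Our team is actively "
        "investigating, and we'll share an update within {eta}."
    ),
    "misinformation": (
        "We've seen the posts about {topic}. Here are the facts: {facts}. "
        "We're happy to answer questions directly."
    ),
}

def draft_response(scenario: str, **details: str) -> str:
    # Fill a vetted template instead of writing from scratch mid-crisis.
    return RESPONSE_TEMPLATES[scenario].format(**details)

print(draft_response("outage", feature="checkout", eta="60 minutes"))
```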
The worst time to create a crisis plan is during the crisis itself. When something breaks, whether it’s a critical bug, security incident, or reputation hit, your team needs a predefined response process to follow.
Back in my university days, I was told that a smoking engineer is more efficient in a dire situation than a non-smoking one. The non-smoker jumps straight into action and runs around like a headless chicken. The smoker has time to think while lighting a cigarette, and so approaches the "fire" with a plan and a level head.
To set up a crisis plan, follow these three key steps:
Not every issue requires a full-scale emergency response. You need clear criteria for what qualifies as a real crisis based on business impact, security risk, and user exposure.
I recommend three tiers: P0 for true emergencies (core functionality is down, user data is at risk, or the business is actively losing money or reputation), P1 for serious but contained issues (a degraded experience for a subset of users), and P2 for minor problems that can go through normal prioritization.
I'll come back to how to identify an actual crisis later in the article. For now, let's assume we're dealing with a P0.
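If you want those tiers to be more than a slide in a deck, you can encode the criteria. Here's a hypothetical Python sketch; the thresholds are placeholders you'd agree on with leadership, not canonical numbers:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    users_affected_pct: float  # share of the user base impacted, 0 to 100
    data_at_risk: bool         # any chance of data exposure?
    revenue_blocked: bool      # is money actively being lost?

def triage(incident: Incident) -> str:
    """Map an incident onto the P0/P1/P2 tiers. The exact thresholds
    here are placeholders; set your own with leadership in advance."""
    if (incident.data_at_risk or incident.revenue_blocked
            or incident.users_affected_pct >= 30):
        return "P0"  # full crisis response: exec briefing, legal review
    if incident.users_affected_pct >= 5:
        return "P1"  # urgent, but handled by the on-call team
    return "P2"      # normal backlog, no emergency process

# A breach affecting few users is still a P0 because data is at risk:
print(triage(Incident(users_affected_pct=2.0, data_at_risk=True,
                      revenue_blocked=False)))
```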
Depending on the time of day and the vacation season, you'll either have the right people who know what they need to do, or you'll need to assign clear roles on the spot. Ideally, this is settled long before a crisis hits.
Your crisis response team should include an incident lead (often you, the PM), engineering, customer support, legal and compliance, and whoever owns external communications.
If an issue is bigger than your team can handle, when do you escalate to executives? Or to legal and regulators?
Set communication thresholds. For example, a P0 incident automatically triggers an executive briefing and legal review. No exceptions.
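One way to make "no exceptions" stick is to codify the escalation matrix so the 3 a.m. on-call person doesn't have to make a judgment call. A minimal sketch, with made-up channel names and a print statement standing in for a real pager integration:

```python
def notify(channel: str) -> None:
    # Stand-in for a real pager or Slack integration.
    print(f"Notifying {channel}")

# Who gets pulled in at each severity tier; channel names are examples.
ESCALATION_MATRIX = {
    "P0": ["on-call-eng", "exec-briefing", "legal-review", "comms"],
    "P1": ["on-call-eng", "product-lead"],
    "P2": ["team-backlog"],
}

def escalate(severity: str) -> None:
    # A P0 automatically loops in executives and legal. No exceptions.
    for channel in ESCALATION_MATRIX[severity]:
        notify(channel)

escalate("P0")
```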
Nailing this down comes down to a few workshops with the team, where you speculate about what could go sideways and agree on who can assist when push comes to shove.
When a crisis hits, the first few hours are critical. Panic and emotional reactions make things worse. A structured three-hour response window helps teams quickly contain issues without making rash, reputation-damaging decisions.
Here’s the breakdown of this method:
During the initial 60 minutes, you, and more importantly your team, should focus on gathering facts.
What happened? Is it a bug, a breach, or a public relations issue? How many users are affected? Is this isolated or widespread? What’s the worst-case scenario? Could this cause revenue loss, compliance issues, or data leaks?
Only with facts can you properly assess the issue's priority and move on to a fix. Just because a client or a panicked stakeholder tells you something is top priority doesn't mean it is. I've seen plenty of "product not working" tickets that turned out to be a simple misconfiguration on the user's side.
That's why you shouldn't make any public statements before verifying the facts. An issue becomes a crisis only when you've confirmed it is one, not a moment earlier.
Engineering begins the analysis to determine the source of the issue. If needed, legal and compliance assess the risks of security and regulatory exposure (though in practice that often happens long after the issue is fixed, if such an assessment is needed at all). Customer support drafts holding responses for incoming complaints. You, or whoever is responsible for comms, make sure stakeholders know what's happening.
At this stage, avoid premature promises about fixes before engineering confirms feasibility. Don’t let executives pressure teams into rushed public responses. A late update is always better than an inaccurate one.
If the crisis is severe enough to last three or more hours, you'll need to notify your users about the situation. This is for your own benefit: if users know you're on top of the issue, they're less likely to bombard customer support, which means less clean-up work once the dust settles.
Be transparent, but don’t overshare. Users want acknowledgment and reassurance, not a technical deep dive. Own the issue rather than deflecting.
Just like in stakeholder comms, blaming third-party services or “unexpected bugs” makes your team look incompetent. Give a clear next step. Will there be a follow-up update? When should users expect more information?
Here’s an example response:
“Hello,
We’re aware of an issue affecting [feature/service]. Our team is actively investigating, and we’ll provide an update within the next [timeframe]. We appreciate your patience as we work to resolve this.”
As you can see, every crisis requires not only a structured approach, but also massive focus and human resources.
While some of you may feel like all you do is run around putting out fires, not every fire is worth grabbing the extinguisher for. One of the most overlooked skills in crisis management is simply this: knowing whether what you're looking at is a crisis or just a loud inconvenience dressed up as one.
If you treat every Slack ping, support ticket, or stakeholder freakout as a five-alarm emergency, you’ll exhaust your team, lose credibility, and ultimately stop recognizing the truly dangerous signals when they come.
On the other hand, if you downplay an actual crisis, the fallout can be far worse. In other words: A boy who cried (tech) wolf…
So, how do you tell the difference?
It starts with impact. A true crisis has meaningful consequences along one or more of these dimensions: revenue (is the business actively losing money?), users (is a large share of them blocked or exposed?), security and compliance (is data at risk, or are regulators involved?), and reputation (is the issue spreading publicly?).
If none of these boxes are checked, you’re probably not in a real crisis. It may still be important, but it can be addressed through normal processes, not panic-mode protocols.
Next, check the reliability of the source. Is this issue backed by telemetry, logs, and multiple user reports? Or is it based on a single vague screenshot from an angry client? Don’t ignore early warnings, but verify them before mobilizing your entire team.
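You can even encode that corroboration rule. The sketch below requires multiple independent reporters plus matching telemetry before treating a report as a verified signal; the threshold of three is an arbitrary example:

```python
def is_verified_signal(distinct_reporters: int, error_log_spike: bool) -> bool:
    """Require corroboration before mobilizing the team: multiple
    independent user reports AND matching evidence in your own
    telemetry. A single angry screenshot fails both checks.
    The threshold of three reporters is illustrative."""
    return distinct_reporters >= 3 and error_log_spike

# One vague complaint with clean logs: keep investigating quietly.
print(is_verified_signal(distinct_reporters=1, error_log_spike=False))
```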
I've seen engineers lose half a day over what turned out to be a browser extension bug on a customer's side. Noise like this is common; your job is to filter the signal from it.
Then comes pattern recognition. The more experienced you are, the faster you can triage. Some symptoms look bad but are familiar and reversible. Others may seem minor but hint at deep architectural flaws.
Build a playbook from past incidents. The instincts you develop over time are really just a mental database of false alarms and near misses turned into learnings.
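That database doesn't have to stay mental. A lightweight incident log, even a structure as simple as the sketch below, turns near misses into a searchable playbook (the example entry borrows the browser-extension story above):

```python
from dataclasses import dataclass

@dataclass
class IncidentRecord:
    """One playbook entry: what it looked like, what it actually was,
    and what to do next time."""
    symptom: str
    root_cause: str
    real_crisis: bool
    lesson: str

playbook = [
    IncidentRecord(
        symptom="'Product not working' reports from one enterprise client",
        root_cause="Misconfigured browser extension on the client's side",
        real_crisis=False,
        lesson="Ask for reproduction from a second environment first",
    ),
]
```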
Finally, emotions aren’t evidence. Just because a stakeholder yells “CRITICAL” doesn’t mean it is. Set criteria. Make them visible.
Align with leadership on what qualifies as P0, P1, and P2. This protects you from being swayed by whoever’s loudest on the call and ensures your team stays focused on what really matters.
Great product management isn’t just about building features. It’s about handling chaos when things go wrong. The best PMs stay calm under pressure, communicate effectively, and lead their teams through uncertainty. You don’t have to be perfect. You just need to be prepared.
So the next time a crisis hits, don’t panic. Follow the plan. Own the problem. Keep your users informed. And most importantly, learn from it.
By implementing crisis management strategies, you can transform potential disasters into opportunities to build stronger user trust and demonstrate your team’s resilience. Plus, once your team goes through a crisis in a civilized and structured way, future issues won’t be as scary.
Whether you’re dealing with technical failures or communication challenges, having a structured approach ensures you can respond effectively when it matters most. Speaking of such approaches, make sure to check this space regularly for new (hopefully) insightful product management articles. See you next time!