Prioritizing Opportunities Using Customer Evidence: A Skill Behind Top Product Manager Interview Questions
This skill teaches you how to systematically assess and compare opportunity nodes in an Opportunity Solution Tree by evaluating the frequency, severity, and breadth of customer evidence so you can confidently decide where to focus solution ideation.
To prioritize opportunities, evaluate each node in your Opportunity Solution Tree against three dimensions of customer evidence: frequency (how often it appears across interviews), severity (how painful it is for affected customers), and breadth (what percentage of your target segment experiences it). Score and compare opportunities on all three dimensions, then select the opportunity with the strongest combined evidence for focused solution ideation.
Outcome: You can confidently select the highest-impact opportunity from your tree by grounding your decision in real customer evidence rather than opinion or gut instinct.
Prerequisites
- Familiarity with the Opportunity Solution Tree framework
- A populated set of opportunity nodes from continuous customer interviews
- Basic understanding of continuous product discovery practices
- Experience with identifying customer opportunities from research
Overview
Once you've built an Opportunity Solution Tree and identified a rich set of customer opportunities through continuous research, you face a critical decision: which opportunity should your team pursue first? This is one of the most consequential choices in product discovery, and it's also one of the most common topics explored in product manager interview questions. Interviewers want to see that you can move beyond instinct and stakeholder politics to make evidence-based prioritization decisions.
Prioritizing opportunities using customer evidence means evaluating each opportunity node against three concrete dimensions — frequency, severity, and breadth — drawn directly from your research data. Frequency measures how often the opportunity surfaces across customer conversations. Severity captures how painful or blocking the unmet need is for customers who experience it. Breadth reflects what proportion of your target customer segment encounters the problem. By scoring opportunities against all three, you create a defensible, transparent rationale for where to invest your team's solution ideation energy.
This skill is a cornerstone of the Opportunity Solution Tree method. It bridges the gap between upstream discovery (identifying opportunities from research) and downstream delivery (generating and testing solutions). Master it, and you'll not only ship higher-impact products — you'll also ace the prioritization-focused product manager interview questions that separate senior candidates from the rest.
How It Works
The core insight behind this skill is that not all customer opportunities are created equal, and the best way to compare them is by looking at the underlying evidence rather than relying on a single stakeholder's conviction or a loud customer's request.
Imagine your Opportunity Solution Tree has a branch with five sibling opportunities. Each one came from real customer interviews, so they're all valid. But your team can only focus on one at a time. You need a structured way to compare them.
The framework uses three evidence-based lenses:
Frequency answers: "How many distinct customers or interviews surfaced this opportunity?" An opportunity mentioned by 14 out of 20 interviewees carries more weight than one mentioned by 2. Frequency signals that the opportunity is widespread in your data, not an outlier.
Severity answers: "How painful is this for the customers who experience it?" Some unmet needs are mild inconveniences; others are deal-breakers that cause customers to churn, use workarounds, or abandon tasks entirely. Severity is often assessed qualitatively — look at the emotional intensity in interview transcripts, the length of time customers spend describing the problem, and whether they've tried to solve it themselves.
Breadth answers: "What percentage of our target market or segment likely experiences this?" Frequency in your interview sample is a proxy, but breadth requires you to think about whether your sample is representative. An opportunity might appear in 100% of interviews with enterprise customers but be irrelevant to SMBs — breadth helps you contextualize frequency against your actual target segment.
When you evaluate opportunities across all three dimensions, you avoid common traps: chasing a rare but severe pain point that affects almost nobody, or pursuing a widespread but trivial annoyance that won't move your outcome metric. The strongest opportunities score high on at least two of the three dimensions, and the very best score high on all three.
Step-by-Step
Step 1: Gather and Organize Your Customer Evidence
Before you can prioritize, you need a clear inventory of the evidence behind each opportunity node. Go back to your interview notes, research repository, or synthesis artifacts and tag every piece of evidence to the opportunity it supports.
For each opportunity in your Opportunity Solution Tree, create a simple evidence log. This can be a spreadsheet, a tagged collection in Dovetail or EnjoyHQ, or even sticky notes on a wall. Each entry should include: the customer identifier, the date of the interview, a brief quote or summary of what the customer said, and any notes about emotional intensity or workarounds they described.
The goal is to make the evidence behind each opportunity visible and countable. You should be able to answer "how many customers mentioned this?" and "what exactly did they say?" for every opportunity you plan to compare.
Tip: If you've been doing continuous interviews following the sibling skill of identifying customer opportunities from research, your evidence should already be partially organized. Resist the urge to rely on memory — go back to the transcripts.
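To make the evidence log concrete, here is a minimal sketch in Python; the field names (`customer_id`, `quote`, and so on) are illustrative choices rather than part of the method, and a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass

@dataclass
class EvidenceEntry:
    """One piece of customer evidence tagged to an opportunity node."""
    customer_id: str          # distinct customer -- dedupe on this for frequency
    interview_date: str       # e.g. "2024-03-12"
    quote: str                # brief quote or summary of what the customer said
    intensity_notes: str = "" # emotional intensity, workarounds, etc.

# Evidence log: opportunity name -> list of supporting entries
evidence_log: dict[str, list[EvidenceEntry]] = {
    "Can't invite teammates": [
        EvidenceEntry("cust-07", "2024-03-12",
                      "I felt embarrassed I couldn't get my team into the tool."),
    ],
}
```

With this shape, "how many customers mentioned this?" is a count of distinct `customer_id` values per opportunity, and "what exactly did they say?" is the list of quotes.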
Step 2: Score Each Opportunity on Frequency
Count the number of distinct customers or interview sessions in which each opportunity was surfaced. This is your frequency score.
Be careful not to double-count. If one customer mentioned the same pain point in two different interviews, that's still one customer. Conversely, if two different people at the same company independently raised the same issue, count them separately — they represent distinct perspectives.
A simple approach is to use raw counts (e.g., "12 out of 25 interviewees") or normalize to a percentage. Normalization is especially helpful when comparing opportunities that were explored in different research sprints with different sample sizes.
Tip: Consider creating a simple matrix with opportunities as rows and customers as columns, marking which customers mentioned which opportunities. This visual makes frequency immediately clear and also reveals clusters.
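The counting and normalization rules above can be sketched as a small helper; the dict-based log and its `customer_id` key are illustrative assumptions, not a prescribed data model.

```python
def frequency_scores(evidence_log, total_interviewees):
    """Frequency score per opportunity: distinct customers who surfaced it.
    Dedupes repeat mentions by the same customer, and normalizes to a share
    of the sample so sprints with different sizes stay comparable."""
    scores = {}
    for opp, entries in evidence_log.items():
        distinct_customers = {e["customer_id"] for e in entries}
        scores[opp] = {
            "count": len(distinct_customers),
            "share": round(len(distinct_customers) / total_interviewees, 2),
        }
    return scores

# Same customer mentioning an opportunity twice still counts once
log = {
    "Can't invite teammates": [
        {"customer_id": "c1"}, {"customer_id": "c1"}, {"customer_id": "c2"},
    ],
    "Confusing workspaces": [{"customer_id": "c3"}],
}
scores = frequency_scores(log, total_interviewees=20)
# scores["Can't invite teammates"] -> {"count": 2, "share": 0.1}
```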
Step 3: Assess Each Opportunity on Severity
Severity is more qualitative than frequency, but you can still make it structured. Review the evidence for each opportunity and rate the intensity of the customer need on a scale — for example, 1 (minor annoyance) to 5 (critical blocker).
Look for specific signals in your evidence: Did customers describe emotional frustration? Did they build workarounds or use competitors to address the gap? Did they say they would pay to solve it? Did the opportunity cause them to abandon a task or churn? Each of these signals points to higher severity.
Have multiple team members independently rate severity for each opportunity, then discuss and converge. This prevents any one person's bias from dominating the assessment. Document your rationale so you can defend the rating later — especially useful when preparing for product manager interview questions about how you made prioritization decisions.
Tip: When severity ratings diverge sharply among team members, it often means the opportunity is poorly defined. Consider breaking it into sub-opportunities before scoring.
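One way to combine independent ratings and surface sharp divergence is sketched below; the 2-point divergence threshold is an illustrative assumption, not a rule from the method.

```python
from statistics import mean

def severity_summary(ratings_by_opportunity):
    """Combine independent 1-5 severity ratings from team members.
    A rating spread of 2+ points flags the opportunity for discussion --
    sharp divergence often means it is poorly defined and should be split."""
    summary = {}
    for opp, ratings in ratings_by_opportunity.items():
        summary[opp] = {
            "severity": round(mean(ratings), 1),
            "needs_discussion": max(ratings) - min(ratings) >= 2,
        }
    return summary

# PM, designer, and engineer rate each opportunity independently
result = severity_summary({
    "Can't invite teammates": [4, 4, 5],  # converged: record and move on
    "Data import blocked":    [2, 5, 3],  # divergent: discuss before scoring
})
```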
Step 4: Estimate Breadth Across Your Target Segment
Breadth asks you to extrapolate beyond your interview sample: does this opportunity affect a narrow slice of your user base or a large portion of your target segment?
Start with your frequency data as a baseline. If 60% of your interviewees mentioned an opportunity, it might affect roughly 60% of your segment — but only if your interview sample is representative. Consider whether your sample over-indexes on power users, a particular company size, or a specific geography. Adjust your breadth estimate accordingly.
You can also triangulate with quantitative data if available. Support tickets, feature request counts, NPS verbatims, product analytics showing drop-off points, and survey data can all help you validate whether a qualitatively identified opportunity affects a broad population.
Rate breadth on a simple scale (e.g., Low / Medium / High, or a 1-5 score) and note what evidence informed your estimate.
Tip: If you lack quantitative data to validate breadth, flag your confidence level. It's better to make an honest low-confidence estimate than to skip this dimension entirely.
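The sample-correction idea can be sketched as a simple re-weighting: take the mention rate within each segment and weight it by that segment's share of the target market. The segment names and numbers below are illustrative assumptions.

```python
def breadth_estimate(mention_rate_by_segment, target_market_mix):
    """Re-weight per-segment mention rates by each segment's share of the
    target market, correcting for a non-representative interview sample.
    Both dicts are keyed by segment name; market-mix shares sum to 1."""
    return sum(
        mention_rate_by_segment[seg] * share
        for seg, share in target_market_mix.items()
    )

# Illustrative: every enterprise interviewee mentioned the opportunity,
# but enterprise is only 20% of the target market
estimate = breadth_estimate(
    mention_rate_by_segment={"enterprise": 1.0, "smb": 0.1},
    target_market_mix={"enterprise": 0.2, "smb": 0.8},
)
# estimate -> 0.28, far lower than the raw enterprise mention rate suggests
```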
Step 5: Create a Comparison View and Identify Top Candidates
Now bring your three scores together for all opportunities in a single view. A simple table works well:
| Opportunity | Frequency | Severity | Breadth | Notes |
|---|---|---|---|---|
| Opportunity A | 14/25 | 4/5 | High | ... |
| Opportunity B | 6/25 | 5/5 | Low | ... |
Look for opportunities that score high across multiple dimensions. An opportunity with high frequency, high severity, and high breadth is a clear winner. But you'll often face trade-offs — for example, a highly severe but narrow opportunity versus a moderately severe but very broad one.
Don't reduce the decision to a single weighted score unless your team finds that genuinely helpful. The value of this exercise is in the conversation it enables. The three dimensions give your team a shared language for debating priorities that's grounded in evidence, not opinions.
Tip: Visualizing opportunities on a 2x2 matrix (e.g., Frequency × Severity, with bubble size for Breadth) can make trade-offs immediately visible and is a great artifact to bring into stakeholder conversations.
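One lightweight way to surface top candidates, without collapsing everything into a single weighted score, is to count how many dimensions clear a "high" bar. The thresholds in this sketch are illustrative assumptions to tune against your own data.

```python
def high_dimensions(opp, freq_threshold=0.5, min_severity=4):
    """Count how many of the three dimensions score 'high' for an
    opportunity. The strongest candidates score high on 2+ of them."""
    hits = 0
    if opp["mentions"] / opp["sample_size"] >= freq_threshold:
        hits += 1
    if opp["severity"] >= min_severity:
        hits += 1
    if opp["breadth"] == "High":
        hits += 1
    return hits

opportunities = [
    {"name": "Opportunity A", "mentions": 14, "sample_size": 25,
     "severity": 4, "breadth": "High"},
    {"name": "Opportunity B", "mentions": 6, "sample_size": 25,
     "severity": 5, "breadth": "Low"},
]
# Surface candidates that score high on the most dimensions first
ranked = sorted(opportunities, key=high_dimensions, reverse=True)
```

The ranking is a conversation starter, not a verdict: a 1-of-3 opportunity like B may still win once strategic context enters the discussion.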
Step 6: Select an Opportunity and Document Your Rationale
After comparing, select the opportunity your team will pursue for solution ideation. This doesn't have to be the mathematically highest-scoring option — strategic context matters. If two opportunities score similarly, you might choose the one that aligns better with your current outcome at the top of your Opportunity Solution Tree, or the one where you have more confidence in the evidence.
Document your decision and rationale. Write down which opportunity you chose, why, and what evidence supported the decision. Also note which opportunities you deprioritized and why — this creates a decision log that prevents relitigating the same debate later.
This documentation is invaluable in two contexts: first, when stakeholders ask why you're working on X instead of Y; and second, when preparing for product manager interview questions where you need to articulate how you made trade-offs using real data.
Tip: Revisit deprioritized opportunities after each discovery cycle. New evidence from ongoing customer interviews might shift the scores, and an opportunity that was low-severity last quarter might have become urgent.
Examples
Example: Prioritizing Onboarding Opportunities for a B2B SaaS Product
A product trio at a project management SaaS company has built an Opportunity Solution Tree with the outcome "Increase 30-day activation rate from 40% to 55%." Through 30 continuous customer interviews over six weeks, they've identified five sibling opportunities under the onboarding branch: (A) "New users can't figure out how to invite teammates," (B) "New users don't understand the difference between projects and workspaces," (C) "New users want to import data from their previous tool," (D) "New users feel overwhelmed by the number of features on first login," and (E) "Admins need SSO setup before they can onboard their team."
The team creates an evidence matrix. Opportunity A appears in 22 of 30 interviews with high emotional intensity — users describe feeling embarrassed when they can't get colleagues into the tool. Opportunity B appears in 18 interviews with moderate frustration. Opportunity C appears in 8 interviews but with extreme severity — users who need data import consider it a complete blocker and some churned. Opportunity D appears in 15 interviews with moderate severity. Opportunity E appears in 4 interviews but is critical for those enterprise admins.
Scoring: A scores highest on frequency (22/30) and severity (4/5), with high breadth since almost all new users need to invite teammates. C scores low on frequency (8/30) and breadth (only affects users migrating from a specific competitor) but maximum severity (5/5). E is the narrowest — only enterprise admins.
The team selects Opportunity A: it has the strongest combined evidence across all three dimensions, and it directly connects to their activation outcome (users who invite teammates activate at 3x the rate of solo users). They document their rationale and note that Opportunity C warrants a deeper quantitative investigation to understand true breadth — perhaps a survey targeting the broader user base.
This is exactly the kind of prioritization narrative that shines when answering product manager interview questions about how you decide what to build.
Example: Deciding Between Severe-but-Narrow vs. Moderate-but-Broad Opportunities
A consumer fintech app team has the outcome "Reduce monthly churn from 8% to 5%." Two sibling opportunities are competing for attention: (A) "Users who overdraft feel punished by the fee structure and leave" (frequency: 5/20, severity: 5/5, breadth: Low — only affects ~10% of users) and (B) "Users can't easily see where their money is going each month" (frequency: 16/20, severity: 3/5, breadth: High — affects most active users).
At first glance, Opportunity A seems urgent — customers are literally churning because of it. But the team examines breadth carefully: overdrafting users are a small segment, and even solving the problem completely might only reduce churn by 0.5 percentage points given the segment size.
Opportunity B, while less emotionally intense per customer, affects the vast majority of active users. Customers describe checking their app less frequently because they find spending breakdowns confusing, and reduced engagement is a known leading indicator of churn.
The team calculates rough impact: if solving B improves retention for even 10% of the broad affected population, the churn impact would be 2-3x greater than solving A for 100% of the narrow population. They choose Opportunity B, but flag A for a future sprint focused on high-severity niche opportunities.
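The team's back-of-envelope comparison can be sketched as follows. Every number here is an illustrative assumption consistent with the story above, not real data.

```python
def churn_impact_pp(breadth, reach, churn_reduction_pp):
    """Rough expected churn impact in percentage points:
    (share of users affected) x (share of them you expect to help) x
    (monthly churn reduction, in points, for a helped user)."""
    return breadth * reach * churn_reduction_pp

# A: narrow but severe. ~10% of users affected; assume solving it helps
# all of them and cuts a helped user's monthly churn by 5 points.
impact_a = churn_impact_pp(breadth=0.10, reach=1.0, churn_reduction_pp=5.0)

# B: broad but moderate. ~80% of users affected; assume you help only 10%
# of them, but a re-engaged user's churn drops sharply (e.g. 20% -> 5%).
impact_b = churn_impact_pp(breadth=0.80, reach=0.10, churn_reduction_pp=15.0)
```

Under these assumptions B's impact is roughly 2-3x A's, which is why breadth can outweigh per-customer severity even when A's pain is more acute.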
This trade-off analysis — showing you can reason through severity vs. breadth with data — is a powerful way to stand out when answering product manager interview questions about prioritization frameworks.
Best Practices
Always use direct customer evidence — quotes, behavioral observations, or quantitative signals — rather than secondhand reports from sales or support when scoring opportunities.
Involve your full product trio (PM, designer, engineer) in the scoring exercise so the prioritization decision has shared ownership and diverse perspectives on feasibility and impact.
Separate the evidence-gathering step from the scoring step to avoid anchoring bias; review all evidence before assigning any scores.
Re-prioritize regularly as new evidence arrives from continuous interviews — the best Opportunity Solution Trees are living artifacts, not one-time exercises.
Use the three-dimension framework (frequency, severity, breadth) as a conversation tool with stakeholders, not a black box; transparency in your rationale builds trust.
When presenting your prioritization to leadership, lead with the customer stories behind the winning opportunity, not just the scores — narrative evidence is more persuasive than numbers alone.
Common Mistakes
Treating frequency as the only prioritization signal and always choosing the most-mentioned opportunity.
Correction
An opportunity mentioned by many customers but with low severity may not move your outcome metric. Always evaluate severity and breadth alongside frequency to avoid chasing mild inconveniences that affect many people but don't really matter.
Conflating stakeholder requests or HiPPO (highest-paid person's opinion) urgency with customer evidence.
Correction
Keep stakeholder input separate from your customer evidence scoring. If a stakeholder feels strongly about an opportunity, ask them to point to the customer evidence that supports it. If the evidence isn't there, that's a signal to do more research, not to prioritize it.
Scoring severity based on your own assumptions about how painful a problem is rather than reviewing what customers actually said.
Correction
Anchor severity scores in specific customer quotes and behaviors. A problem you think should be painful might not bother customers at all, and vice versa. Let the evidence speak.
Comparing opportunities at different levels of the OST hierarchy (e.g., comparing a parent opportunity against a child opportunity).
Correction
Only compare sibling opportunities — those at the same level within a branch of your tree. Comparing a broad parent opportunity against a specific child creates an apples-to-oranges situation. Use the sibling skill of structuring opportunity spaces hierarchically to ensure your tree is well-organized before prioritizing.
Treating the prioritization as a one-time event and never revisiting it as new customer evidence emerges.
Correction
Build a cadence of re-evaluation — for example, every two weeks after your continuous interview synthesis. New evidence can dramatically shift the relative importance of opportunities.
Other Skills in This Method
Maintaining and Evolving a Living Opportunity Solution Tree
How to continuously update the OST as new customer insights and experiment results emerge, keeping it a dynamic artifact rather than a one-time deliverable.
Facilitating Opportunity Solution Tree Workshops with Teams
How to run collaborative OST mapping sessions with cross-functional teams and stakeholders to build shared understanding and alignment on product discovery direction.
Designing Assumption Tests and Experiments for Solutions
How to identify the riskiest assumptions behind each solution and design lightweight experiments — prototypes, fake doors, or concierge tests — to validate them quickly.
Structuring and Grouping Opportunities into a Hierarchy
How to break down broad opportunity areas into smaller, more specific sub-opportunities to create a navigable tree structure that aids prioritization.
Defining Measurable Outcomes for the Top of Your OST
How to select and define a clear, measurable business outcome that anchors the entire Opportunity Solution Tree and aligns team efforts.
Identifying Customer Opportunities from Continuous Research
How to synthesize customer interviews, surveys, and behavioral data into distinct opportunity nodes that represent unmet needs, pain points, or desires.
Generating Multiple Solutions for Each Opportunity
How to use divergent thinking techniques to brainstorm at least three distinct solution ideas per opportunity, avoiding premature commitment to a single approach.
Frequently Asked Questions
How do I prioritize opportunities when I have limited customer evidence?
If you have fewer than 10 customer data points, focus on gathering more evidence before prioritizing. Use the frequency data you do have to identify which opportunities need deeper investigation, and supplement with quantitative signals like support tickets or product analytics to build confidence.
Should I use a weighted scoring formula to combine frequency, severity, and breadth?
You can, but it's often counterproductive. Weighted scores create a false sense of precision and obscure the trade-offs. Most experienced practitioners use the three dimensions as discussion prompts rather than inputs to a formula, keeping the conversation transparent and the reasoning visible.
How does opportunity prioritization differ from feature prioritization frameworks like RICE or ICE?
Opportunity prioritization happens before you've identified solutions. RICE and ICE score specific features or solutions. In the Opportunity Solution Tree framework, you first prioritize which customer problem to solve using evidence, then generate multiple solutions for the winning opportunity, then evaluate those solutions separately.
What product manager interview questions test opportunity prioritization skills?
Common product manager interview questions in this area include: "How would you decide what to build next?", "Tell me about a time you used data to prioritize," and "How do you handle competing customer needs?" Demonstrating the frequency-severity-breadth framework with a concrete example is a strong way to answer.
How often should I re-prioritize opportunities in my Opportunity Solution Tree?
Re-evaluate whenever significant new evidence arrives — typically every 2-4 weeks if you're running continuous customer interviews. Also re-prioritize when your outcome metric changes or when you've shipped a solution and need to decide what to tackle next.
Can I use quantitative data instead of qualitative interviews to assess frequency, severity, and breadth?
Quantitative data is excellent for validating frequency and breadth estimates but struggles to capture severity on its own. The strongest approach is to triangulate: use qualitative interviews to identify and understand opportunities deeply, then use quantitative data to validate how widespread they are.