Summary
This article demystifies a common PM interview trap: when a metric takes an unexpected turn and you’re asked, “Is that good or bad?” Using a Zoom example (“average meeting size >30”), it shows why this is first a diagnostic question (probing distributions, generating multiple hypotheses, and connecting causes to user behavior and business impact) before it becomes a trade-off discussion. You’ll get a crisp framework for diagnosing signals, a practical A/B test setup to validate fixes, and clear criteria for prioritizing between enterprise-scale features and small-meeting UX. If you’re looking for a structured, real-world approach to root cause analysis and effective prioritization, read on.
How to Identify a Diagnostic Versus a Trade-Off Question
Problem diagnostic questions (sometimes referred to as root cause analysis) and trade-off questions can be challenging to classify because their framing is not always clear.
A typical diagnostic question presents a situation where a metric has declined or a negative outcome is occurring, and you are asked to identify the root cause and explain how you would fix it.
Sometimes, however, the question is framed around a metric that has taken an unexpected turn, neither explicitly good nor bad, and you are asked what you would do in response. This is the case with the question we will analyze:
“The average size of meetings is above 30 people. What do you think, is it good or not?”
At first glance, one might think that since the impact is not clearly positive or negative, it must be a trade-off question. In reality, it is a diagnostic question because the interviewer wants to see whether you can:
- Understand why the metric looks this way
- Break down possible explanations: is the metric inflated by a few outliers (distribution skew), or does it reflect a genuine change in user behavior across the board (an actual shift)?
- Connect underlying causes to user behavior and business impact
This type of question is fundamentally about diagnosing the signal before deciding what to do.
Once you have diagnosed the signal, trade-offs may emerge later, but only after the root cause analysis is complete. At that stage, the interviewer may shift to asking how you would choose between options, such as prioritizing features or allocating resources. Those follow-ups become trade-off questions, where you are expected to define trade-offs clearly and explain how you would make a balanced decision.
To illustrate this, we will walk through both cases using a simulated mock interview. In particular, we will demonstrate how to:
- Identify and answer a problem diagnostic question, including designing an A/B test
- Tackle a follow-up trade-off question
Interviewer Evaluation
For the Diagnostic Question, the interviewer will evaluate your performance as follows:
- Problem Framing: Does the candidate recognize that the task is to diagnose the issue first, rather than rushing into solutions or trade-offs?
- Hypothesis Generation: Do they propose multiple, plausible explanations for the observed outcome, showing openness to different possibilities instead of jumping to one conclusion too quickly?
- Analytical Approach: Do they outline how they would test or validate their hypotheses, using both quantitative and qualitative data, and connect their findings back to user needs and business impact?
- Action Planning: Do they propose a thoughtful set of next steps that address both risks and opportunities, instead of defaulting to a single path forward?
- Measurement and Accountability: Do they define clear success metrics and guardrails that demonstrate whether their chosen actions are effective?
For the Trade-off Question, the interviewer will evaluate your performance as follows:
- Trade-Off Framing: Does the candidate clearly articulate the trade-off as a “gain X, sacrifice Y” decision, rather than assuming both can be fully achieved?
- Balanced Perspective: Do they acknowledge the benefits and downsides of each option, including short-term and long-term implications?
- Decision Criteria: Do they specify logical criteria to guide the choice, such as impact on key metrics, alignment with strategy, and competitive context?
- Prioritization Rationale: Do they justify which option should come first based on user value, business goals, and situational context, and explain what would change their mind?
- Recommendation: Do they make a clear, defensible choice and describe how they would monitor results and adapt if conditions change?
Framework for Diagnosing Issues
The interview question, “The average size of meetings is above 30 people. What do you think, is it good or not?”, is not framed as a traditional product problem with a presumed negative outcome. Instead, it presents an unexpected metric without direction. Still, to answer effectively, you must diagnose the root cause and its implications to determine whether corrective action is required.
The first step is to explore the potential implications, both positive and negative. Only after outlining those possibilities should you proceed to analyze the data, validate the hypotheses, and determine the next steps.
1. Assess the Unexpected Metric Outcome
- Clarify the metric: is there ambiguity (e.g., averages vs. distributions, missing context)?
- Identify what additional analysis is required to interpret what is really happening.
2. Generate Hypotheses
Show broad thinking by considering multiple possible explanations instead of jumping to one.
- Scenario A: Positive cause (e.g., growth in a high-value use case)
- Scenario B: Negative cause (e.g., decline in a core use case)
- Scenario C (optional): Data artifact or external factor (e.g., seasonality, reporting quirks)
3. Analyze the Data
Evaluate hypotheses with quantitative and qualitative evidence to find the most likely cause.
- Segment: Break down by user type, geography, plan type, device, or time
- Compare: Look at trends, medians, and distributions
- Validate: Use qualitative inputs such as user research, feedback, or competitor signals
- Tie to business outcomes: retention, ARPU, churn, revenue mix
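As a minimal sketch of the segment-and-compare step, the following Python snippet breaks hypothetical meeting records down by plan type and contrasts the mean with the median. All of the data and segment labels are invented for illustration; in practice these would come from product analytics.

```python
from statistics import mean, median
from collections import defaultdict

# Hypothetical meeting records: (plan_type, participant_count)
meetings = [
    ("consumer", 2), ("consumer", 3), ("consumer", 5), ("consumer", 4),
    ("smb", 6), ("smb", 8), ("smb", 12),
    ("enterprise", 40), ("enterprise", 250), ("enterprise", 500),
]

# Segment: group meeting sizes by plan type
by_segment = defaultdict(list)
for plan, size in meetings:
    by_segment[plan].append(size)

# Compare: mean vs. median, overall and per segment
overall = [size for _, size in meetings]
print(f"overall: mean={mean(overall):.1f}, median={median(overall)}")
for plan, sizes in by_segment.items():
    print(f"{plan}: mean={mean(sizes):.1f}, median={median(sizes)}")
```

In this toy dataset the overall mean is well above 30 while the median meeting has only 7 participants, which is exactly the kind of gap that tells you to dig into the distribution before trusting the headline average.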
4. Execution Strategy
Show decisiveness by planning actions for both opportunity and risk scenarios.
- Track 1 (Opportunity): If positive, double down (features, monetization, adoption)
- Track 2 (Risk): If negative, fix friction and protect the core
- Balanced approach: Keep flexibility by balancing investments across both
5. Define Success Metrics
Ensure accountability with clear, measurable outcomes.
- Metrics for the health of the core use case
- Metrics for capturing upside from opportunities
- Guardrails to avoid regressions
6. Recommendation
Conclude by stating whether the shift is positive or negative and outlining the balanced action you would take.
Framework for Assessing Trade-Offs
1. Define the Trade-Off
Frame it as a “win some, lose some” decision. Choosing one path means gaining X but sacrificing Y. Always consider short-term and long-term effects. Show nuance, acknowledge uncertainty, and explain how you would validate which risk or opportunity matters more.
2. Decision Criteria
Lay out systematic criteria for prioritization, such as:
- Engagement and revenue metrics
- Market conditions
- Company stage and strategy
3. Recommendation Path
Make a clear recommendation, explain why, and note how you would monitor and adjust if conditions change.
Interview Answer
The following is a simulated interview response to demonstrate how to approach a diagnostic question + A/B Testing, followed by a trade-off question.
Diagnostic Question
INTERVIEWER: Imagine you are the PM of Zoom. You realize that the average size of meetings is above 30 people. What do you think, is it good or not?
Assess the Unexpected Metric Outcome
CANDIDATE: I see this as a root cause analysis question. We need to understand why the average meeting size is high and what that implies. Since I do not yet have data, I would begin by formulating hypotheses, then outline the analysis I would conduct, and finally explain how I would proceed depending on the results we obtain.
Hypotheses
An average can be misleading, so I would check the distribution rather than rely on the headline number. I see two main scenarios:
- Scenario A: The average is skewed by a few very large webinars or conferences
- Scenario B: Most meetings are actually 30+ participants
These have very different implications. Scenario A suggests strong enterprise adoption (positive for ARPU). Scenario B would be concerning, as it may signal a decline in consumer and SMB use cases (like one-on-ones or small team syncs). Those smaller meetings drive daily engagement, so a decline could mean churn to competitors like Teams, Meet, or Slack.
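To make the distinction concrete, here is a toy illustration (all numbers invented) of how both scenarios can produce an average above 30 while telling very different stories about user behavior:

```python
from statistics import mean, median

# Scenario A: mostly small meetings; the average is inflated by a few huge webinars
scenario_a = [3, 4, 2, 5, 6, 3, 4, 2, 500, 800]
# Scenario B: most meetings genuinely have around 30+ participants
scenario_b = [28, 35, 40, 31, 33, 29, 36, 42, 30, 34]

for name, sizes in [("A", scenario_a), ("B", scenario_b)]:
    big_share = sum(s >= 30 for s in sizes) / len(sizes)
    print(f"Scenario {name}: mean={mean(sizes):.1f}, "
          f"median={median(sizes)}, share of 30+ meetings={big_share:.0%}")
```

Both scenarios clear the "average above 30" bar, but Scenario A has a single-digit median with only a small share of large meetings, while Scenario B is large meetings across the board. Looking at the median and the share of 30+ meetings, not just the mean, is what separates the two hypotheses.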
Data Analysis
Next, I would analyze the data further. I’d segment meetings by size, frequency, and plan type (consumer, SMB, enterprise). If small-meeting usage appears weak, I would confirm with customer research, such as quick surveys and interviews, to identify any friction points. I would tie all of this back to business metrics, such as churn rates and ARPU shifts by segment.
Execution Strategy
After analyzing the data, I would pursue a dual-track strategy. On one track, capture the upside by continuing to invest in large-meeting and webinar features for the enterprise. On the other, address small-meeting friction for consumers and SMBs, for example, by improving the ease of scheduling, quick-start flows, and integrations with calendars and chat.
Define Success Metrics
To measure the success of my strategies, I would track these metrics: DAUs for <10-person meetings, SMB retention, and enterprise ARPU growth.
Recommendation
So in short, a high average meeting size could either be an opportunity to scale enterprise adoption or a warning sign of churn in the small-meeting core. My role as PM would be to diagnose which scenario is true and then balance execution to protect the core while scaling the upside.
A/B Testing
INTERVIEWER: Suppose users avoid small meetings (<10 people) due to poor UX. Your team ships features to improve this. How would you test success?
CANDIDATE: The success metric, as I defined it, is DAU for <10-person meetings, so I’d set up an A/B test with this as the test metric. Users are randomized into treatment and control groups; the treatment group sees the new small-meeting improvements, while the control group experiences the current experience.
The primary metric is the number of daily active users in meetings of 10 people or fewer. I’d look for a statistically significant lift, say 3% or more, to consider the feature successful.
I’d also track secondary metrics, such as the number of small meetings started, join success rates, and next-week retention rates. These metrics matter because they help explain why DAUs are changing. For example, if DAUs don’t move but we see more small meetings being started and better retention, that tells me users are responding positively and the feature is on the right track but hasn’t yet scaled. In that case, instead of rolling it back, I’d iterate: refine the design, expand the rollout, or add complementary improvements to further push adoption.
For the decision:
- Go if DAUs increase significantly and guardrails stay healthy.
- No-go if DAUs don’t improve and secondary metrics show no positive signal, or if guardrails break.
- Iterate if DAUs are flat but secondary metrics are trending positive, which suggests potential that needs more refinement.
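The significance check behind the go/no-go call can be sketched with a standard two-proportion z-test. Here DAU is framed as the share of enrolled users who were active in a <10-person meeting during the test window; the sample sizes and counts below are invented for illustration.

```python
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """One-sided z-test: is the treatment rate x1/n1 higher than the control rate x2/n2?"""
    p1, p2 = x1 / n1, x2 / n2
    # Pooled rate under the null hypothesis of no difference
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # One-sided p-value from the standard normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical numbers: users active in <10-person meetings per group
z, p = two_proportion_ztest(x1=10_600, n1=50_000, x2=10_000, n2=50_000)
lift = (10_600 / 50_000) / (10_000 / 50_000) - 1
print(f"relative lift={lift:.1%}, z={z:.2f}, p={p:.4f}")
```

With these made-up numbers, a 6% relative lift on 50,000 users per arm is comfortably significant, so the decision rule above would read "go" provided the guardrail metrics also stay healthy.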
Trade-off Question
INTERVIEWER: You have limited resources and cannot fully invest in both directions at once. How do you think about the trade-off between doubling down on enterprise large-meeting features versus investing in the small-meeting experience?
Define Trade-off
CANDIDATE: I see this as a classic short-term versus long-term trade-off. Enterprises drive much higher ARPU, so doubling down on large-meeting features could lead to rapid revenue growth. But most of Zoom’s users are consumers and SMBs who rely on small meetings every day. If we neglect that, we risk churn, lower engagement, and eventually a weaker market position.
Decision Criteria
To decide, I’d look at three things:
- Trend data: Are small-meeting DAUs already declining, or are they stable?
- Revenue dependency: Where is near-term growth expected to come from, enterprise or SMB?
- Competitive landscape: Are competitors like Teams or Meet pulling SMBs away?
Recommendation Path
If I see a weakness in small meetings, I’d prioritize protecting that core, even if it slows enterprise growth, because small meetings are critical to Zoom’s moat: they drive daily engagement and habit formation. If small meetings are stable, I’d lean into enterprise growth, but I’d still maintain a lightweight team focused on small-meeting reliability and ease of use, so we don’t miss early warning signals.