Selecting the Right North Star Metric: An Essential KPI Guide for Product Managers

This skill teaches you how to evaluate candidate metrics and choose the single metric that best captures the core value customers get from your product — the foundation of the North Star Metric framework and one of the most consequential KPI decisions a product manager will make.

To select your North Star Metric, first identify the core value your product delivers to customers. Then brainstorm candidate metrics that quantify that value exchange. Evaluate each candidate against six criteria: does it measure value delivered, is it leading, is it actionable, is it understandable, is it measurable, and does it correlate with revenue? The metric that scores highest across all criteria becomes your North Star.

Outcome: You'll have a validated, clearly articulated North Star Metric that captures customer value, aligns your team, and serves as the single most important KPI guiding your product strategy.

Synthesized from public framework references and reviewed for accuracy.

Product · Intermediate · 2-4 hours

Prerequisites

  • Understanding of basic product metrics (DAU, retention, conversion, etc.)
  • Clarity on your product's value proposition and target customer segments
  • Familiarity with the North Star Metric framework concept
  • Access to product analytics data or reasonable usage assumptions

Overview

Choosing a North Star Metric is one of the highest-leverage decisions a product manager makes. It determines what your team optimizes for, how you prioritize your roadmap, and whether your organization rallies around customer value or gets lost chasing vanity metrics. Among all the KPIs product teams track, the North Star Metric holds a unique position: it's the one metric that, if it grows, indicates your product is sustainably delivering more value to more customers.

But selecting the right North Star Metric is harder than it sounds. Pick a metric that's too broad (like revenue) and it won't guide day-to-day decisions. Pick one that's too narrow (like page views) and you'll optimize for engagement tricks instead of real value. The goal is to find the metric that sits at the intersection of customer value delivered and long-term business growth — the metric where improving it means customers are genuinely getting more from your product.

This skill walks you through a structured process for brainstorming candidate metrics, stress-testing them against proven evaluation criteria, and building consensus with stakeholders. It's a core competency within the broader North Star Metric framework and directly feeds into sibling skills like identifying input metrics and connecting your North Star to roadmap decisions.

How It Works

The selection process works by narrowing a broad set of candidate metrics through progressively tighter filters. The conceptual model has three layers:

Layer 1: Value Identification. Before you look at any metric, you need crystal clarity on what value your product creates. Not what it does — what outcome it delivers. Spotify doesn't deliver audio playback; it delivers music discovery and enjoyment. Airbnb doesn't deliver listings; it delivers nights of unique accommodation. Your North Star Metric must quantify this value exchange.

Layer 2: Candidate Generation. With value defined, you brainstorm every plausible metric that could represent that value. This is a divergent phase — you want 8-15 candidates. Some will measure consumption (sessions, time spent), some will measure output (tasks completed, items created), and some will measure outcomes (goals achieved, problems solved). The best North Star Metrics tend to measure the moment of value realization — the instant a customer gets what they came for.

Layer 3: Criteria-Based Evaluation. Each candidate is scored against six proven criteria that separate great North Star Metrics from misleading ones. The criteria test whether the metric reflects real customer value, whether it leads (rather than lags) business outcomes, whether teams can actually influence it, and whether it's practically measurable. The metric that passes all six filters most convincingly becomes your North Star.

This approach works because it forces rigor into what's often a gut-feel decision. Among all the KPIs product teams debate, the North Star Metric deserves the most deliberate selection process because every other product decision flows from it.

Step-by-Step

  1. Step 1: Articulate Your Product's Core Value Exchange

    Before evaluating any metric, write down — in plain language — what value your product delivers to customers. Be specific about the outcome, not the feature. Complete this sentence: 'Our product helps [target customer] achieve [specific outcome] by [mechanism].'

    For example, a project management tool might say: 'Our product helps small teams achieve on-time project delivery by making task coordination effortless.' A food delivery app: 'Our product helps busy professionals enjoy restaurant meals at home by removing the friction of ordering and logistics.'

    This value statement becomes your filter for everything that follows. If a candidate metric doesn't connect to this value exchange, it's the wrong North Star — no matter how easy it is to measure.

    Tip: Interview 5-10 recent customers and ask: 'What would you lose if our product disappeared tomorrow?' Their answers reveal the real value exchange better than any internal brainstorm.

  2. Step 2: Brainstorm 8-15 Candidate Metrics

    With your value statement clear, generate a wide list of metrics that could quantify that value. Don't filter yet — this is a divergent phase. Include metrics across several categories:

    • Consumption metrics: sessions, time spent, features used
    • Transaction metrics: purchases, bookings, sends
    • Output metrics: items created, tasks completed, messages sent
    • Outcome metrics: goals achieved, problems resolved, milestones hit
    • Adoption metrics: weekly active users, activated accounts

    For each metric, write it with a specific unit and time frame. Not 'engagement' but 'weekly active projects with at least one completed task.' Not 'usage' but 'monthly meals ordered per active customer.'

    Include your team — engineers, designers, data analysts, and customer-facing roles often surface metrics that product managers miss. The more diverse your brainstorm participants, the better your candidate list.

    Tip: Include at least 2-3 metrics that feel uncomfortably close to measuring customer outcomes rather than product usage. These are often the strongest candidates.

  3. Step 3: Apply the Six Evaluation Criteria

    Score each candidate metric against six criteria on a 1-5 scale. These criteria are the core of the selection process:

    1. Value Alignment: Does improving this metric mean customers are getting more value? (Not just using more features, but achieving better outcomes.)
    2. Leading Indicator: Does this metric move before revenue and retention improve? A lagging metric like revenue tells you what already happened. A leading metric predicts what will happen.
    3. Actionability: Can your product team directly influence this metric through product changes, experiments, and improvements?
    4. Understandability: Can every person in your company — from engineering to sales to the CEO — immediately understand what this metric means and why it matters?
    5. Measurability: Can you reliably track this metric with your current (or near-term) data infrastructure? A perfect metric you can't measure is useless.
    6. Revenue Correlation: Is there evidence (or strong logical reasoning) that growth in this metric will eventually drive sustainable business growth?

    Create a simple scoring matrix. Be honest — a metric that scores 5 on measurability but 2 on value alignment is a trap.

    Tip: Have different team members score independently before comparing. Disagreements on scores often reveal deeper disagreements about product strategy that need to be resolved.
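To make the scoring concrete, here is a minimal sketch of the kind of scoring matrix the step describes. The candidate names and scores below are fabricated for illustration; the six criteria match the list above.

```python
# Hypothetical scoring matrix: each candidate is scored 1-5 on the six
# criteria, in this fixed order. Scores are illustrative, not real data.
CRITERIA = [
    "value_alignment", "leading_indicator", "actionability",
    "understandability", "measurability", "revenue_correlation",
]

candidates = {
    "weekly active users":                         [2, 3, 4, 5, 5, 3],
    "tasks completed per week":                    [3, 4, 5, 4, 5, 3],
    "weekly active projects w/ 1+ completed task": [5, 4, 4, 4, 4, 4],
}

def total(scores):
    """Sum the six criterion scores for one candidate."""
    return sum(scores)

# Rank candidates from highest to lowest total score.
ranked = sorted(candidates.items(), key=lambda kv: total(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{total(scores):>2}  {name}")
```

A plain sum treats all six criteria as equally important; if your team weights value alignment more heavily, replace `total` with a weighted sum — but keep the weights visible so the trade-off stays explicit.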

  4. Step 4: Stress-Test Your Top 2-3 Candidates

    Take the 2-3 highest-scoring metrics and subject them to adversarial stress tests. For each candidate, ask:

    • The Perverse Incentive Test: 'If we optimized ruthlessly for this metric, what bad behavior could it encourage?' For example, 'time spent in app' could incentivize addictive dark patterns rather than efficient value delivery.
    • The Ceiling Test: 'Is there a natural ceiling that would make this metric plateau even as the business grows?' Monthly active users, for instance, can plateau in mature markets.
    • The Decomposition Test: 'Can we break this metric into input metrics that different teams can own?' If a metric can't be decomposed, it's hard to make actionable across the organization.
    • The Gut Check: 'If this metric doubled next quarter, would we be confident the business is healthier?' If the answer is ambiguous, it's the wrong metric.

This step often eliminates candidates that looked strong on paper but fail under scrutiny. It's the difference between the many KPIs product teams aspire to track and the one metric that actually drives the right behavior.

    Tip: The perverse incentive test is the most important. If your metric can be gamed in ways that hurt customers, it will be — even unintentionally.

  5. Step 5: Validate with Historical Data

    Before committing, backtest your top candidate against historical data. Look for two signals:

    1. Correlation with retention: Cohorts that scored higher on your candidate metric should show better retention over time. If users who had more 'weekly completed projects' retain at 2x the rate of those who didn't, that's strong validation.
    2. Correlation with revenue: Periods where your candidate metric grew should roughly correspond with revenue growth (with some lag). If the metric grew but revenue didn't follow, the metric may not capture real value.

    If you're at an early stage without much data, do qualitative validation instead. Interview your most engaged customers and your churned customers. Map their behavior to your candidate metric. Do power users naturally score high on it? Do churned users score low?

    This step converts your hypothesis into evidence. Skip it, and you risk building your entire strategy on an unvalidated assumption.

    Tip: Even directional data is valuable here. You don't need statistical rigor — you need to confirm that the metric moves in the direction you'd expect when customers are getting real value.
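The directional backtest the step describes can be sketched in a few lines: split users by how they scored on the candidate metric in their first week, then compare retention between the two groups. The records and threshold below are fabricated purely to illustrate the shape of the analysis.

```python
# Directional backtest: compare 90-day retention between users who scored
# high vs. low on the candidate metric in week 1. Sample data is fabricated.
users = [
    # (weekly_completed_projects_in_week_1, retained_at_90_days)
    (4, True), (3, True), (5, True), (0, False), (1, False),
    (2, True), (0, False), (3, False), (1, True), (0, False),
]

THRESHOLD = 2  # hypothetical cut between "high" and "low" scorers

def retention(group):
    """Fraction of a group retained at 90 days."""
    return sum(retained for _, retained in group) / len(group) if group else 0.0

high = [u for u in users if u[0] >= THRESHOLD]
low = [u for u in users if u[0] < THRESHOLD]

print(f"high scorers retain at {retention(high):.0%}")
print(f"low scorers retain at {retention(low):.0%}")
# A large gap (e.g. 2x or more) is directional evidence that the
# candidate metric tracks real customer value.
```

As the tip notes, this doesn't need statistical rigor — a visibly large retention gap between high and low scorers is enough to confirm (or kill) the hypothesis.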

  6. Step 6: Define the Metric Precisely

    Once you've selected your North Star Metric, write a precise definition that eliminates ambiguity. Your definition should include:

    • Exact formula: How is it calculated? What's the numerator and denominator (if it's a ratio)?
    • Time frame: Is it measured daily, weekly, monthly?
    • Inclusion/exclusion criteria: Which users count? Do free trial users count? What about internal accounts?
    • Data source: Where does the data come from? Which event or table?

    For example, instead of 'weekly active projects,' define it as: 'The number of unique projects that had at least one task marked as complete by a non-admin team member in a rolling 7-day window, excluding internal test accounts, sourced from the task_completed event in our analytics pipeline.'

    This precision matters because different interpretations of the same metric lead to conflicting dashboards, confused teams, and eroded trust in the number.

    Tip: Write the SQL query (or pseudocode) that calculates the metric. If you can't write the query, the definition isn't precise enough.
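Following that tip, here is a runnable sketch of the example definition from this step, expressed as a SQL query against an in-memory SQLite database. The schema, column names, and sample rows are assumptions invented for illustration — your real `task_completed` event will look different.

```python
# Sketch: the 'weekly active projects' definition as an executable query.
# Schema and data are hypothetical, standing in for a real analytics table.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task_completed (
    project_id INTEGER,
    completed_by_role TEXT,      -- 'member' or 'admin'
    is_internal_account INTEGER, -- 1 = internal test account
    completed_at TEXT            -- ISO-8601 timestamp
);
""")

now = datetime(2024, 6, 15)
rows = [
    (1, "member", 0, (now - timedelta(days=2)).isoformat()),   # counts
    (1, "member", 0, (now - timedelta(days=3)).isoformat()),   # same project
    (2, "admin",  0, (now - timedelta(days=1)).isoformat()),   # admin: excluded
    (3, "member", 1, (now - timedelta(days=1)).isoformat()),   # internal: excluded
    (4, "member", 0, (now - timedelta(days=10)).isoformat()),  # outside window
]
conn.executemany("INSERT INTO task_completed VALUES (?, ?, ?, ?)", rows)

# Unique projects with >=1 task completed by a non-admin, non-internal
# user in the rolling 7-day window.
query = """
SELECT COUNT(DISTINCT project_id)
FROM task_completed
WHERE completed_by_role != 'admin'
  AND is_internal_account = 0
  AND completed_at >= :window_start
"""
window_start = (now - timedelta(days=7)).isoformat()
(weekly_active_projects,) = conn.execute(
    query, {"window_start": window_start}
).fetchone()
print(weekly_active_projects)  # only project 1 qualifies -> 1
```

If you can't write this query against your actual event tables, the definition still has ambiguity left in it — exactly the failure mode the tip warns about.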

  7. Step 7: Build Organizational Buy-In

    A North Star Metric only works if the organization rallies around it. Present your selection to leadership and cross-functional partners with a structured narrative:

    1. Start with the customer value statement (Step 1)
    2. Show the candidate metrics you evaluated and the criteria you used (Steps 2-3)
    3. Explain why the winner passed the stress tests that others failed (Step 4)
    4. Present the data validation (Step 5)
    5. Share the precise definition (Step 6)

    Anticipate objections. Sales may worry the metric doesn't map to revenue targets. Engineering may question measurability. Finance may want a metric closer to the P&L. Address each with evidence, not assertions.

    The goal isn't unanimous enthusiasm — it's informed commitment. People don't need to love the metric; they need to understand why it was chosen and agree to orient their work around it. This alignment is what makes the North Star Metric framework powerful — and it begins with a selection process rigorous enough to earn trust.

    Tip: Frame objections as features: 'Yes, this metric doesn't directly measure revenue — that's by design. Revenue is a lagging indicator of the value this metric captures.'

Examples

Example: Selecting a North Star Metric for a B2B Project Management Tool

A product manager at a mid-stage B2B project management SaaS (think Asana or Monday.com competitor) needs to select a North Star Metric. The product serves teams of 5-50 people managing cross-functional projects. KPIs currently tracked at this company include DAU, number of tasks created, paid seats, and NPS.

Step 1 — Value Statement: 'Our product helps cross-functional teams deliver projects on time and on scope by centralizing task management, communication, and status visibility.'

Step 2 — Candidates: The team brainstorms 12 metrics including: weekly active users, tasks created per week, tasks completed per week, projects with status updates in last 7 days, teams with 3+ active members weekly, average project completion rate, weekly comments per project, paid seats, NPS, time-to-first-project, projects completed on deadline, and weekly active projects with at least one completed task.

Step 3 — Scoring: 'Weekly active projects with at least one completed task' scores highest. It measures value alignment (projects moving forward = value delivered), is a leading indicator (project activity predicts retention 4 weeks out), is actionable (product can improve task workflows, notifications, templates), is understandable ('active projects' is intuitive), is measurable (task_completed events exist), and correlates with revenue (teams with more active projects convert and expand seats).

Step 4 — Stress Testing: The perverse incentive test reveals a minor risk — teams could game it by completing trivial tasks. The team mitigates this by noting that the metric is 'projects with completed tasks,' not 'total tasks completed,' so the incentive is to keep projects moving, not inflate task counts. The decomposition test shows clear input metrics: new project creation rate, task creation rate per project, task completion rate, and team member activation rate.

Step 5 — Validation: Historical data shows that teams with 3+ active projects in week 1 retain at 72% after 90 days vs. 31% for teams with 0-1. Revenue correlation holds: quarters where this metric grew 10%+ saw ARR grow 8-12% the following quarter.

Step 6 — Definition: 'Count of unique projects where at least one task was marked complete by any team member (excluding account admins) in a rolling 7-day window. Excludes template projects and internal QA accounts. Source: task_completed event joined with projects table.'

Result: The team selects 'Weekly Active Projects' as their North Star Metric, defined precisely and validated with data.

Example: A Consumer Fitness App Choosing Between Engagement and Outcome Metrics

A consumer fitness app (similar to Strava or Peloton) is debating between 'weekly workouts completed' and 'weekly active minutes' as its North Star Metric. The product manager has narrowed it down to these two finalists after the initial brainstorm and scoring.

Stress-testing 'Weekly Workouts Completed': This metric is clean and understandable. But the perverse incentive test raises a flag: it incentivizes short, low-effort workouts. A user who logs five 2-minute stretches looks identical to one who completed five intense 45-minute sessions. The ceiling test is fine — the metric scales with user base growth.

Stress-testing 'Weekly Active Minutes': This metric better captures intensity and depth of engagement. However, it has its own perverse incentive: it rewards long, slow activities over efficient high-intensity workouts. A 60-minute casual walk scores higher than a brutal 20-minute HIIT session, even though the HIIT user may be getting more value.

Resolution: The team realizes both metrics are flawed in isolation. They reframe the value statement: 'We help people build a consistent exercise habit.' Consistency, not duration or count, is the core value. They create a hybrid: 'Weekly users who completed 3+ workouts of any type' — measuring habit formation rather than volume. This passes all stress tests: it can't be gamed easily (you need three separate days), it correlates with 6-month retention (validated at 4.2x retention lift), and every team can influence it (content team creates varied short workouts, product team builds streak reminders, growth team optimizes re-engagement).
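The hybrid metric hinges on counting distinct workout days rather than raw workout events — that's what makes it hard to game. A minimal sketch of that calculation, with fabricated log entries and assumed field names:

```python
# Sketch of the hybrid habit metric: users with workouts on 3+ distinct
# days in the week. Log entries and names are fabricated for illustration.
from collections import defaultdict

workout_log = [
    # (user_id, iso_date_of_workout)
    ("ana", "2024-06-10"), ("ana", "2024-06-11"), ("ana", "2024-06-13"),
    ("ben", "2024-06-10"), ("ben", "2024-06-10"), ("ben", "2024-06-10"),
    ("cam", "2024-06-12"),
]

# Collect the set of distinct workout days per user; duplicate same-day
# workouts collapse into one entry, so logging three trivial sessions on
# one day does not count as three.
days_per_user = defaultdict(set)
for user, day in workout_log:
    days_per_user[user].add(day)

habitual_users = {u for u, days in days_per_user.items() if len(days) >= 3}
print(habitual_users)  # ana qualifies; ben's same-day repeats do not
```

Note how "ben" logs three workouts but only one day, so he doesn't count — the metric measures the habit, not the volume, which is exactly the reframing the team landed on.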

Takeaway: When your top candidates both fail stress tests, it often means you haven't precisely articulated the value. Revisit Step 1.

Best Practices

  • Choose a metric that measures value delivered to customers, not value extracted from them. Revenue is an extraction metric; it follows value delivery but doesn't measure it. The best North Star Metrics are ones where growth means customers are genuinely better off.

  • Favor rate or frequency metrics over raw counts for mature products. 'Weekly meals ordered per active customer' is more actionable than 'total meals ordered' because it normalizes for user base growth and reveals whether you're delivering more value per person.

  • Revisit your North Star Metric selection when your product undergoes a fundamental strategy shift, not on a fixed schedule. A metric chosen for product-market fit exploration may not serve a scaling-stage product — this connects directly to the skill of evolving your North Star across growth stages.

  • Document the 'runner-up' metrics and why they were rejected. This prevents the organization from relitigating the decision every quarter and provides useful context when conditions change enough to warrant revisiting the selection.

  • Ensure your North Star Metric can be decomposed into 3-5 input metrics that different teams can influence independently. If the metric can't be broken down, it becomes a spectator sport rather than an actionable guide — this is where identifying input metrics becomes essential.

  • Test your metric's understandability by explaining it to someone outside your product team — a new hire, a board member, a customer support agent. If they can't immediately grasp what it measures and why it matters, simplify it.

Common Mistakes

Choosing revenue or a revenue-adjacent metric (like MRR or ARPU) as the North Star Metric.

Correction

Revenue is a lagging output of delivering customer value — it tells you what already happened, not what's about to happen. Choose a metric that captures the value exchange that drives revenue. Revenue growth should be a consequence of your North Star growing, not the North Star itself. Among all the KPIs product teams track, revenue is critical but belongs as a business outcome metric, not a North Star.

Selecting a vanity metric like total registered users, page views, or app downloads because it's easy to measure and always goes up.

Correction

Vanity metrics never go down (unless something is catastrophically wrong), which means they can't signal problems. Your North Star Metric should be capable of declining when you stop delivering value. If your metric only ever increases, it's measuring accumulation, not value.

Trying to combine multiple metrics into a composite score or index to avoid making a hard choice.

Correction

Composite metrics (like a 'health score' that blends engagement, retention, and NPS) obscure more than they reveal. When the composite moves, nobody knows which component drove it or what to do about it. The power of the North Star framework is the discipline of choosing ONE metric. If you can't choose, you haven't clarified your value proposition.

Selecting the metric in a small room with only product and data teams, then announcing it to the rest of the organization.

Correction

Cross-functional input during selection prevents political resistance later. Include engineering, design, marketing, sales, and customer success in at least the brainstorming and stress-testing phases. People support what they help create. The skill of aligning cross-functional teams starts during selection, not after.

Changing the North Star Metric every quarter based on shifting priorities or executive preferences.

Correction

A North Star Metric should be stable for 12-18 months minimum. Frequent changes signal strategic confusion and prevent teams from building the intuition, dashboards, and experiments needed to actually move the metric. If you feel the urge to change it, first ask whether the problem is the metric or the strategy.

Frequently Asked Questions

What are the most important KPIs product teams should consider for a North Star Metric?

The best North Star Metric candidates are KPIs product teams don't always think of first — they measure value delivered to customers, not business outputs. Look at metrics that capture the moment customers achieve their goal: meals delivered, projects completed, messages that got replies, searches that led to purchases. These 'value realization' metrics tend to be leading indicators of both retention and revenue.

Can a product have more than one North Star Metric?

No. The entire power of the North Star Metric framework lies in the discipline of choosing one. Multiple North Stars create competing priorities, split focus, and dilute alignment. If you feel you need two, you likely have two distinct products or two customer segments that need separate strategies. Your single North Star should be supported by 3-5 input metrics that provide nuance.

How is a North Star Metric different from OKRs or regular KPIs?

OKRs change quarterly and define time-bound objectives. The regular KPIs product teams use cover many aspects of the business — acquisition, engagement, monetization, support load. The North Star Metric is the single, stable metric (lasting 12-18+ months) that represents the core customer value your product delivers. OKRs and KPIs should ladder up to or support the North Star, not replace it.

How often should I change my North Star Metric?

Rarely. A well-chosen North Star Metric should remain stable for 12-18 months minimum. Change it only when your product undergoes a fundamental strategic shift — entering a new market, pivoting your value proposition, or transitioning between growth stages. For guidance on when and how to evolve it, see the skill on [evolving your North Star across growth stages](/skills/evolving-north-star-across-growth-stages).

What if my North Star Metric is hard to measure with our current data infrastructure?

If the ideal metric is difficult to measure today, use a proxy metric that's highly correlated and currently measurable, while investing in the instrumentation to track the real metric. Document the proxy relationship explicitly. A proxy that's 80% correlated with the ideal metric and measured reliably today is more useful than a perfect metric you can't track for six months.

How do I know if I've chosen the wrong North Star Metric?

Three warning signs: (1) The metric is growing but retention or revenue aren't following within a reasonable lag period. (2) Teams are making product decisions that improve the metric but clearly hurt the customer experience. (3) Cross-functional teams frequently debate whether the metric reflects real progress. If you see these signals, revisit your stress tests and validation data.