The RICE Framework for Feature Prioritization
The RICE framework scores initiatives by Reach (users affected), Impact (how much it moves a metric), Confidence (certainty of estimates), and Effort (work required). Score = (Reach × Impact × Confidence) ÷ Effort. Higher scores mean higher leverage on the roadmap.
RICE replaces endless prioritization debates with a single comparable score per initiative. Teams align on the four inputs, then rank the backlog by the resulting score.
It works best when scoring is a repeatable ritual: same time horizon, same Impact scale, and honest Confidence—so the list stays stable enough to plan from.
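The formula is simple enough to express directly. Here is a minimal sketch in Python; the initiative numbers are hypothetical and only illustrate the arithmetic:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    if effort <= 0:
        raise ValueError("Effort must be positive")
    return (reach * impact * confidence) / effort

# Hypothetical initiative: reaches 2000 users per quarter,
# Impact 2 on the shared scale, 80% Confidence, 4 person-months of Effort.
print(rice_score(2000, 2, 0.8, 4))  # → 800.0
```

Keeping the calculation in one shared function helps enforce the "same scale, same horizon" discipline the framework depends on.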
How It Works
List candidate initiatives
One row per distinct, shippable outcome; dedupe and align scope before scoring.
Estimate Reach and Impact
Same time window for everyone; Impact on the shared scale; document units.
Set Confidence and Effort
Confidence from evidence quality; Effort as total person-months including design and QA.
Compute and rank
Apply the formula, sort descending, then discuss ties with strategy—not only the number.
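The four steps above can be sketched end to end: score each row, sort descending, then read the ranked list. The initiative names and numbers below are invented for illustration:

```python
# Hypothetical backlog, one row per distinct, shippable outcome.
initiatives = [
    {"name": "Onboarding revamp", "reach": 5000, "impact": 1, "confidence": 0.8, "effort": 5},
    {"name": "Power-user API",    "reach": 800,  "impact": 3, "confidence": 0.5, "effort": 3},
    {"name": "Search filters",    "reach": 2000, "impact": 2, "confidence": 1.0, "effort": 4},
]

# Step: compute the score for every row with the same formula.
for item in initiatives:
    item["rice"] = (item["reach"] * item["impact"] * item["confidence"]) / item["effort"]

# Step: sort descending; near-ties are then discussed against strategy.
ranked = sorted(initiatives, key=lambda i: i["rice"], reverse=True)
for item in ranked:
    print(f'{item["name"]}: {item["rice"]:.0f}')
```

Note that "Search filters" outranks "Power-user API" despite lower Impact, because broad Reach and full Confidence compound in the numerator.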
When to Use
- You need a shared way to compare unlike initiatives on one backlog
- Stakeholders disagree and you want explicit assumptions
- The backlog is large enough that intuition alone misses quick wins
When Not to Use
- Hard compliance or legal deadlines with no tradeoff
- You have no credible way to estimate Reach
- The team rewrites strategy weekly and maintaining scores adds no value
Examples
Checkout vs referral
High Reach + moderate Impact can beat niche high-Impact work—RICE makes that visible instead of debating vibes.
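The arithmetic behind that claim is worth making explicit. With hypothetical numbers (all invented for illustration), a broad-reach checkout fix can outscore a high-impact referral feature:

```python
# Checkout fix: broad Reach (9000 users), moderate Impact (1),
# solid Confidence (0.8), 6 person-months.
checkout = (9000 * 1 * 0.8) / 6   # → 1200.0

# Referral feature: niche Reach (600 users), high Impact (3),
# strong Confidence (0.9), 2 person-months.
referral = (600 * 3 * 0.9) / 2    # → 810.0

print(checkout > referral)  # → True
```

The referral feature's 3x Impact cannot offset a 15x Reach gap; RICE surfaces that tradeoff as numbers rather than opinions.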
Skills in This Method
Calculating RICE Scores
Apply the formula and rank a backlog in one pass.
Estimating Reach
Turn analytics and proxies into comparable Reach numbers.
Calibrating Confidence
Calibrate confidence so weak evidence does not masquerade as certainty.
Mapping Effort to Person-Months
Convert estimates into consistent person-months for the denominator.