Calibrating Confidence in RICE Scoring

Make confidence reflect evidence, not enthusiasm.

Map confidence to evidence—production data near 100%, strong qualitative signals lower, guesses much lower. Confidence multiplies through the score, so it should punish untested optimism.
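Because confidence is a multiplier, even a large Reach × Impact product collapses when the evidence is weak. A minimal sketch of the standard RICE formula (the function name and example numbers are illustrative, not from this article):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    `confidence` is a 0-1 multiplier, so shaky evidence
    discounts the whole score, not just one input.
    """
    return (reach * impact * confidence) / effort

# Same Reach, Impact, and Effort; only the evidence quality differs:
validated = rice_score(reach=2000, impact=2, confidence=1.0, effort=4)  # 1000.0
hunch = rice_score(reach=2000, impact=2, confidence=0.2, effort=4)      # 200.0
```

With honest confidence bands, the speculative bet scores a fifth of the validated one despite identical optimism about reach and impact.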

Outcome: Scores where speculative work cannot hide behind 80% defaults.

Synthesized from public RICE references and reviewed for accuracy.

Category: Product · Level: Intermediate · Time: 20–30 minutes per scoring session

Prerequisites

  • Draft Reach and Impact estimates to critique
  • Whatever evidence exists (analytics, research, tickets)

Confidence is the discount on shaky inputs. If Reach is measured and Impact is a hunch, confidence should follow the hunch.

Agree on a simple banding rubric (e.g. 100% / 80% / 50% / 20%) tied to evidence types, score the weakest link in the chain, and challenge anything stuck at “default 80%.” The goal is separation between validated bets and speculation—not decimal precision.
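The banding-plus-weakest-link rule above can be sketched as a small lookup. The evidence labels and band values here are hypothetical examples of such a rubric, not a prescribed taxonomy:

```python
# Hypothetical rubric: evidence type -> confidence band.
BANDS = {
    "production_data": 1.0,      # measured in analytics
    "strong_qualitative": 0.8,   # research, experiments, support tickets
    "informed_guess": 0.5,
    "speculation": 0.2,
}

def chain_confidence(*evidence_types: str) -> float:
    """Score the estimate chain at its weakest link.

    If Reach is measured but Impact is a guess, the overall
    confidence follows the guess, not the measurement.
    """
    return min(BANDS[e] for e in evidence_types)

# Measured Reach, guessed Impact -> confidence follows the hunch:
chain_confidence("production_data", "informed_guess")  # 0.5
```

Taking the minimum rather than the average is what stops a single well-measured input from laundering an otherwise speculative estimate up to the "default 80%" band.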

Frequently Asked Questions

Is confidence about team belief?

No—it is about evidence quality. Excitement without data should still score low.

When do we update confidence?

When new data arrives—ship results, experiments, or clearer analytics.