Is Using RICE Score Actually Increasing Productivity?
RICE is best in the beginning, when you don’t know where to start. Learn why RICE isn’t enough for advanced prioritization and how you might be using RICE wrong.
When deciding your product’s future, you have to collect data and opinions, analyze them, and estimate issues and ideas. Unless, of course, you want your prioritization to be a gut-feeling show in a zoo full of HiPPOs, RHiNOs, and ZEbRAs.
A good prioritization framework will help you structure the data and stay consistent. Deriving your own takes months of careful prioritization before you finally understand how to spot small, rare problems. Yet you have to define your priorities right from the start: a vicious circle.
Thanks to the community, that circle is easy to break. There are dozens of frameworks proven by hundreds of great teams. In this article, we discuss one of the most popular, the RICE Score: why most teams use it incorrectly, when it really works, and when it becomes useless.
RICE Score Definition
RICE is an acronym for the four factors used to evaluate project ideas. It was developed by Sean McBride when he was a PM at Intercom. It’s a simple yet effective method for balancing the value a feature will bring against the effort it requires, measured against a single objective. RICE stands for Reach, Impact, Confidence, and Effort.
Reach
Reach ranks your ideas by the number of leads and users they will affect. If it’s a registration page, it touches every potential customer. If it’s an in-depth settings tweak, probably only loyal users will notice.
- Answers the question: How many people will this feature affect within a defined time period?
- Originally measured: Number of people/events per time period (any number).
Impact
Impact ranks your ideas by the amount of influence on the objective. You have to identify one main objective to aim for a definite goal. If you score Impact like “Idea A increases conversion rate—2,” “Idea B increases adoption—3,” and “Idea C maximizes delight—2,” such scores make no sense.
- Answers the question: How much will this feature impact the objective when a customer encounters it?
- Originally measured: 0.25—Minimal; 0.5—Low; 1—Medium; 2—High; 3—Massive.
Confidence
Confidence is used to support or temper your estimates. You can only be confident if you’ve done the research and have data to back it up. Confidence scores make the whole evaluation more data-driven and less emotional.
- Answers the question: How confident are you in your reach and impact estimates?
- Originally measured: 20%—Moonshot; 50%—Low Confidence; 80%—Medium Confidence; 100%—High Confidence.
Effort
Effort ranks your ideas by the amount of time their implementation requires. It completes the prioritization with the Value/Effort balance and helps you surface the Easy Wins.
- Answers the question: How much time will the feature require from the whole team: product, design, and engineering?
- Originally measured: Number of “person-months” (any number).
RICE Formula
To get the RICE score, multiply Reach, Impact, and Confidence, then divide the result by Effort. The total tells you how much you will influence the objective per unit of time worked. Thus, you can focus only on significant tasks, understanding whom you will impact, why, how, and how soon.
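The formula is simple enough to sketch in a few lines of Python. The idea names, numbers, and the `Idea` class below are illustrative, not from the original article; only the formula itself is RICE as defined above.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: float       # people or events per time period (any number)
    impact: float      # 0.25, 0.5, 1, 2, or 3
    confidence: float  # 0.2, 0.5, 0.8, or 1.0
    effort: float      # person-months (must be > 0)

    @property
    def rice(self) -> float:
        # RICE = (Reach * Impact * Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Hypothetical backlog items
ideas = [
    Idea("Redesign registration page", reach=5000, impact=2, confidence=0.8, effort=4),
    Idea("Advanced settings panel", reach=300, impact=1, confidence=0.5, effort=2),
]

# Highest score first: the most objective impact per person-month of work
for idea in sorted(ideas, key=lambda i: i.rice, reverse=True):
    print(f"{idea.name}: {idea.rice:.0f}")
```

Sorting by the score surfaces the Easy Wins automatically: a modest idea with tiny effort can outrank a big idea that ties up the team for months.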
RICE Score Usage
As a prioritization framework, RICE is best for estimating the value of projects, features, user stories, ideas, and hypotheses, and it is mostly used by product or project managers.
For the most part, PMs use standard RICE to prioritize on their own. Then they go to the team and announce its tasks for the next quarter or sprint. That is a terribly ineffective approach because it is:
- Slow. To avoid shots in the dark, you have to gather a lot of information, so you spend days downloading and analyzing statistics from the tools and services your team uses.
- Vague. More often than not, the information you have turns out to be insufficient, so you distract your teammates from work to collect extra data and opinions. Sometimes you get nothing, wake up that gut feeling, and make wild guesses.
- Dictatorial. Data-driven or not, you make all the decisions yourself and impose your conclusions on others. Your team feels disregarded and grows unmotivated over time: you don’t give them a say in the future of the product they build and nurture. Plus, you feel overly responsible for the decisions and get stressed out.
How to Use
To fix this, you have to start prioritizing collaboratively. Involving your teammates solves all three problems: together you estimate faster and more accurately. Moreover, you break down silos, achieve alignment, and create shared understanding.
First, divide the criteria between the teams to get firm estimates. Think: whom would you go to for advice if you were evaluating the four criteria alone? For example, Sales and Support are probably best placed to estimate Reach; product managers, Impact; engineers and designers, Effort. The one criterion that everybody should evaluate is Confidence, because even specialists may lack information and have doubts. And it’s best to collect all opinions, whether from a newbie or an expert; their average score is the most accurate estimate you will ever get.
Then, evaluate asynchronously. A team’s average estimate is most precise when people don’t deliberate beforehand, because each person preserves their own unique view instead of echoing everyone else’s thoughts. Don’t meet to discuss scores before they are assigned. Let people estimate on their own, as in planning poker, but outside the meeting room, whenever it’s convenient for them.
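The aggregation step is straightforward to sketch. In this illustration (the numbers, role assignments, and `scores` structure are all hypothetical, not from the article), role-owned criteria come from the relevant team, Confidence comes from everyone, and each criterion is averaged before the score is computed:

```python
from statistics import mean

# Raw scores collected asynchronously for one idea, per criterion.
# Reach, Impact, and Effort are owned by specific roles;
# Confidence is scored by the whole team.
scores = {
    "reach": [4000, 5500],               # Sales and Support estimates
    "impact": [2],                        # Product
    "effort": [3, 4, 5],                  # Engineers and Designers, person-months
    "confidence": [0.8, 0.5, 1.0, 0.8],   # everybody
}

# Average each criterion across its independent estimators
reach = mean(scores["reach"])
impact = mean(scores["impact"])
effort = mean(scores["effort"])
confidence = mean(scores["confidence"])

rice = reach * impact * confidence / effort
print(f"RICE = {rice:.0f}")
```

Because every estimate is entered before anyone sees the others, the average reflects genuinely independent judgments rather than the opinion of the loudest person in the room.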
As a result:
- You’ve saved time on gathering and analyzing tons of information;
- The whole team has full context on every idea and issue;
- The team knows exactly what its goal is and has taken part in deciding how to achieve it;
- You’ve got an accurate list of priorities.
As a bonus, you’ve eliminated at least one unnecessary meeting and saved your team’s time and nerves.
When RICE Works and When It Doesn’t
RICE Score is perfect for kickstarting prioritization when you’ve just launched your product or are beginning your prioritization journey. You need to decide on features that will give you quick wins and deliver your customers’ core value as fast as possible. RICE saves you a whole load of time on inventing sensible criteria and enables swift decision-making.
And keep in mind: adjusting the criteria over time is essential when using the RICE score.
Again, RICE is great in the beginning, but even Intercom stopped using it over time. After a few prioritization cycles, you will start noticing that RICE isn’t enough: it only lets you keep one objective in mind, and your product is surely more complex than that. You can’t suddenly ignore every goal but one.
Furthermore, the criteria themselves are rather vague, and you will have difficulty evaluating them because you will interpret them differently from idea to idea. You will want to change the descriptions, change the numbers used for scoring, rename criteria, and add more specific ones.
We started with RICE, but over time we customized the criteria and complemented them with criteria from AARRR, our Business Pain Point, and our North Star Metric. We tried having the whole cross-functional team evaluate every criterion, and then functional teams, distributing the criteria among roles in Ducalis. And we still change and update the way we prioritize.
Key Takeaways
- RICE is great for prioritizing projects, features, user stories, ideas, and hypotheses.
- Many are smarter than the few, and prioritization must be teamwork.
- RICE is best in the beginning, when you don’t know where to start.
- Prioritization must be a time-saver, and it’s better to use specialized tools.
- A prioritization mechanism isn’t something you set up once and for all. It must keep up with the pace at which your product develops and grows.
All in all: the less time you spend on setting up the evaluation process and on the evaluation itself, the more time you can devote to delivering core value to your customers and building a strong team culture.
Published at DZone with permission of Natasha Beseda. See the original article here.