Normalized Capacity instead of Velocity

Regardless of the technique you use to apply a ‘cone of uncertainty’ approach to predictability, remember that velocity and story points are planning aids, intended to prevent teams from overcommitting to work and to increase the likelihood that they will achieve their goals. The true measure that builds and maintains trust with our customers and stakeholders is being customer-centric and aligning around business value.

Normalize Capacity and not Velocity

Too often I hear of SAFe implementations where teams of teams are still using normalization techniques to calculate their anticipated velocity (aka capacity) and re-applying story point estimates to re-baseline their backlog for each and every PI Planning event after the first. When I reflect on the training I received, I was fortunate to have instructors versed in the intent behind normalized story points. I believe that the intent and application of normalized story points are among the most misunderstood components of the SAFe framework.

Why does SAFe call for the use of normalized story points?

  1. At scale, the need for a higher-level understanding of the relative size of candidate Features (or even Epics) that contribute to the organization's Strategy, in order to understand the costs and investment ranges required
  2. Some teams new to Agile may not have any historical data available
  3. Ensure a pattern to capitalize labor on Lean-Agile implementations as part of the Lean Portfolio

What is Story Point normalization?

All teams aligned with the same Development Value Stream (DVS) in an Agile Release Train (ART) have a common baseline story point definition, which enables a shared basis for economic decision making.

However, it is important that all teams who reference the same baseline share:

  1. A common backlog
  2. A common Definition of Done

A common approach to normalizing involves:

  1. Find a small story that would take about a half-day to develop and a half-day to test and validate, and call it a ‘one’
  2. Estimate every other story relative to that ‘one’

Other approaches to normalization include:

  1. Participatory – periodically review completed stories and allow ARTs to agree on what constitutes an example of a 1, 2, 3, 5, or 8 point story to reference in the future

Observed Anti-Pattern:

Many assume that this implies that 1 story point is equivalent to 1 day of effort (between 1 and 24 hours), or, even worse, 8 hours of effort, which is incorrect. Story points are unitless measurements. While it is true that the normalization is initialized against a small piece of work, something that could likely be completed in a single day, that by no means implies that all items with a relative size of 1 will be completed in a day.

However, this only needs to be done initially; Product Management can then use the historical data generated by the teams for future preliminary relative sizing.

Even so, many teams continue to apply this pattern incorrectly in future sizing sessions. And as teams mature through relentless improvement, velocity will fluctuate, whether from investments made in the architectural runway or from gains seen with the addition of automated testing and/or automated deployments and releases.

How can teams dive deeper into the framework and apply a technique to incorporate historical data balanced against their latest capacity for future Iterations and PI Planning events?

Rather than continuing to normalize a forecast of the velocity that might occur in the future, start to normalize the capacity of the team and weigh it against the team's past historical data (both their actual delivery and actual availability). At the same time, encourage Agile teams to reflect on the actual delivery of the work, to improve future estimation and be poised to enable Lean-Agile accounting for capitalization of labor.

Teams can then continue to normalize their capacity and weigh it against their historical velocity (or throughput).

Example:

An Agile team forms and is preparing for its first PI Planning event with an Agile Release Train. The team consists of 8 team members plus a Product Owner and Scrum Master (a 10-person team).

| PI 1 (Capacity)        | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 | Iteration 5 (IP) |
|------------------------|-------------|-------------|-------------|-------------|------------------|
| Jon                    | 8           | 8           | 8           | 8           | 8                |
| Molly                  | 8           | 8           | 8           | 8           | 8                |
| Lauren                 | 8           | 8           | 8           | 8           | 8                |
| Sarita                 | 8           | 8           | 8           | 8           | 8                |
| Tim                    | 8           | 8           | 8           | 8           | 8                |
| Marcus                 | 8           | 8           | 8           | 8           | 8                |
| Srini                  | 8           | 8           | 8           | 8           | 8                |
| Jose                   | 8           | 8           | 8           | 8           | 8                |
| Amy (Product Owner)    |             |             |             |             |                  |
| Wu (Scrum Master)      |             |             |             |             |                  |
| Totals (Story Points)  | 64          | 64          | 64          | 64          | 64               |

During preparation for PI Planning, as part of backlog estimation, they begin with a normalized sizing process by selecting a small user story from their backlog (something that would take approximately a half day to develop and a half day to test) and assigning it a story point value of 1. Then, in all of their future Iteration Planning events, and any other time the team needs to perform relative sizing of work, they reference that baseline story against all future efforts.

Additionally, as teams establish their starting capacity, they use a similar normalized approach and allocate 8 points per team member at full capacity. For the Agile team above, the starting Iteration capacity with everyone available full time would be 8 points per team member × 8 team members = 64 story points.
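The starting-capacity calculation above can be sketched as a small helper. This is a minimal illustration; the constant and function names are my own, not part of SAFe.

```python
# Assumed convention from the article: 8 points per full-time team
# member per iteration; the Product Owner and Scrum Master are not
# allocated points.
POINTS_PER_FULL_TIME_MEMBER = 8

def iteration_capacity(full_time_members: int) -> int:
    """Starting capacity for one iteration, before availability adjustments."""
    return POINTS_PER_FULL_TIME_MEMBER * full_time_members

# The 8-developer team above:
print(iteration_capacity(8))  # 64
```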

| PI 1          | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 | Iteration 5 (IP) |
|---------------|-------------|-------------|-------------|-------------|------------------|
| Team Capacity | 64          | 64          | 64          | 64          | 64               |

Capacity is the anticipated velocity that teams could effectively deliver during each iteration of a Program Increment (PI), reinforced through individual Iteration Planning.

However, how can we continue a normalized approach after the first PI Planning event, during PI Planning execution and beyond?

If teams continue to monitor their planned capacity against their actual capacity, we can use this empirical data to better predict their future velocity. Actual capacity is the team members' actual availability during each iteration. For example, during Iteration 1, Tim was out sick for 2 days, and Marcus was planning to join the team but was delayed. So we subtract 2 points from the initially calculated capacity to account for Tim, and 8 points to reflect Marcus. The team also records the number of story points that were accepted (meeting their Definition of Done), their actual velocity.
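The availability adjustment can be sketched as follows. I am assuming, as the example implies, that a missed day costs one point of a member's allocation and that a fully absent member costs their whole 8-point allocation; the function name is mine.

```python
# Assumptions (from the example above): 1 point deducted per missed
# day; a member unavailable for the whole iteration costs 8 points.
PLANNED_CAPACITY = 64

def actual_capacity(planned: int, missed_days: int, absent_members: int) -> int:
    """Planned capacity adjusted for the team's actual availability."""
    return planned - missed_days - 8 * absent_members

# Iteration 1: Tim missed 2 days, Marcus had not yet joined.
print(actual_capacity(PLANNED_CAPACITY, missed_days=2, absent_members=1))  # 54
```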

| First PI         | Actual Velocity | Actual Capacity |
|------------------|-----------------|-----------------|
| Iteration 1      | 48              | 54              |
| Iteration 2      |                 |                 |
| Iteration 3      |                 |                 |
| Iteration 4      |                 |                 |
| Iteration 5 (IP) |                 |                 |

We repeat this pattern for Iterations 2 and 3; see the simulated results below:

| First PI         | Actual Velocity | Actual Capacity |
|------------------|-----------------|-----------------|
| Iteration 1      | 48              | 54              |
| Iteration 2      | 45              | 60              |
| Iteration 3      | 52              | 64              |
| Iteration 4      |                 |                 |
| Iteration 5 (IP) |                 |                 |

Now, as we enter Iteration 4, let's apply a calculation to predict the team's future velocity, taking into account the data generated in the past iterations.

Vnew = (Vavg / Cavg) × Cnew

Vnew is the expected velocity for the upcoming Iteration that teams should plan for

Vavg is the average actual velocity of the past 3 to 5 iterations: (48 + 45 + 52) / 3 ≈ 48

Cavg is the average actual capacity of the past 3 to 5 iterations: (54 + 60 + 64) / 3 ≈ 59

Cnew is the forecasted capacity of the team for the upcoming Iteration (the team reports their new capacity for the next Iteration is 61).

So the expected velocity (that is normalized against teams capacity and actual velocity) is calculated as follows:

(48 / 59) × 61 ≈ 49 (rounded down to stay conservative)

So at their Iteration Planning event, teams should not load more than 49 points of effort into the Iteration. The new forecasted velocity (49) reflects the slight increase in capacity while remaining weighted against the team's past average velocity.
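The full forecast can be sketched in a few lines. The function name and the choice to round down are my own; the formula Vnew = (Vavg / Cavg) × Cnew is the one defined above.

```python
# Capacity-weighted velocity forecast, as described above.
def forecast_velocity(actual_velocities, actual_capacities, new_capacity):
    """Vnew = (Vavg / Cavg) * Cnew, using the last 3-5 iterations of data."""
    v_avg = sum(actual_velocities) / len(actual_velocities)
    c_avg = sum(actual_capacities) / len(actual_capacities)
    # Round down so the team plans conservatively rather than overcommits.
    return int(v_avg / c_avg * new_capacity)

# Iterations 1-3 from the tables above; the team reports 61 points of
# capacity for Iteration 4.
print(forecast_velocity([48, 45, 52], [54, 60, 64], 61))  # 49
```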

I encourage you to experiment with this method with your teams and let me know the results. Remember, this is only a planning tool to help increase the likelihood that teams will have enough capacity to meet their Iteration and product goals. It is important to steer leadership toward delivery of business value when looking to maintain predictability and build trust. These techniques are intended only for use by the internal core team.

Evolving beyond the framework

The techniques presented above are temporary. As Agile teams mature, success can be found in placing less emphasis on story points and gravitating toward pure throughput (the count of stories delivered during a PI). This reduces the overhead on Agile teams even further and allows more time to be spent delivering value to, most importantly, the customer (while at the same time not disrupting potential CapEx opportunities or future cost-range predictability).