Applying a consistent measurement approach across teams can be contentious. I believe the foundation of any attempt to measure a set of teams in the same manner is to use it as a tool for teams to reflect on how they are working and to discuss with other teams, rather than as any sort of ranking or absolute comparison. So as a senior manager I might use a comparison to ask a question like "why is this team seeing such high variance in delivery?" or "are we taking too much risk in this area?", as opposed to "team X is not as good as team Y".
Velocity is, in my experience, often used to track the rate at which a team can deliver. I believe this is not only a crude mechanism, but one that drives little meaningful discussion between teams, because one team's 8 story points is another team's 3 story points. Velocity is useful for a team making broad-brush estimates of a longer-term effort, and for reflecting on those judgments in retrospectives, but it is rightly very team-specific.
My counter proposals to support cross team discussion are:
- Mean time to production – from idea in a backlog to live in production. This highlights over-production (which is waste) and exposes cycle times.
- Deployment risk – this can be hard to quantify, but something like the age of the oldest commit in a release (with a penalty for old commits) works, or a team's own view of risk, which is harder to calibrate. This measure treats over-produced code as waste and highlights the risk of releasing forgotten code. A team practising continuous delivery/deployment (which I value) would score well; a waterfall team releasing old code would not.
- Variance stabilisation – for teams to really learn from estimate variance (particularly under-estimating), you should see the variance stabilise over time (quality systems like Six Sigma value low variance – "six sigma" being the process capability of three standard deviations either side of the mean).
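As an illustration, all three measures can be computed from data most teams already hold (ticket timestamps, commit dates, estimate-vs-actual records). A minimal sketch, using hypothetical field names and made-up sample data:

```python
from datetime import datetime
from statistics import mean, pstdev

# Hypothetical work items: when the idea entered the backlog, when it went live.
items = [
    {"backlog": datetime(2024, 1, 2), "live": datetime(2024, 1, 20)},
    {"backlog": datetime(2024, 1, 5), "live": datetime(2024, 1, 25)},
    {"backlog": datetime(2024, 1, 10), "live": datetime(2024, 2, 1)},
]

# Mean time to production: average days from backlog entry to live.
mttp_days = mean((i["live"] - i["backlog"]).days for i in items)

# Deployment risk: age (in days) of the oldest commit in the release,
# penalising releases that ship long-forgotten code.
release_date = datetime(2024, 2, 1)
commit_dates = [datetime(2024, 1, 28), datetime(2024, 1, 15), datetime(2023, 12, 1)]
oldest_commit_age = max((release_date - c).days for c in commit_dates)

# Variance stabilisation: spread of estimate error (actual / estimate)
# per quarter -- a learning team should see this fall over time.
error_ratios_q1 = [2.0, 0.5, 1.8, 1.1]
error_ratios_q2 = [1.2, 0.9, 1.1, 1.0]

print(f"mean time to production: {mttp_days} days")
print(f"deployment risk (oldest commit): {oldest_commit_age} days")
print(f"estimate-error spread Q1={pstdev(error_ratios_q1):.2f} Q2={pstdev(error_ratios_q2):.2f}")
```

None of these needs story points at all, which is rather the point: they are comparable across teams because they are measured in days and ratios, not in a team-local currency.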
The above measures reflect the actual questions I ask teams, and am asked by stakeholders, every day. They reflect stakeholders' and customers' needs for new products and features delivered at pace and with quality. There is no point attracting new customers with a new feature while breaking long-term customers' most-used features because we released old, forgotten code and caused a regression.
I think velocity is really for a product owner to judge rough dates, if and when discussing dates matters – timing a marketing spend decision, an automation cost saving, or a key external event like the FA Cup Final.
Responding to customer needs requires the ability to pivot more than the ability to build pace over time in a single direction (velocity is a vector, not a scalar). Can anyone honestly say they have been asked by a stakeholder for a team's velocity, other than in the week after that stakeholder went on an agile course?
Secondly, velocity is a discrete, composite measure which takes many, many sprints to become statistically significant, whereas the measures above, while still discrete, gain significance faster.
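To illustrate why a noisy composite measure converges slowly: the standard error of a mean shrinks only with the square root of the number of observations, so halving your uncertainty about "true" velocity takes four times as many sprints. A rough sketch, with an assumed sprint-to-sprint standard deviation of 8 points:

```python
from math import sqrt

def standard_error(stdev: float, n: int) -> float:
    """Uncertainty in a sample mean after n observations (1/sqrt(n) scaling)."""
    return stdev / sqrt(n)

# Assumed: velocity varies sprint-to-sprint with a stdev of 8 points.
for n_sprints in (4, 16, 64):
    print(f"after {n_sprints:2d} sprints: +/- {standard_error(8.0, n_sprints):.1f} points")
```

Sixty-four two-week sprints is well over two years: by the time the number is trustworthy, the team, the product, and probably the point scale have all changed.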
I have heard the argument that we need one measure to rule them all. Well, imagine an exec team steering a business purely on revenue, without considering costs, profit, tax, interest rates and so on.
Let's take some example team profiles and review the benefit of even discussing velocity:
- Team 1 – new joiners with large variation in estimates; they stabilise and increase velocity through conscious calibration against their first view of a 3-point story.
- Team 2 – established team with low variation; they don't re-calibrate against where they were 3 months ago (as that would not improve predictability or value delivery), even though it would increase velocity.
- Team 3 – drives to a business goal and does not story-point the odd piece of work here and there (a lack of discipline, but it happens in Kanban-style drives for a business goal); velocity stays stable or drops slightly around the goal.
- Team 4 – diligently tracks story-point size with a view to always delivering 10% more story points sprint on sprint, and adjusts estimates to fit that goal (like the bad smell of a perfect burn-down). This team feels an incentive to inflate velocity because they see it measured and discussed.
I think we get more from teams like profiles 2 and 3. They sound a lot like teams delivering huge amounts of value and adjusting to customer need as they go.
Finally, I think it is worth drawing an analogy to the physics of velocity, using the standard equations of motion where:
u = initial velocity
v = final velocity
a = acceleration
t = time
s = displacement
From the equation of motion v² = u² + 2as, you can see that final velocity is largely a function of initial velocity and the distance already travelled. Acceleration is therefore: a = (v² − u²) / 2s
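A toy calculation with a = (v² − u²) / 2s (from the standard relation v² = u² + 2as) makes the point numerically – for the same change in speed, the implied acceleration shrinks as displacement grows:

```python
def acceleration(u: float, v: float, s: float) -> float:
    """Acceleration implied by going from speed u to speed v over displacement s."""
    return (v**2 - u**2) / (2 * s)

# Same speed change (3 -> 5), increasing distance already travelled:
for s in (10.0, 100.0, 1000.0):
    print(f"s={s:6.0f}: a={acceleration(3.0, 5.0, s)}")
```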
So acceleration is reduced for a team that has already travelled a large distance (displacement). I would argue we want teams who have travelled a large distance in the company, as they grok the product and the customers the most. These same well-travelled teams will not be affected or improved by comparing velocity; they will, however, have lower variance, handle risk well, and ultimately deliver ideas to production faster. Put another way, they will operate at pace, be disciplined and smart – which sounds like the perfect team to work with to me…