We all know we should focus on outcomes over outputs. But product outcomes don’t come from thin air.
Outputs lead us there.
You can’t have meaningful outcomes without producing sensible outputs first. So although maximizing outputs doesn’t guarantee better outcomes, it does help a lot.
A smooth delivery leads to better agility and more experiments, which leads to more learning and a better product that generates better outcomes.
But what is good delivery, anyway? And how do you even measure that?
Most people treat velocity (number of story points delivered per sprint) as their primary delivery metric. Although there’s nothing wrong with that, velocity itself gives us a limited amount of information.
Let’s take a look at five other delivery metrics that are worth monitoring.
Cycle time is the amount of time that elapses from the moment you start working on a work item until its completion (based on the definition of done).
To make it more tangible, let’s say your work hours are 9 a.m. to 5 p.m., work on an item started on Wednesday at 10 a.m., and the item was completed the following Monday at 11 a.m.
Since cycle time measures the time from the start of the actual work, you measure the time between Wednesday 10 a.m. and Monday 11 a.m.: 7 hours on Wednesday, 8 on Thursday, 8 on Friday, and 2 on Monday, for a cycle time of 25 working hours.
Cycle time = time elapsed between the start of the work and its completion
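If your tooling doesn’t report cycle time directly, it’s straightforward to compute. Here’s a minimal sketch that assumes 9 a.m. to 5 p.m. working days, Monday through Friday; the calendar dates are hypothetical and mirror the example above:

```python
from datetime import datetime, timedelta

WORKDAY_START = 9   # 9 a.m.
WORKDAY_END = 17    # 5 p.m.

def cycle_time_in_working_hours(start: datetime, end: datetime) -> float:
    """Count working hours (9 a.m. to 5 p.m., Monday to Friday) between start and completion."""
    total = 0.0
    day = start.date()
    while day <= end.date():
        if day.weekday() < 5:  # skip Saturday and Sunday
            day_start = datetime.combine(day, datetime.min.time()) + timedelta(hours=WORKDAY_START)
            day_end = datetime.combine(day, datetime.min.time()) + timedelta(hours=WORKDAY_END)
            window_start = max(day_start, start)
            window_end = min(day_end, end)
            if window_end > window_start:
                total += (window_end - window_start).total_seconds() / 3600
        day += timedelta(days=1)
    return total

# Hypothetical dates: work started Wednesday 10 a.m., completed the following Monday 11 a.m.
started = datetime(2024, 6, 5, 10, 0)
completed = datetime(2024, 6, 10, 11, 0)
print(cycle_time_in_working_hours(started, completed))  # 25.0
```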
Cycle time is the most holistic metric to track because a low cycle time requires optimization in all areas of product development.
That also makes cycle time one of the best indicators of delivery process health: a team that keeps it low gets the agility, faster experiments, and learning described earlier.
Keeping the cycle time low is one of the best investments you can make to improve your delivery process.
In an ideal world, we would know exactly how long something will take. Should we plan five sprints for a given initiative? Six? Six and a half?
You probably already know it’s impossible, especially in a complex software development setting. But you also can’t fall into the “it will be done once it’s done” approach.
External deadlines, dependencies, and proper resource management require you to have some degree of predictability. But most teams don’t track it.
And then they are surprised they don’t feel confident committing to deadlines. After all, it’s hard to improve something you don’t track.
There are various ways to track predictability. One of the most common is comparing what the team planned for a sprint against what it actually delivered.
Predictability = 100 percent – |percentage delivered – 100 percent|
Let’s take a look at an example. Say your team’s predictability over the last three sprints came out to 90 percent, 80 percent, and 74 percent:
Average predictability = (90 percent + 80 percent + 74 percent) / 3 = 81.3 percent
Note that overdelivering lowers predictability too: delivering 110 percent of the planned scope scores the same 90 percent as delivering 90 percent of it. The absolute value works as a safeguard.
The team could plan a ridiculously low amount of story points per sprint just to over-deliver each time. But it wouldn’t make them predictable, would it?
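If you want to sanity-check the math, here’s a minimal sketch. The planned and delivered story point figures are hypothetical, chosen to reproduce the 90, 80, and 74 percent scores above:

```python
def predictability(planned_points: float, delivered_points: float) -> float:
    """Predictability = 100% - |percentage delivered - 100%|."""
    delivered_pct = delivered_points / planned_points * 100
    return 100 - abs(delivered_pct - 100)

# Hypothetical last three sprints: (planned, delivered) story points
sprints = [(50, 55), (50, 40), (50, 37)]

scores = [predictability(planned, delivered) for planned, delivered in sprints]
average = sum(scores) / len(scores)

print(scores)             # [90.0, 80.0, 74.0]  (note the first sprint over-delivered)
print(round(average, 1))  # 81.3
```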
Improving predictability is worth our attention because it makes it easier to commit to external deadlines, manage dependencies, and plan resources.
Keep track of your last three to five sprints’ average predictability and strive to move it a little bit closer to 100 percent every time.
Although you probably won’t reach 100 percent predictability, it’s a worthy north star.
Everyone talks about technical debt, but only a few measure it.
Technical debt isn’t bad per se. It might even be your best friend. It’s all about proper debt management.
It works much like financial debt: the more debt you take on, the more you can deliver in the short term, at the cost of interest that has to be paid off in the long run.
And sometimes, the short-term gain is worth it, especially when you work on a brand-new MVP concept.
The best way to properly manage tech debt is to measure it and agree with the team on how much debt you’re comfortable carrying at a given moment.
First of all, what qualifies as tech debt? It’s a question every team should agree on individually, based on their current context.
In most cases, I treat technical debt as the difference between the current technical state of the product and the desired one.
I like to list all the known differences and estimate them just like any other backlog item. Although estimating bugs is quite controversial, I haven’t yet found a better approach.
By listing and estimating all known issues, you can calculate the overall size of the tech debt.
Tech debt size = estimate of all known differences between the current technical state and the desired state
Estimating technical debt is inherently imprecise, and that’s okay. We are more interested in the order of magnitude than in a specific number.
Let’s assume your tech debt backlog adds up to 400 story points.
Now you can calculate your tech debt to velocity ratio, which should roughly answer the question: how many full sprints do you need to pay off all known debt?
Tech debt ratio = technical debt size / velocity
If your tech debt is 400 story points and your average velocity is 90, your tech debt ratio is 4.44, meaning that you need roughly five sprints to pay off the debt.
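To make the arithmetic concrete, here’s a minimal sketch that ties the two formulas together. The backlog items and their estimates are hypothetical and simply add up to the 400 story points from the example:

```python
import math

# Hypothetical tech debt backlog with story point estimates
tech_debt_backlog = {
    "Upgrade the legacy framework version": 180,
    "Missing automated tests around checkout": 120,
    "Known bugs scheduled for later": 60,
    "Refactor the reporting module": 40,
}

tech_debt_size = sum(tech_debt_backlog.values())   # 400 story points
average_velocity = 90                              # story points per sprint

tech_debt_ratio = tech_debt_size / average_velocity
print(round(tech_debt_ratio, 2))   # 4.44
print(math.ceil(tech_debt_ratio))  # roughly 5 full sprints to pay off all known debt
```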
Even though it’s not a precise calculation, it’s still valuable insight.
Whether the ratio of 4.44 is high or low depends on specific circumstances.
I encourage all teams to consciously sit down and discuss their desirable tech debt ratio. Say you agree that, in your current product state, you’ll aim for a tech debt ratio between 3 and 7.
Crossing 7 would signal that you are taking up too much debt and that you should prioritize paying some of it off to avoid long-term consequences.
On the other hand, dropping below 3 isn’t ideal either. It could mean you’re focusing on technical excellence at the expense of fast value delivery.
Perhaps you can allow yourself to deliver more value in the next sprints by taking up some debt.
As much as I’d love to give you an optimal range, it’s a complex question, and you have to take it on a case-by-case basis.
As a rule of thumb: if your priority is learning and testing assumptions, take up more debt. If you have a validated and certain idea, keep a lower debt ratio.
Monitoring how much time a specific product backlog item spends in a given status allows you to get a more detailed perspective on your workflow.
Time-in-status = time elapsed between going into and moving out of status
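If your tool exports each item’s status-change history, the calculation is just the difference between consecutive transitions. Here’s a minimal sketch; the statuses and timestamps are hypothetical, and it counts calendar hours rather than working hours to keep it short:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical status-change log for one work item: (timestamp, status entered)
transitions = [
    (datetime(2024, 6, 3, 9, 0), "In Progress"),
    (datetime(2024, 6, 4, 13, 0), "Code Review"),
    (datetime(2024, 6, 5, 10, 0), "QA"),
    (datetime(2024, 6, 6, 16, 0), "Done"),
]

# Time in a status = time between entering it and entering the next one
time_in_status = defaultdict(float)
for (entered_at, status), (left_at, _next_status) in zip(transitions, transitions[1:]):
    time_in_status[status] += (left_at - entered_at).total_seconds() / 3600

for status, hours in time_in_status.items():
    print(f"{status}: {hours:.0f} calendar hours")
# In Progress: 28, Code Review: 21, QA: 30
```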
This metric is especially relevant for more complex workflows with multiple statuses. By monitoring how much time a ticket spends in a given status, you can quickly notice bottlenecks.
Are your work items spending too much time in code review status? Maybe you need to dig deeper into repository metrics to fix the issue. Is your QA taking too long? Perhaps it’s time to add more resources or improve the testing process.
I treat time-in-status as a compass metric. It helps you focus your attention on areas that need the most improvement.
There are different ways to measure time-to-market.
Let’s say you are developing a new feature. Your time-to-market is the time between the decision to build the feature and pushing it into production.
Low time-to-market improves your agility, helps you test and fix ideas faster, and allows you to capture more time-sensitive opportunities.
It’s the sum of wait time, epic cycle time, build and integration time, and the release stabilization period.
In most cases, some time passes between deciding on a feature and actually starting work on it. Many factors contribute to this wait, like waiting for the next planning cycle or for higher-priority work to clear.
Focus on just-in-time planning and smooth processes to limit wait time.
Epic cycle time is the main body of work. It works the same way as cycle time, but for the whole epic: the time that passes between picking up the first epic-related task and completing the last one.
Naturally, the smaller and more well-scoped the feature, the lower the cycle time will be.
After all work is completed, it’s time to prepare a package or integrate it with the main codebase. It might take minutes or weeks, depending on the scale of the product and your processes.
Keep the build and integration time low by using robust automated pipelines.
The release stabilization period is the time spent correcting product problems between the point the developers say it is ready to release and the point where it is actually released to customers.
All the final touches happen here, such as fixing the last problems discovered before the release.
Tighten up your release process to reduce the stabilization period.
Time to market = Wait time + epic cycle time + build and integration time + release stabilization period
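To make the formula concrete, here’s a tiny sketch with hypothetical values for each component, measured in working days:

```python
# Hypothetical feature, all values in working days
wait_time = 6                   # decision made, work not yet started
epic_cycle_time = 22            # first epic task picked up through last one completed
build_and_integration_time = 1  # packaging, merging, pipelines
release_stabilization = 4       # final fixes between "ready" and actually released

time_to_market = (
    wait_time
    + epic_cycle_time
    + build_and_integration_time
    + release_stabilization
)
print(time_to_market)  # 33 working days
```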
Stay ahead of the competition by continuously reviewing and improving your time-to-market.
Although this list isn’t comprehensive, there’s also no need to track 50 different delivery metrics.
Focus on the big picture, and dig into more detailed metrics only when you need to understand the core metrics better.
Properly tracking your speed (cycle time, time-in-status, and time-to-market), predictability, and quality (technical debt) should give you a broad enough picture to understand where the low-hanging fruit is and where you should focus most of your attention.