Being agile is a concept that everyone understands, but hardly anyone can truly explain.
We can intuitively say what good agility looks like — such as quickly building MVPs. We also understand what’s not desired — such as haphazardly changing direction all the time.
Yet, it’s still hard to pinpoint what exactly being agile means, let alone how to quantify it.
Luckily, a few years ago, the authors of Scrum.org released an EBM Guide in which they more or less defined agility as a combined measure of an organization’s speed and ability to innovate.
I think they nailed the definition, so let’s dig deeper.
Time to market is all about speed. Rule of thumb — the faster we are, the better.
Speed gives us a host of benefits — there’s a reason why tech giants like Facebook release multiple times a day.
Some of the most insightful speed gauges include cycle time, release frequency, and time-to-learn.
Cycle time tells us how much time passes from taking up a work item to its completion, which depends on the definition of done. It’s also one of the most holistic delivery metrics to track.
Low cycle time is a side effect of healthy delivery practices. The lower your cycle time and the more efficient those practices, the faster your team can release new enhancements.
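As a minimal sketch, cycle time can be computed from work item timestamps. The items and dates below are purely illustrative — in practice you’d pull them from your issue tracker:

```python
from datetime import datetime

# Illustrative work items as (started, completed) pairs -- in practice,
# these timestamps come from your issue tracker
work_items = [
    (datetime(2024, 1, 2), datetime(2024, 1, 5)),   # 3 days
    (datetime(2024, 1, 3), datetime(2024, 1, 10)),  # 7 days
    (datetime(2024, 1, 8), datetime(2024, 1, 9)),   # 1 day
]

def average_cycle_time_days(items):
    """Average days from picking up an item to meeting the definition of done."""
    durations = [(done - started).days for started, done in items]
    return sum(durations) / len(durations)

print(average_cycle_time_days(work_items))  # → ~3.67 days
```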
There’s no value until end users get it. You could’ve spent last quarter building 30 fancy new features, but if they’re still sitting on staging, they’re worth nothing.
Ideally, you should have regular release cycles, be it daily, weekly or monthly – depending on the product type and your team maturity.
More frequent releases mean faster feedback and faster value delivery — generally, the more the better. It’s very rare for a company to release too often; far more commonly, teams don’t release frequently enough.
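Release frequency itself is easy to quantify — for instance, as releases per week over a given window. A rough sketch with made-up release dates:

```python
from datetime import date

# Hypothetical release history over one quarter -- replace with your own
releases = [
    date(2024, 1, 5), date(2024, 1, 19), date(2024, 2, 2),
    date(2024, 2, 16), date(2024, 3, 1), date(2024, 3, 15),
]

def releases_per_week(dates):
    """Number of releases divided by the span (in weeks) they cover."""
    span_days = (max(dates) - min(dates)).days or 1  # avoid dividing by zero
    return len(dates) / (span_days / 7)

print(releases_per_week(releases))  # → 0.6 releases per week
```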
Learning is everything. The more you learn — about your customers, market, etc. – the better you are at designing truly valuable solutions.
We define our learning pace by measuring how much time it takes from planning an experiment to concluding its impact.
Say you believe adding a second CTA on your homepage will improve conversions. You plan the experiment on Jan. 1. You have it developed by Jan. 14, and the team releases it on Jan. 21. You then run an A/B test and reach statistical significance after seven days, which makes it Jan. 28.
In this example, your time-to-learn is 28 days. It took you four weeks from stating a hypothesis to validating it.
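In code, time-to-learn is just the span from stating the hypothesis to reaching significance — counted inclusively here, to match the four-week figure in the example:

```python
from datetime import date

hypothesis_stated = date(2024, 1, 1)      # experiment planned
significance_reached = date(2024, 1, 28)  # A/B test result conclusive

# +1 counts the start day itself, matching the four-week figure above
time_to_learn = (significance_reached - hypothesis_stated).days + 1
print(time_to_learn)  # → 28 days
```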
Time-to-learn can be optimized at every step of that loop — planning, development, release, and analysis. Even though most would agree that learning is everything, only a few teams actually measure it. You can get ahead just by consciously optimizing your learning pace.
Ability to innovate (A2I) answers how efficient we are at delivering value. We can be incredibly fast yet still fail if all we do is chase bugs and move pixels around.
A2I also serves as a counter-metric for speed. If we focused only on speed, it would be easy to over-optimize it by lowering quality or taking on technical debt.
We can measure our innovation capabilities by understanding our innovation rate and tech debt baggage.
Teams create value in two ways. They either learn or deliver product capabilities that drive outcomes. Everything else is a potential waste.
We can calculate how much of the team’s effort translates into an actual value with an innovation rate.
Innovation Rate = (effort spent learning + effort spent delivering product capabilities) / total effort
For example, let’s say your team members spend, on average, five hours a week learning and 15 hours delivering new product capabilities out of a 40-hour week. In this example, the innovation rate is 50 percent ((5 learning hours + 15 development hours) / all 40 hours).
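The formula translates directly into code; the hours below are the ones from the example:

```python
def innovation_rate(learning_hours, capability_hours, total_hours):
    """Share of total effort spent learning or delivering new capabilities."""
    return (learning_hours + capability_hours) / total_hours

# Five hours learning, 15 hours on new capabilities, out of a 40-hour week
rate = innovation_rate(5, 15, 40)
print(f"{rate:.0%}")  # → 50%
```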
It’s natural for innovation to drop over time. The bigger the product gets, the more it takes to maintain it and coordinate all dependencies. Regardless of that, we should always strive to keep the innovation rate as high as possible.
There are many approaches to defining what technical debt is.
My approach is to treat technical debt as any gap between the state of technical excellence and the current state.
By technical excellence, I mean a state in which the whole development process is flawless. Continuous deployment is bread and butter; everything is automated and up-to-date, and there’s no single unused property in the codebase.
Just to be clear, I don’t advocate chasing technical excellence. I’d even discourage it if you’re a brand-new company without product-market fit. The point is to have a benchmark — something we can compare ourselves to.
At the end of the day, it’s all about a conscious tradeoff. The more tech debt we take, the faster we can deliver in the short term, at the cost of slowing us down in the long run.
My way of assessing tech debt is to create and estimate a ticket for every gap between the current state and the state of technical excellence, and to collect them all in a dedicated tech debt epic.
I then compare that estimate with the average velocity we have. For example, if our average velocity is 40 story points and there are 133 story points in the tech debt epic, then:
Technical debt ratio = 133/40 = 3.325
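Using those numbers, the calculation is trivial to sketch — the ratio tells you how many sprints of average velocity the tech debt epic represents:

```python
def tech_debt_ratio(debt_story_points, average_velocity):
    """How many sprints of average velocity the tech debt epic would take."""
    return debt_story_points / average_velocity

# 133 story points of tech debt against an average velocity of 40
ratio = tech_debt_ratio(133, 40)
print(round(ratio, 3))  # → 3.325
```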
Tech debt itself isn’t an inherently bad thing. As long as we are strategic with it, it can even be our friend.
If your tech debt ratio is as low as 0.5 (and assuming you capture debt effectively), you can comfortably take on more debt to hit short-term deadlines if needed. If it reaches some crazy value, like 12 (with two-week sprints, that’s roughly half a year of work), then perhaps it’s time to slow down and fix the mess before it bites.
I once encountered an organization that used so-called “agility maturity checklists.” Oh boy, that was bad.
At the end of each month, teams were asked to go through the checklist and self-assess against questions about how closely they followed the rituals by the book.
The problem with these questions is that they don’t have anything to do with being agile. Agility is about doing whatever it takes to maximize speed and value delivery, not about following some 30-year-old guide to the letter.
Think about it; which team is truly agile? A team that delivers value fast and adapts quickly — even though its dailies take 30 minutes — or a team that nails the timebox but hasn’t released anything in a quarter?
Focus on speed and value, not some dubious checklists.