Lead Time – let’s re-evaluate how we measure this DevOps Metric
It’s hard not to mention Accelerate when writing about DevOps; rarely does such a seminal piece of work provide an equally well-defined set of practical applications. It was a landmark engineering and business handbook that, through outlining what we now refer to as the DORA metrics, kick-started a DevOps metrics gold rush.
Lead Time is one such DORA metric. It’s defined as the time from when a change request is initiated to when that change is running in production. In practice, this is usually measured as the time from first committing a code change to the point when that change is deployed. Placed in such terms, it seems rather cut and dried to track.
Despite this, there’s a fair amount of ambiguity surrounding the best place to start measuring Lead Time. Does the activity ranging from first commit to production truly encapsulate the development journey?
The appeal of DORA-flavored DevOps metrics is that they can be tied to business impact: they offer a way for engineering to measure something that truly matters to the entire business and that can be easily communicated beyond traditional engineering silos. DevOps lives and dies by its holistic ability to evaluate your engineering process; when it fails, it tends to be because its focus has become too narrow.
That’s why at Jellyfish, we believe extending the scope of your Change Lead Time metric beyond the first commit to encompass the initial Jira issue is a more well-rounded practice, one that can help you achieve a ‘truer’ version of Change Lead Time. In this article, we’re going to explain why.
What’s the Issue (with Lead Time)?
There’s a reason Commit Lead Time is the accepted standard within the DevOps community: the data required to measure it accurately is readily accessible to most engineering organizations. All you need are two timestamps from your source control tool (the first commit and the production deploy) and you’re in business.
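To make that concrete, Commit Lead Time boils down to simple timestamp arithmetic. The function and example timestamps below are a hypothetical sketch, standing in for whatever your source control and deploy tooling actually report:

```python
from datetime import datetime, timezone

def commit_lead_time_hours(first_commit_at: datetime, deployed_at: datetime) -> float:
    """Hours elapsed between the first commit on a change and its production deploy."""
    return (deployed_at - first_commit_at).total_seconds() / 3600

# Hypothetical timestamps for a single change
first_commit = datetime(2023, 5, 1, 9, 0, tzinfo=timezone.utc)
deployed = datetime(2023, 5, 3, 15, 0, tzinfo=timezone.utc)

print(commit_lead_time_hours(first_commit, deployed))  # 54.0
```

Both dates come from a single system, which is exactly why this variant is so easy to adopt.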
Despite the relative simplicity of the DORA definitions, these aren’t particularly easy metrics to track or analyze. For businesses trying to uplevel their teams by adhering to the DevOps methodology, it can be tempting to track DORA in its simplest possible terms. That’s understandable and completely valid.
But in practice, stripping DORA metrics to their simplest definitions can pose a real challenge. As your toolset and process footprint grow, the complexity of your engineering process grows with them. In the case of Lead Time, that escalating nuance means measuring from commit to production might not tell the whole story. You run the risk of invalidating it as a metric and devaluing your organization’s wider DevOps practice.
There are numerous offshoots of Lead Time that address this gap: Time to Review, Commit to QA, and Time to Merge, to name a few. These metrics pose their own challenges; namely, they can be difficult to measure when the required data has to be pulled from multiple tools, and you then need to maintain that data union to ensure accurate reporting.
Jellyfish believes balancing data consolidation with broad engineering visibility is the optimal approach. That’s why, in addition to Commit Lead Time and other metrics, we provide Issue Lead Time within our Engineering Management Platform.
Issue Lead Time is the time from when an issue is created to when that change is deployed.
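Computing Issue Lead Time means joining records from two systems: the issue tracker (for creation dates) and the deploy pipeline (for deploy dates). The sketch below assumes hypothetical record shapes and a hypothetical issue key linking the two; your actual tools will expose this data differently:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical records pulled from an issue tracker and a deploy pipeline;
# the join key is the issue ID referenced in the deploy metadata.
issues = {
    "PROJ-101": {"created_at": datetime(2023, 5, 1, 9, 0, tzinfo=timezone.utc)},
}
deploys = [
    {"issue_id": "PROJ-101", "deployed_at": datetime(2023, 5, 4, 17, 0, tzinfo=timezone.utc)},
]

def issue_lead_times(issues: dict, deploys: list) -> dict:
    """Join each deploy back to its originating issue and compute elapsed time."""
    results = {}
    for deploy in deploys:
        issue = issues.get(deploy["issue_id"])
        if issue is None:
            continue  # deploy with no linked issue: skip rather than guess
        results[deploy["issue_id"]] = deploy["deployed_at"] - issue["created_at"]
    return results

print(issue_lead_times(issues, deploys))  # {'PROJ-101': datetime.timedelta(days=3, seconds=28800)}
```

The join itself is the hard part in practice: it only works if engineers consistently reference issue keys in their commits or deploys, which is part of why this metric takes more effort to stand up than Commit Lead Time.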
You may be wondering: why does including the issue creation have such an impact on the value of Lead Time as a metric?
By only measuring Lead Time from the initial commit, you highlight a metric that lacks context and applicability outside of engineering. While it may be useful for understanding process improvements within the engineering team, it’s somewhat arbitrary for the rest of the business. A large part of the goal in measuring Lead Time is understanding how long it takes to deliver value to your customers from the time when the necessary change was conceived. Expanding the definition past the first commit to include issue creation gets closer to encompassing the initial change request, and therefore is a better measure for the time between the inception of the idea and when that idea begins to bring market advantage.
Going this route does mean expanding the metric beyond your source control tool to incorporate the likes of Jira, making the metric more difficult to calculate. But in doing so, other organizations within the business, such as Product and Sales, gain more value from this measurement and become invested in Engineering success. By highlighting how long it takes to bring value to the wider business, we believe Issue Lead Time is a powerful tool for measuring DevOps effectiveness.
It’s also worth noting that, depending on the working style of your organization, your engineers may have been thinking and writing code for days before the initial commit is submitted. By bringing issues into the equation, you incorporate that focused time and attention, making the metric that much more valuable.
Software engineering has become integral to our technology-driven market. With this rise to even greater prevalence comes an associated responsibility: technical processes can no longer be siloed strictly within the realm of engineering. The scope for measuring engineering success needs to expand to reflect business initiatives. By widening the field of view of the metrics you track, you can make strides in aligning engineering work with key enterprise objectives and, in doing so, earn greater respect and candor from your business counterparts.
Jellyfish has recently added a whole host of DevOps metrics to our Engineering Management Platform. If you’re looking for a solution to support your organization, check out our feature announcement blog post to see how Jellyfish is helping elite engineering teams optimize their DevOps processes.