Many of the concepts featured in this post originate from Milan Thakker’s session at Glow 2022 titled Signal Over Noise: Selecting the Right Metrics To Improve Process, Collaboration, and DevOps Outcomes. Check out the teaser below, or view the full discussion on our resources page.
There is a misconception amongst business leaders that engineering isn’t data-driven. In reality, engineering teams have been using data for many years to make sense of an increasingly complex software development process. But by and large, they have faced obstacles to getting the most out of their data:
- Developers are outputting more data than ever from the tools they use
- The data is siloed across the dozen tools that software teams leverage
- And solving these two challenges requires significant effort that pulls time and focus away from strategic business priorities.
But while all these challenges are addressable with an Engineering Management Platform (EMP), leaders still need to face the final and most important question: among all possibilities, what should you be measuring? With so much data at your disposal, it’s easy to get lost in analysis paralysis. Conversely, measuring too few metrics paints an incomplete picture of your engineering organization and can lead to misinformed strategic and operational decisions. Ultimately, what you decide to measure is a signal of what matters to you and your organization.
At Jellyfish, we work with hundreds of engineering leaders as they navigate answering this question for themselves. The truth is that there’s no one set of metrics that applies to all teams, BUT we’ve arrived at a common principle driving the most successful adopters of a data-driven approach. The secret to a mature data-driven approach lies in a single word: balance.
Challenges with Engineering Metrics
Teams adopting engineering metrics usually suffer from one of two challenges:
They Focus on Too FEW Metrics
If teams focus on just one or two metrics, the results negatively impact other parts of the software development process. When a metric becomes a target, teams will inevitably find a way to hit it (i.e., game the metric). For example, if you focus too heavily on cycle time while ignoring PR reviews, you may gain speed, but at the cost of reduced collaboration across teams and, ultimately, the quality of the release. Quality, productivity, process, and collaboration will inevitably become secondary if you incentivize only speed. The same is true for any single metric.
They Focus on Too MANY Metrics
Some engineering teams recognize the first problem and over-rotate – measuring every metric possible in order to avoid the outcomes of Goodhart’s Law. In these instances, teams can lose sight of the outcomes that matter. Simply put, if everything is a priority, nothing is a priority.
Outcomes Over Outputs and the Outcome-Based Approach
Ultimately, each problem is a different side of the same coin: teams are prioritizing outputs over outcomes. Below is an example of an output-based approach. A team might prioritize increasing the number of PRs landing in production. To hit that target, teams could simply scope work into smaller pieces. Teams might also look to reduce the total time spent reviewing code, which will likely increase the number of bugs and diminish the customer experience.
On the other hand, an outcome-based approach optimizes only for the engineering metrics that map to your engineering priorities. In the example below, the team cares not only about the volume of outputs, but also about collaboration, the type and impact of the work, and the efficiency of the overall process. The metrics they’re measuring directly reflect their team’s long-term vision and overarching mission.
Start by writing out the broadest mission statement for your engineering team, then work to hone it from there. It should encapsulate your team’s overall vision relatively well. Don’t worry too much about getting it perfect; aim for approximately 80% right and evolve it over time.
Balance in Metrics Strategy
A balanced metrics strategy is designed with the explicit purpose of limiting an overabundance of metric categories, while not over-rotating on any single category. By maintaining this focus and monitoring for undesired consequences, the effects of Goodhart’s Law can be mitigated or avoided entirely.
The diagram below encapsulates 5 simplified outcome categories that might matter to your organization. But your priorities and outcomes will be as unique as the companies you represent. Our best advice is to focus on your priority outcomes, pick a set of metrics that align with those outcomes, and keep an eye out for secondary effects. The leading engineering teams in this space stay laser-focused on the outcomes they want to drive and employ a balanced set of metrics that help their teams achieve their objectives.
To see a more in-depth discussion of the concepts covered in this post, check out the full discussion on our resources page.