
Engineering Metrics: How Data-Driven Management Can Go Horribly Wrong

We’ve all heard horror stories about leaders who use data in the absence of context and, in doing so, limit or actively damage their teams’ culture. Data-driven insights can be a force multiplier for positive, growth-oriented leadership practices; used recklessly, they become a tool for micromanagement and a substitute for meaningful conversations with teams. It’s up to all of us as leaders to decide how we’re going to use data, and who we’re going to be. At Jellyfish, it’s up to us to design and build a product that promotes good engineering leadership practices and encourages behaviors that have a positive impact on the software engineering world. When we see something going wrong, we owe it to our community to call it out.

So now let’s address the elephant in the room: how can engineering leaders misuse software engineering metrics? And are there any risks to your software developers in adopting them?

Measuring Engineering “Performance”

There’s a problematic line of thinking that oversimplifies how metrics should be used to evaluate the impact of an engineering team, and it limits the ability to appreciate the contribution of developers. An engineer’s role in an organization is to solve problems, in an increasingly complex software development environment, in ways that drive better outcomes for the business. While almost all jobs involve some form of performance review, insinuating that metrics alone can communicate individual performance is reductive, an oversimplification of the role, and harmful to the very teams you want to “perform.”

Engineering metrics shouldn’t be used to determine whether specific teams and individuals are performing better than others. The reality is that the teams you’re trying to compare are probably scoped to entirely different work, have different compositions of experience and tenure, and have different stories behind each trend. Without understanding why metrics look different between teams or individuals, these comparisons can 1) waste time and 2) promote unhealthy competition. Using engineering metrics to evaluate an individual’s “performance” or “productivity” without qualitative context can be incredibly harmful to engineering organizations. It will (justifiably) erode developers’ trust in leadership, while simultaneously painting an incomplete picture of what’s really going on within the engineering teams. Look no further than Twitter, or…X, for a real-life example of this…

The Slippery Slope of Software Engineering Metric Analysis

Metric analysis can very quickly become a slippery slope. One moment you’re looking at a high-level operational dashboard, and the next you’re comparing a security engineer’s issue cycle time to a support engineer’s…during May…back in 2019.

So that example might be over-the-top, but we’re all guilty of getting too far into the weeds. Managers are susceptible to examining metrics at a pretty granular level, and even a basic metric such as issue cycle time can be viewed at the organization, team, or individual level. Just because you can view this metric at a VERY granular level doesn’t mean that doing so will provide profound insights. Borrowing from the earlier ridiculous example, a developer who works on security might have a much higher average issue cycle time than someone on the support team. The nature of their work is vastly different, and their metrics will reflect this.
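
To make that concrete, here’s a minimal, purely illustrative Python sketch (the team names and day counts are made up) showing how a per-team average cycle time can differ tenfold without saying anything about “performance”:

```python
from statistics import mean

# Hypothetical (team, cycle_time_in_days) pairs. In practice, cycle time
# would come from your issue tracker (e.g., "in progress" -> "done").
issues = [
    ("security", 21), ("security", 34), ("security", 18),
    ("support", 2), ("support", 1), ("support", 3),
]

def avg_cycle_time(issues, team):
    """Average cycle time, in days, across one team's issues."""
    return mean(days for t, days in issues if t == team)

for team in ("security", "support"):
    print(f"{team}: {avg_cycle_time(issues, team):.1f} days")

# Output: security ~24.3 days, support ~2.0 days. The gap reflects the
# nature of the work (deep investigations vs. quick fixes), not a 10x
# difference in "performance" -- only context can supply that reading.
```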

We have plenty of examples of engineering leaders and managers looking at issue cycle time at a granular level and uncovering interesting, actionable insights. Having some metrics down to the individual level can be helpful, but they must be interpreted in context. Unfortunately, not all leaders take the time necessary to gather that context. The irony is that collecting accurate data has never been easier, which frees up exactly the time needed to gather it.

Data as a Force for Good

There is a version of the future where leaders use data to become Big Brother. It’s a picture fueled by fear, anxiety, and distrust of leadership. But those are not the leaders we at Jellyfish have had the privilege of working with, and it’s not the future we want to build for our engineering teams.

Better strategic decisions are made when engineering leaders are informed by data-driven insights. We’ve seen and documented how the leaders we work with enable their teams to spend more time on innovation work, in part because of insights into work allocation. Leaders are using data to improve development processes, run experiments within their engineering operations, and enable teams to focus on what matters most to the business.

But it’s imperative that all leaders leveraging software engineering metrics acknowledge and address how metrics can be misused when they aren’t grounded in a culture of inclusion, communication, and collaboration. We’ve always advocated for a balanced metrics strategy, one that avoids the trap of Goodhart’s law (when a measure becomes a target, it ceases to be a good measure) and aligns with the principles of the DORA team and DevOps best practices.

And we won’t stop here. We must remain vigilant. Jellyfish will continue to work with and educate the engineering community on how to use engineering metrics as a force for good. Interpreting any engineering metric requires nuance, and none of us can afford complacency as we learn to use these metrics to drive the right outcomes.