In times of economic uncertainty, efficiency and time-to-value take center stage.
This reality impacts all professional spheres, and engineering management is no exception. At Jellyfish, we’ve noticed that focus has begun to shift away from team health, with an inordinate amount of emphasis now being placed on ‘speed’.
Questions like “What are the engineers working on?” or “Can we stack rank the engineers by lines of code?” are crossing the desks of engineering managers the world over.
Whilst shipping products at pace will always be an essential part of software development, external pressure to deliver under these conditions can pose a significant long-term threat.
Using metrics to optimize solely for speed as an output across each stage of the Software Development Life Cycle (SDLC) can, at best, lead to inaccurate reporting, stemming from engineers inadvertently gaming the system. At worst, it can mean hastily planned, poorly executed, rubber-stamped software is being rushed out the door.
Balance is a more important outcome to strive for.
Software Development isn’t Linear
Before I get into what I mean by balance, I need you to take on board a few of my beliefs regarding the modern software development life cycle.
The first is that engineering is a nonlinear process.
Software development is such an involved, complicated endeavor that there are no longer binary hand-offs between team members.
Iterative ideation is now part of the hands-on-keyboards work of engineering teams. That doesn’t mean best practice is to build the plane whilst flying it; it’s simply recognizing that requirements are rarely set in stone in the fluid world of modern software development.
The flip side of that coin is accepting that there can be gaps between workflow stages. The compartmentalization of engineering work common within agile development means that as priorities change, focus will shift and there will inevitably be gaps between when a feature is proposed, worked on, reviewed, and deployed.
Now, if all you care about is speed, my viewpoint will probably rub you the wrong way.
When prioritizing speed as a metric, development becomes a game of hot potato. Work progresses across stages without a moment’s pause to question whether, say, a feature is really delivering value. And gaps? The cardinal sin. Remember, it’s all about speed, speed, speed!
There are a lot of vendors in the space who claim to be allies of engineering teams but fixate on speed as the sole measure of improvement to life cycle time. To that, I’d say:
Have you ever met an engineer who likes being benchmarked on how fast they are?
If, however, you accept that there will be overlap between stages and that gaps between them are sometimes acceptable, then engineering management becomes the practice of balancing these different points of process friction in order to keep your teams happy whilst also optimizing for efficient, high-quality product delivery.
Recognizing the Indicators
In order to perceive these different points of process friction, you need to start questioning what each signal could mean within the context of your engineering team. Using a tool like a Life Cycle Explorer makes this very easy to visualize, although it’s not required. You do need some level of visibility into your software development life cycle stages though.
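If you want a rough, do-it-yourself view of these stages, the raw signals can often be derived from timestamps your issue tracker already records. Below is a minimal sketch in Python; the stage boundaries and dates are hypothetical, and how stages map onto issues and pull requests will depend entirely on your own workflow rather than any particular tool.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Stage:
    name: str
    start: datetime
    end: datetime

def overlap_days(a: Stage, b: Stage) -> float:
    """Days during which two stages ran concurrently (0 if they never overlapped)."""
    latest_start = max(a.start, b.start)
    earliest_end = min(a.end, b.end)
    return max((earliest_end - latest_start).total_seconds() / 86400, 0.0)

def gap_days(a: Stage, b: Stage) -> float:
    """Idle days between the end of one stage and the start of the next."""
    return max((b.start - a.end).total_seconds() / 86400, 0.0)

# Hypothetical timestamps for a single feature, e.g. exported from an issue tracker.
refinement = Stage("refinement", datetime(2023, 5, 1), datetime(2023, 5, 10))
work = Stage("work", datetime(2023, 5, 8), datetime(2023, 5, 24))
review = Stage("review", datetime(2023, 5, 26), datetime(2023, 5, 27))

print(f"refinement/work overlap: {overlap_days(refinement, work):.1f} days")
print(f"work -> review gap: {gap_days(work, review):.1f} days")

A few lines like this won’t replace a proper view of your life cycle, but even crude overlap and gap numbers give you something concrete to start asking questions about.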
Say, for example, there’s a lot of overlap between refinement and work; what could this represent? Perhaps an ever-changing set of requirements for engineers, leading to thrash and an overall degradation of team health. What about a big gap between the two? That could reflect a lot of issues sitting in the backlog – perhaps it’s time to either hire more engineers to address your desired scope or improve your process around delivery expectations.
What if work is stretching too long – is the work too complex, or are folks just spinning their wheels? Conversely, if work is executing too quickly, are things being rushed? Is tech debt piling up, creating a bottleneck in the long run?
And what about review? A lot of it could mean that your feedback process needs to be streamlined, or that the work being submitted is controversial. Too little might mean things are being rubber-stamped and buggy code is being delivered to customers. Understanding these indicators is crucial in order to inform decisions around tooling, process, workflow, and team health.
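To make those questions actionable, you could turn each signal into a simple prompt for discussion. The thresholds below are placeholders I’ve invented purely for illustration; the right values depend on your team, your product, and your delivery cadence.

def flag_signals(refine_work_overlap: float, refine_work_gap: float,
                 work_days: float, review_days: float) -> list[str]:
    """Translate raw life cycle signals (all in days) into questions worth raising.
    Every threshold here is an illustrative placeholder, not a recommendation."""
    flags = []
    if refine_work_overlap > 3:
        flags.append("Refinement and work overlap heavily: are requirements shifting mid-build?")
    if refine_work_gap > 14:
        flags.append("Long wait before work starts: is the backlog outgrowing the team's capacity?")
    if work_days > 10:
        flags.append("Work is stretching: is the task too complex, or are folks spinning their wheels?")
    if review_days < 0.5:
        flags.append("Review is near-instant: is code being rubber-stamped?")
    if review_days > 5:
        flags.append("Review is dragging: does the feedback process need streamlining?")
    return flags

# Feeding in the hypothetical feature from the earlier sketch.
for flag in flag_signals(refine_work_overlap=2.0, refine_work_gap=0.0,
                         work_days=16.0, review_days=1.0):
    print(flag)

The point isn’t the specific numbers; it’s that each signal becomes a conversation starter rather than a verdict.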
These are just a few example scenarios, but there are many more to consider.
If you’d like to delve deeper into creating balance between process health and deployment efficiency, à la the friction/signal dynamic I’ve defined here, then you might be interested in Jellyfish’s latest eBook “How to Use a Lifecycle Explorer | A Guide for Engineering Team”, where we go into a lot more detail. We also have a short demo video of our Life Cycle Explorer feature.