
Jellyfish Deep Dive: Measuring the Adoption and Impact of AI Coding Tools


Engineering leaders don’t need more hype about AI coding tools – they need data and expertise. In our latest Deep Dive webinar, I joined my Jellyfish colleagues Jackson Nordling, Customer Education Manager, and Pamela Bergson, Engineering Manager, along with special guest Robert Freeman, who leads GitHub’s Copilot business. Together we unpacked exactly how to measure AI adoption, enable teams to get the most out of their tools, and demonstrate impact – without getting lost in vanity metrics or vendor noise.

Below is a recap of the discussion and the core practices we see moving the needle in real engineering organizations today.

First, to level-set and get a quick pulse of the audience, we polled attendees on:

  • Weekly usage of AI coding tools: The largest share of attendees reported 50-75% of their teams using an AI coding tool weekly.
  • Most-used tool: GitHub Copilot led the pack, with strong showings for Claude Code as well as Windsurf, Amazon Q and others.
  • Expected tool count a year from now: Most attendees expect their orgs to be using two to five AI tools.

These answers match what we’re seeing with our customers specifically and in the engineering field more broadly – enthusiasm, pockets of deep adoption, and a reality where multiple AI tools and types will coexist.

Usage is already near-universal, but the real gap that needs addressing is structured adoption – where in the SDLC each tool fits, how usage is enabled, and how outcomes are measured. Keep reading to learn how we’re addressing each of these concerns.

Measuring Adoption and Building Momentum

Jellyfish’s AI Impact solution helps leaders see not just how many licenses they’ve distributed, but who is actually using AI tools and where. Teams can track weekly active users, adoption by role or location, and even which parts of the codebase show the most AI activity. These insights reveal a predictable pattern: curiosity at first, uneven adoption across teams, and then gradual, steady growth as comfort and capability build.
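
To make the weekly-active-users idea concrete, here is a minimal sketch, assuming a hypothetical list of usage events tagged with user, team, and date (not Jellyfish’s actual data model or implementation), that rolls events up into weekly active users per team:

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage events: (user, team, date of AI tool activity).
# In practice these would come from tool telemetry or source control metadata.
events = [
    ("alice", "payments", date(2025, 9, 1)),
    ("bob",   "payments", date(2025, 9, 2)),
    ("carol", "platform", date(2025, 9, 3)),
    ("alice", "payments", date(2025, 9, 8)),
]

def weekly_active_users(events):
    """Count distinct active users per (ISO year, ISO week, team)."""
    buckets = defaultdict(set)
    for user, team, day in events:
        iso_year, iso_week, _ = day.isocalendar()
        buckets[(iso_year, iso_week, team)].add(user)
    return {key: len(users) for key, users in buckets.items()}

print(weekly_active_users(events))
# e.g. {(2025, 36, 'payments'): 2, (2025, 36, 'platform'): 1, (2025, 37, 'payments'): 1}
```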

The goal isn’t to chase 100% adoption on day one. Instead, it’s to identify patterns – teams or individuals experimenting effectively – and use those insights to spread best practices across the organization.

Turning Adoption into Enablement

According to GitHub’s Robert Freeman, scaling AI within large organizations is simple, but not easy: “Build a champions program.” Power users can act as multipliers, helping their peers learn how to use AI tools effectively within their specific contexts. Webinars and recorded trainings help, but true enablement requires hands-on mentorship, shared playbooks, and ongoing office hours.

The organizations that plateau at 60% or 70% adoption, Freeman observed, are often those that stopped investing in enablement once licenses were distributed. The best results come from teams that treat AI as a skill set to develop, not just a product to deploy.

Measuring Real Impact

As adoption takes hold, the question shifts from “Are we using AI enough?” to “What’s the impact?” At Jellyfish, we encourage teams to think in terms of leading and lagging indicators. Cycle times for issues and pull requests are good early signals of improved efficiency, but the true business case lies in throughput – whether teams are shipping more meaningful work as a result.
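
As a rough illustration of the leading indicator above, the sketch below computes median pull-request cycle time from hypothetical open and merge timestamps; the field names and data are assumptions, not the Jellyfish metric definition:

```python
from datetime import datetime
from statistics import median

# Hypothetical merged PR records; real data would come from your git host's API.
prs = [
    {"opened_at": datetime(2025, 9, 1, 9),  "merged_at": datetime(2025, 9, 2, 15)},
    {"opened_at": datetime(2025, 9, 3, 10), "merged_at": datetime(2025, 9, 3, 18)},
    {"opened_at": datetime(2025, 9, 4, 11), "merged_at": datetime(2025, 9, 8, 12)},
]

def median_cycle_time_hours(prs):
    """Median open-to-merge time in hours, skipping unmerged PRs."""
    durations = [
        (pr["merged_at"] - pr["opened_at"]).total_seconds() / 3600
        for pr in prs
        if pr["merged_at"] is not None
    ]
    return median(durations)

print(f"Median PR cycle time: {median_cycle_time_hours(prs):.1f} hours")
```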

Beyond velocity, the quality and type of work matter too. Many organizations are finding that AI frees engineers from repetitive maintenance tasks, often referred to as “toil,” allowing them to focus on higher-value product development. Tracking that shift – from “keep the lights on” work to growth work – can be one of the clearest signs that AI is paying off, not just in velocity gains but also in how teams allocate and spend their valuable time.
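
Purely as an illustration of tracking that allocation shift (the categories and data below are hypothetical, not how Jellyfish classifies work), one could tag completed issues by work category and compare the mix across quarters:

```python
from collections import Counter

# Hypothetical completed issues, tagged with (quarter, work category).
issues = [
    ("2025-Q2", "toil"), ("2025-Q2", "toil"), ("2025-Q2", "growth"),
    ("2025-Q3", "toil"), ("2025-Q3", "growth"), ("2025-Q3", "growth"),
]

def allocation_by_quarter(issues):
    """Share of completed issues per category within each quarter."""
    totals = Counter(quarter for quarter, _ in issues)
    counts = Counter(issues)
    return {
        (quarter, category): count / totals[quarter]
        for (quarter, category), count in counts.items()
    }

for key, share in sorted(allocation_by_quarter(issues).items()):
    print(key, f"{share:.0%}")
```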

Still, metrics require interpretation. Freeman added that some teams actually see pull-request cycle times increase as they adopt AI. In most cases, this isn’t a sign of regression but of a bottleneck: AI allows engineers to generate code faster than reviewers can keep up. Others simply experience the temporary slowdown that comes with learning a new workflow.
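
One hedged way to check for that reviewer bottleneck, sketched with hypothetical PR timestamps rather than any specific tool’s data, is to split total cycle time into time waiting for a first review versus everything else:

```python
from datetime import datetime

# Hypothetical merged PRs with opened, first-review, and merged timestamps.
prs = [
    {"opened_at": datetime(2025, 9, 1, 9),
     "first_review_at": datetime(2025, 9, 2, 14),
     "merged_at": datetime(2025, 9, 2, 16)},
    {"opened_at": datetime(2025, 9, 3, 10),
     "first_review_at": datetime(2025, 9, 5, 9),
     "merged_at": datetime(2025, 9, 5, 11)},
]

def hours(delta):
    return delta.total_seconds() / 3600

for pr in prs:
    wait = hours(pr["first_review_at"] - pr["opened_at"])
    total = hours(pr["merged_at"] - pr["opened_at"])
    # A high wait-to-total ratio suggests review capacity, not coding speed,
    # is what is stretching cycle time.
    print(f"Review wait {wait:.1f}h of {total:.1f}h total ({wait / total:.0%})")
```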

Understanding these nuances is critical for leaders trying to connect metrics to real-world impact.

The Multi-Tool Reality

If early adopters once imagined a single dominant AI coding tool, that’s no longer the case. Most engineering organizations now operate in a “multi-tool reality,” with two to five tools in play at any given time. Different teams use different assistants depending on their stack, preferred editor, or even personal preference. Freeman likened it to the IDE ecosystem – most companies standardize around one, but exceptions always exist for good reason.

The challenge for engineering leaders is balancing flexibility with control. Security, governance, and sustainability become paramount. Tools should be evaluated not only for their capabilities but for their long-term viability. “You don’t want to spend months building your workflows around a tool that disappears,” Freeman added, noting that some newer entrants in the space are already struggling to survive the post-hype market correction.

Connecting AI to Business Value

Ultimately, leadership teams want confidence that AI investments translate to measurable results. Freeman put it bluntly: “I always ask executives, ‘How are you measuring developer productivity today?’ Most aren’t.” Tools like Jellyfish make that conversation data-driven for the first time.

The best approach, he suggested, is a balanced view – combining productivity metrics like cycle time with indicators of throughput, quality, developer satisfaction, and cost efficiency. AI’s true value lies not just in faster code, but in happier engineers and more innovative products shipped.

The Path Forward

The consensus from our webinar was clear: AI coding tools aren’t plug-and-play. They’re an evolving capability that requires thoughtful rollout, continuous enablement, and a commitment to measurement. Start by baselining adoption and cycle times, build a culture of champions, monitor for new bottlenecks, and tie everything back to the outcomes that matter most – more of the right work delivered faster and at higher quality.

As Freeman summed it up, the next era of AI in engineering isn’t about replacing developers. It’s about equipping them – turning curiosity into capability, and capability into measurable impact.

To get started and learn more about AI Impact from Jellyfish, request a demo here. Or for a deeper dive into this content, access the full webinar recording here.

About the author

Ryan Kizielewicz

Ryan Kizielewicz is a Product Manager at Jellyfish.