
Cracking the AI Productivity Paradox: A Framework for Real Impact


Last month, I had the privilege of speaking at ELC Annual in San Francisco, which brings together thousands of engineers and engineering leaders each year. As you might have guessed, this year’s event was largely focused on AI and its impact on the software development lifecycle (SDLC), as well as how best to lead through this change. That last topic was the subject of my own talk.

More specifically, I shared real AI adoption and impact data from the Jellyfish platform, how to roll out AI to your engineering org, and, crucially, how to get the most leverage from AI via the Jellyfish AI Impact Framework. This collective knowledge is based on years of operating in the developer productivity space, hundreds of hours spent with our customers, and AI adoption and usage data from the Jellyfish platform spanning tens of thousands of engineers across hundreds of organizations.

Even at ELC, which took place in the heart of the AI boom, people were taking pictures of the slides and coming up after my talk to ask questions about our data and framework – all the more reason to share our data and slides here with you.

You can access my ELC presentation here or keep scrolling for a deeper look.

AI Hype vs. Reality

AI is decidedly mainstream – ChatGPT reached more than 100 million users in under two months. But separate from broad consumer adoption and the hype surrounding all things AI, neutral third parties report a different reality when it comes to real business impact.

According to a recent Boston Consulting Group (BCG) study, 74% of companies report no measurable business results from their AI investments. Our own data, looking specifically at AI coding tool and agent use, shows a similar pattern.

AI adoption is skyrocketing. The share of companies using AI coding tools like GitHub Copilot, Cursor, and Claude has jumped from 61% to 90% in just the past year.

At the same time, impact isn’t keeping pace. Only about 30% of organizations are seeing substantial gains (e.g., half of their code being AI-assisted).

This is what I call the AI productivity paradox – everyone’s trying AI, but few are benefiting.

Why Impact Lags Adoption

While only 30% of companies are seeing substantial impact from AI today, it’s important to note that the outlook around AI remains positive. This isn’t a will or sentiment issue. Jellyfish’s 2025 State of Engineering Management report found that eight in 10 respondents believe that at least 25% of the work humans do today will be handled by AI five years from now. Even though most of us aren’t seeing the gains yet, we’re also not giving up.

So why, then, does impact lag adoption? Some might think it’s because engineers aren’t trying hard enough or because the tools are bad, but we think the reasons are more subtle:

  • Individual experience varies: Even top engineers sometimes find AI-generated code slower or lower quality than their own work.
  • Emotional tension exists: Some engineers worry AI could threaten their roles, while others are eager to tinker, but frustrated when tools fall short.
  • Type of work and context matters: AI may perform better on certain tasks (short bug fixes, boilerplate code) and worse on complex, infrastructure-heavy projects. In other environments we observe the opposite.

This mix of human, cultural, and technical factors makes adoption uneven and impact inconsistent.

The Jellyfish AI Impact Framework

In reviewing our data and working with our customers across a range of industries and company sizes, we identified three focus areas to drive the best results.

  1. Adoption: Who’s using AI tools, how often, and are they applying the output?
  2. Productivity: Are those users actually delivering more, faster, or more efficiently?
  3. Outcomes: Are those gains translating into meaningful business results?

Based on this experience, we distilled our findings into a simple but powerful framework spanning those three pillars:

1. Adoption

Start by mapping:

  • Which AI tools are available to your engineers.
  • Who’s using them, how frequently, and which pull requests incorporate AI-generated code.

Be sure to segment your data. Look at adoption by team, role, tenure, or type of work. This helps uncover why some engineers are power users while others remain on the sidelines. Maybe they lack training, or the tools aren’t well-suited to their kind of work.
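If you want to make this concrete with your own data, here’s a minimal sketch of the adoption pillar in Python, assuming you can export per-PR records with a team label and an AI-assisted flag. The field names and sample values are purely illustrative, not a Jellyfish API:

```python
from collections import defaultdict

# Hypothetical per-PR export; "ai_assisted" could come from a coding-assistant
# attribution signal or a survey flag. Field names are illustrative only.
prs = [
    {"author": "ava",  "team": "Platform", "ai_assisted": True},
    {"author": "ben",  "team": "Platform", "ai_assisted": False},
    {"author": "cara", "team": "Growth",   "ai_assisted": True},
    {"author": "dev",  "team": "Growth",   "ai_assisted": True},
]

# Tally AI-assisted vs. total PRs for each team segment.
totals, assisted = defaultdict(int), defaultdict(int)
for pr in prs:
    totals[pr["team"]] += 1
    assisted[pr["team"]] += pr["ai_assisted"]

for team in sorted(totals):
    print(f"{team}: {assisted[team] / totals[team]:.0%} of PRs include AI-assisted code")
```

The same tally works for any segment: swap the team label for role, tenure band, or type of work to see where adoption is concentrated.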

2. Productivity

Next, track whether AI users are actually becoming more effective:

  • PR throughput: Are they producing more PRs over time?
  • PR cycle time: Are they getting those PRs reviewed and merged faster?

Then interpret the data carefully. A spike in throughput but flat cycle time could mean engineers are using AI on simple, fast tasks – not on complex, high-value work. Conversely, flat throughput could mask gains on tougher projects.
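As a companion sketch for the productivity pillar (again with purely illustrative field names and made-up values), you could compare merged-PR throughput and cycle time for AI users versus non-users:

```python
from datetime import datetime
from statistics import median

# Hypothetical merged-PR records: author, whether that author actively uses an
# AI coding tool, and open/merge dates. All values are invented for illustration.
prs = [
    {"author": "ava", "ai_user": True,  "opened": "2025-06-02", "merged": "2025-06-03"},
    {"author": "ava", "ai_user": True,  "opened": "2025-06-04", "merged": "2025-06-05"},
    {"author": "ben", "ai_user": False, "opened": "2025-06-02", "merged": "2025-06-06"},
]

def cycle_days(pr):
    """Days from PR opened to merged (a simple stand-in for cycle time)."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(pr["merged"], fmt) - datetime.strptime(pr["opened"], fmt)).days

for flag, label in [(True, "AI users"), (False, "non-users")]:
    group = [pr for pr in prs if pr["ai_user"] == flag]
    throughput = len(group) / len({pr["author"] for pr in group})  # PRs per engineer
    cycle = median(cycle_days(pr) for pr in group)                 # median open-to-merge days
    print(f"{label}: {throughput:.1f} PRs/engineer, median cycle time {cycle} days")
```

Keep the caveat above in mind: a throughput gap on its own doesn’t tell you whether AI is being applied to simple tasks or to high-value work.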

Also watch for experience-level differences: historically, senior engineers have seen bigger productivity lifts from AI than juniors, though that gap is closing.

3. Outcomes

Finally, tie it all to what matters most to your business:

  • Are you shipping more growth-oriented work (features, roadmap items) versus just maintenance and bug fixes?
  • Are you accelerating delivery timelines or reducing costs?

Here again, connect your AI usage data to project and ticketing metadata. This lets you see which types of projects benefit most from AI, and where it’s not moving the needle.
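To close the loop on the outcomes pillar, a similarly hedged sketch might join PRs to their ticket categories and compare the work mix of AI-assisted versus other changes. The issue keys and category labels below are hypothetical:

```python
from collections import Counter

# Hypothetical PRs joined to their issue-tracker metadata; "category" separates
# growth-oriented roadmap work from maintenance. Labels are illustrative only.
prs = [
    {"issue": "PROJ-101", "category": "feature",     "ai_assisted": True},
    {"issue": "PROJ-102", "category": "bug",         "ai_assisted": True},
    {"issue": "PROJ-103", "category": "feature",     "ai_assisted": False},
    {"issue": "PROJ-104", "category": "maintenance", "ai_assisted": False},
]

# Compare how much of each group's work is growth-oriented.
for flag, label in [(True, "AI-assisted PRs"), (False, "other PRs")]:
    mix = Counter(pr["category"] for pr in prs if pr["ai_assisted"] == flag)
    growth_share = mix["feature"] / sum(mix.values())
    print(f"{label}: {growth_share:.0%} growth-oriented work ({dict(mix)})")
```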

Putting It All Together

Here’s how to make this framework work in practice:

  • Pick two to four metrics per pillar (adoption, productivity, outcomes) for balance.
  • Segment relentlessly – by role, tenure, tool, project, or type of work.
  • Pair the data with qualitative feedback from engineers to understand the “why” behind the numbers.

Most importantly, remember that adoption alone isn’t success. Productivity gains don’t matter if they don’t lead to the business outcomes your leadership cares about.
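One lightweight way to hold yourself to that balance is to write the scorecard down. The metric names below are examples, not a prescribed set:

```python
# A balanced scorecard: a few metrics per pillar, reviewed together so that
# adoption numbers alone can't be mistaken for success. Names are illustrative.
scorecard = {
    "adoption":     ["weekly active AI-tool users", "% of PRs with AI-assisted code"],
    "productivity": ["merged PRs per engineer", "median PR cycle time"],
    "outcomes":     ["% of delivered work on roadmap items", "delivery timeline variance"],
}

for pillar, metrics in scorecard.items():
    print(f"{pillar.title()}: {', '.join(metrics)}")
```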

Looking Ahead

Today’s AI tools focus on code completion, but the future will likely belong to agent-based systems in which engineers orchestrate multiple agents. This framework still applies. Whatever the tool, ask:

  • Are people using it?
  • Is it making them faster or better?
  • Is it changing the trajectory of the business?

Answering those questions with real data will be the key to moving from hype to impact.

Download the Deck

The AI Impact Framework

A Data-Driven Model for Adoption, Productivity, and Business Outcomes

Access the Framework

About the author

Krishna Kannan

Krishna Kannan is Head of Product at Jellyfish, the leading Software Engineering Intelligence platform, where he drives product vision and strategy for over 500 customers. Previously, he held senior product leadership roles at Pluralsight and Smarterer, among others. Krishna resides in Greater Boston with his partner and two young children, learning as much from them as they do from him.