AI Engineering Trends
Last updated: February 17, 2026
About this report
This resource represents the industry’s most comprehensive quantitative analysis of AI transformation in software engineering. Compiled from the largest study of its kind, these results draw on real-world engineering signals showing how hundreds of organizations and hundreds of thousands of developers are using AI tools across the SDLC.
Use these metrics as an objective baseline to benchmark your organization’s AI adoption and usage maturity. Beyond simple activity signals, this dataset quantifies the correlation between deep tool integration and measurable gains in delivery throughput and engineering outcomes.
Median AI adoption
across companies
63%
Generate most code with AI
across companies
64%
PR throughput
for top AI adopters
2x
PR throughput
from autonomous agents for top adopters
8%
Total Engineers
200K
Pull Requests
20M
Companies
700+
Tools Covered
Adoption
Adoption tracks how regularly teams engage with the AI tools available to them, helping them improve the tools’ efficacy, build trust in the outputs, and remove barriers to adoption.
Jellyfish tracks signals like access, adoption, code ratio, and AI assistance to help engineering leaders assess adoption maturity. At the Adoption stage, determine:
- How and how much are your teams using AI tools?
- Are there friction points blocking adoption?
- Are engineers maturing from experimentation to full adoption?
Access Percentage
Jellyfish measures Access Percentage as the fraction of engineers at a company who have a license to an AI coding tool. As a baseline adoption metric, Access Percentage establishes the foundation for deeper analysis, including how frequently and deeply AI tools are integrated into developer workflows.
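In practice, the calculation reduces to a simple license-holder fraction. Below is a minimal sketch in Python, assuming a hypothetical engineer roster with a boolean license flag (the DataFrame and column names are illustrative, not Jellyfish’s actual schema):

```python
import pandas as pd

# Hypothetical roster: one row per engineer, with an AI-license flag.
# (Illustrative data; not Jellyfish's actual schema.)
engineers = pd.DataFrame({
    "engineer_id": [1, 2, 3, 4, 5],
    "has_ai_license": [True, True, False, True, False],
})

# Access Percentage = licensed engineers / all engineers.
access_pct = engineers["has_ai_license"].mean() * 100
print(f"Access: {access_pct:.0f}%")  # Access: 60%
```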
Weekly Active Users (WAU) Percentage
Jellyfish measures Weekly Active Users (WAU) percentage as the fraction of engineers at a company who actively use an AI coding tool in a given week. This metric captures the frequency of adoption, showing whether engineers are actually integrating AI tools into their regular workflows.
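A minimal sketch of the weekly calculation, assuming a hypothetical usage-event log keyed by engineer and timestamp (names and data are illustrative):

```python
import pandas as pd

# Hypothetical event log: one row per AI coding tool usage event.
events = pd.DataFrame({
    "engineer_id": [1, 1, 2, 4],
    "used_at": pd.to_datetime(
        ["2026-02-09", "2026-02-11", "2026-02-12", "2026-02-13"]
    ),
})
total_engineers = 5  # denominator: all engineers at the company

# Engineers with at least one usage event in the target ISO week.
week = events["used_at"].dt.isocalendar().week
active_engineers = events.loc[week == 7, "engineer_id"].nunique()

wau_pct = active_engineers / total_engineers * 100
print(f"WAU: {wau_pct:.0f}%")  # WAU: 60%
```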
Frequent AI Users Percentage
Jellyfish defines a “frequent AI user” as an engineer who uses AI coding tools three or more days a week. Frequent AI Users Percentage measures the fraction of engineers who have reached this level of adoption, as a separate metric beyond basic WAU.
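A sketch of the distinct-active-days threshold, again on hypothetical data (column names are illustrative):

```python
import pandas as pd

# Hypothetical log of the days each engineer used an AI coding tool
# during one week.
usage = pd.DataFrame({
    "engineer_id": [1, 1, 1, 2, 2, 4],
    "active_date": pd.to_datetime([
        "2026-02-09", "2026-02-10", "2026-02-12",  # engineer 1: 3 days
        "2026-02-09", "2026-02-13",                # engineer 2: 2 days
        "2026-02-11",                              # engineer 4: 1 day
    ]),
})
total_engineers = 5

# "Frequent" = 3+ distinct active days in the week.
active_days = usage.groupby("engineer_id")["active_date"].nunique()
frequent_pct = (active_days >= 3).sum() / total_engineers * 100
print(f"Frequent AI users: {frequent_pct:.0f}%")  # Frequent AI users: 20%
```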
AI Code Percentage
Jellyfish measures AI Code Percentage as the fraction of a company’s shipped code that is AI-assisted. It is calculated as the fraction of merged code additions that were AI-assisted, relative to all code additions in merged pull requests for each company.
This metric moves beyond tool usage frequency to capture the actual depth of AI’s impact on codebases and coding work.
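The arithmetic reduces to a ratio of merged additions. A minimal sketch, assuming hypothetical per-PR addition counts (how additions are attributed to AI is out of scope here; the fields are illustrative):

```python
import pandas as pd

# Hypothetical merged-PR data: line additions, split by AI assistance.
merged_prs = pd.DataFrame({
    "pr_id": [101, 102, 103],
    "ai_assisted_additions": [120, 0, 300],
    "total_additions": [200, 150, 450],
})

# AI Code % = AI-assisted merged additions / all merged additions.
ai_code_pct = (
    merged_prs["ai_assisted_additions"].sum()
    / merged_prs["total_additions"].sum()
    * 100
)
print(f"AI Code: {ai_code_pct:.1f}%")  # AI Code: 52.5%
```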
Autonomous Agent Activity
Jellyfish measures Autonomous Agent Activity as the percentage of pull requests that are autonomously created or generated by AI agents. A PR qualifies as agent-generated if its opening user ID is an agent or if it contains commits where the committing user ID is an agent. This metric tracks the emerging frontier of AI adoption, where AI isn’t just assisting engineers, but independently producing shippable work.
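The classification rule can be sketched directly from the definition. The agent account names and PR fields below are hypothetical; a real system would match against a maintained list of known agent identities:

```python
import pandas as pd

# Hypothetical PR records: opening user plus per-commit committer IDs.
prs = pd.DataFrame({
    "pr_id": [201, 202, 203],
    "opened_by": ["alice", "codegen-agent[bot]", "bob"],
    "committers": [["alice"], ["codegen-agent[bot]"], ["bob", "fixit-agent[bot]"]],
})
agent_ids = {"codegen-agent[bot]", "fixit-agent[bot]"}  # illustrative agent accounts

# Agent-generated: opener is an agent, or any commit was committed by one.
is_agent_pr = prs["opened_by"].isin(agent_ids) | prs["committers"].apply(
    lambda ids: any(c in agent_ids for c in ids)
)
print(f"Agent-generated PRs: {is_agent_pr.mean() * 100:.0f}%")  # 67%
```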
Impact
The Impact stage is about understanding AI’s effect on productivity. Where are you seeing gains, and in what scenarios (languages, types of work, etc.)? What learnings can be shared to drive greater productivity across the organization? At this stage, ask:
- How is AI affecting development throughput and team performance?
- Where are you seeing efficiency gains? What kinds of work are benefiting most?
- Are there new bottlenecks (e.g. delayed PR reviews) limiting broader productivity gains?
PR Throughput Impact
Jellyfish measures productivity impact in terms of PR throughput, specifically through differences in average merged PRs per engineer by company. The data below represent weekly PRs per engineer averaged over the last three months. Companies are grouped by adoption level, defined by the percentage of “frequent” AI users (i.e. engineers using AI coding tools 3+ days per week).
The average PR throughput increases with the adoption level tier, with approximately 2x the throughput at the highest tier relative to the lowest.
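A sketch of the tiering approach, using made-up company-level numbers chosen only to mirror the shape of the reported trend (the bin edges and values are illustrative, not the report’s actual tiers):

```python
import pandas as pd

# Hypothetical company-level data: share of frequent AI users and
# weekly merged PRs per engineer (trailing three-month average).
companies = pd.DataFrame({
    "frequent_user_pct": [5, 18, 35, 52, 71, 88],
    "weekly_prs_per_engineer": [1.1, 1.3, 1.6, 1.9, 2.1, 2.3],
})

# Bucket companies into adoption tiers by frequent-user percentage.
tiers = pd.cut(
    companies["frequent_user_pct"],
    bins=[0, 25, 50, 75, 100],
    labels=["0-25%", "25-50%", "50-75%", "75-100%"],
)
print(companies.groupby(tiers, observed=True)["weekly_prs_per_engineer"].mean())
# 0-25%      1.2
# 25-50%     1.6
# 50-75%     2.0
# 75-100%    2.3   -> roughly 2x the lowest tier
```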
PR Revert Rate
One measure of quality is the fraction of “reverted” PRs – code that was deployed but needed to be rolled back. The data below represent the percent of reverted PRs relative to total PRs for each company, averaged over the last three months. Companies are grouped by adoption level, defined by the percentage of “frequent” AI users (i.e. engineers using AI coding tools 3+ days per week).
The average revert percentage increases slightly at higher adoption tiers, representing a 7–11% relative increase over the baseline (0.04–0.07 percentage points).
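The per-company calculation is a straightforward ratio; a minimal sketch on hypothetical counts:

```python
import pandas as pd

# Hypothetical per-company PR counts over the trailing three months.
companies = pd.DataFrame({
    "company": ["A", "B", "C"],
    "reverted_prs": [3, 7, 2],
    "total_prs": [520, 910, 340],
})

# Revert rate = reverted PRs / total PRs, per company.
companies["revert_rate_pct"] = (
    companies["reverted_prs"] / companies["total_prs"] * 100
)
print(companies[["company", "revert_rate_pct"]].round(2))
```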
Download the AI Engineering Trends Report
Get this report sent to your inbox to save or share with your team.