Six in 10 (64%) believe they are achieving at least a 25% increase in developer velocity and productivity using AI, an increase from 2025. Only four respondents said AI is slowing them down.
Respondents with very high AI adoption are more likely than those with low adoption to say AI increases their job satisfaction, their team’s overall efficiency, and productivity. They’re also more likely to believe their company’s growth outlook is better than last year.
Claude Code, made generally available in May 2025, is now the most popular AI coding tool, followed by Gemini Code Assist and GitHub Copilot. Code writing, review, and explanation are the top AI use cases.
84% of respondents said engineering productivity is a top management concern for them, and 3 in 4 believe engineering productivity is a strategic concern for their business.
“The average engineering team sees tens of percent growth in output, yet the most advanced teams are seeing improvements of 100–150% and beyond. AI is driving increased productivity, and the role of the engineer is transforming as a result.”
A year ago, GitHub Copilot was the clear leader at 42%. This year, it's third. The new #1, Claude Code, wasn't even an option on last year's survey.
Three tools now cluster within nine points at the top, with twelve more behind them. No tool dominates the way Copilot once did.
Code writing is still the top use case at 53%. The bigger shift is in code review. In 2025, it sat near the bottom of the pack at 20%. In 2026, it's second at 49%, behind only writing.
AI's role has expanded beyond generation. Engineers are using it to review, explain, and debug code at rates that didn't exist a year ago.
Engineering teams with very high AI adoption report dramatically better outcomes than low adopters across every measure: productivity, job satisfaction, efficiency, and growth outlook. The largest gap is on productivity, where 97% of high adopters say AI is boosting their team versus 60% of low adopters.
The returns scale with commitment. Only 10% of respondents reported strong enablement and high adoption, which means the upside is significant for the other 90%.
Jellyfish platform data
Jellyfish data from 1,000+ companies tells a similar story: top AI adopters see 1.8× PR throughput, and autonomous agents are now generating 21% of PRs for those teams.
The top three AI challenges reported this year were: managing the cost, change, and complexity of putting AI to work at scale.
Token cost leads the list. The focus so far has been on spending tokens freely so developers can learn and ship quickly. With more efficient models arriving and cost awareness rising, attention is shifting toward measurement and ROI.
1. Increasing cost of AI tools
2. Reluctance in adoption from senior engineers
3. Proliferation of tools, making it difficult to select the best one
“Over-reliance on AI in the SDLC, unclear responsibility when AI makes mistakes, and data privacy or security risks when using AI tools are all my concerns.”
– US-based Individual Contributor
“Adoption and getting more than 10-20% impact is largely about change management once the agentic tooling is capable of doing real work (and it is close). We are working on adapting our process (integration of AI agents, larger expectations) and team structure (smaller) and sharing best practices within the team to help everyone start to see what is possible.”
– US-based Engineering Executive
“It’s hard to get usage to be uniform across the team. People continue to go rogue and use AI for their own benefit, but not always the organization’s.”
– US-based Engineering Manager
“AI usage can degrade thinking and decision-making capacity. Increasing engineering productivity doesn’t mean they are building the right things. I don’t think Engineering throughput has a direct measurable impact on business outcomes yet, because Product Management is becoming more and more a bottleneck, and Product Operations need more attention to ensure Product work can flow smoothly. I don’t think a lot of companies have realized this yet.”
– US-based Individual Contributor
“The main challenge is enablement – a small group of people have become experts, while others do not take the time to learn the new skill of AI-enhanced coding. More intentional training and enablement is required.”
– US-based Engineering Executive
“Bait & switch tooling experience. Claude Code use took off in January 2026 with many teams seeing improvement in velocity and efficiency. By the end of January, leads were panicked over the token expense. This led to pulling back access to the tool that clearly worked and replacement with a rate-limited tool.”
– US-based Platform Leader
“I fear that AI will eventually take my job.”
– US-based Platform Leader
“My main concerns are quality drift (AI-generated code that looks right but embeds subtle bugs), security and compliance risk (leaking sensitive data, introducing vulnerabilities, weak patterns, exposing IP), and maintainability (inconsistent style, duplicated logic, shallow tests, harder reviews). I also worry about slower decision-making from over-reliance, skills atrophy in debugging/design, and new operational burden when AI is in production. Finally, AI can amplify throughput without improving alignment, so teams may ship more while accumulating more tech debt and KTLO.”
– Canada-based Engineering Manager
“The biggest concern for me is the occasional instances of gibberish still associated with AI. We’ve already come across accuracy issues on two separate occasions and that’s alarming.”
– US-based Engineering Manager
“Engineers pushed to deliver more, faster means less code ownership and increased potential for lower code quality. Less time spent reviewing code and analyzing exceptional or edge cases.”
– US-based Individual Contributor
One of the promises of AI coding tools is that engineers will have more time for roadmap work and less for “Keeping the Lights On” maintenance.
So far, that promise seems to be coming to fruition, especially for teams further along on AI adoption. 60% of teams with very high AI adoption expect time on roadmap work to increase, compared to 38% of low adopters.
“The advent of more effective models in late 2025 and early 2026, and a rising awareness of the costs being encountered, has changed the calculus. We expect the industry to pay more attention to token costs and ROI.
This will put pressure not only on the technical ability to track and attribute token spend data, but also on tracking productivity metrics in order to measure ROI.”
Productivity up. Budgets up. Outlook up.
of respondents believe productivity on their team has increased in the past 12 months
expect their department’s budget to increase over the next 12 months
84% of respondents said engineering productivity is a top management concern
The full 2026 State of Engineering Management Report has 18 pages of data, benchmarks, and insights from 636 engineering leaders.