Earlier this fall I sat down with Neil Kimber, Operating Principal, Technology at Accel-KKR, a tech-focused PE fund, to get his thoughts on the impact of generative AI in engineering. We touched on topics including team health, measurement, adoption and more. Keep reading for the full conversation.
Andrew Lau: Can you share your career trajectory and more about your current role, as well as some background on Accel-KKR and its portfolio? What does your day-to-day look like? What are some of your main objectives?
Neil Kimber: I’m an Operating Principal at Accel-KKR, a private equity company that invests in enterprise software companies. Over the course of our 20-plus year history as a firm, we’ve invested in over 450 software companies – our current portfolio includes over 60 companies. As Operating Principal, Technology I work with the CTOs across all of our companies to help support their success.
Prior to AKKR, I served as CTO or VP of Engineering at a number of software companies. I taught myself to code when I was 11 years old and never stopped, though these days I only code for fun. My day is busy and always different: I may be in meetings with our portfolio companies, industry partners, or AKKR’s investment professionals; helping a portfolio company solve a problem, deal with an issue, or execute on an initiative; or performing technical due diligence on a target investment.
Andrew: Do you encourage the AKKR portfolio to leverage generative AI generally? GenAI coding assistants, specifically? Why or why not? If so, how do you do this?
Neil: Absolutely. I see genAI as a significant technological advancement; it’s one of the most exciting times in technology that I can remember. It’s a complete paradigm shift, and that creates opportunity for new types of innovation that just weren’t possible before. I’ve been encouraging our portfolio companies to embrace genAI and look for interesting use cases that benefit their company, people or customers.
Regarding genAI coding assistants, this is a moving target. There are some interesting legal questions that still need to be fully answered, and there are court cases pending on the matter. Personally, I feel that coding assistants are amazing. I still write code, albeit infrequently. The biggest problem with leisure programming is working quickly to accomplish what you want when you’re not familiar with the library or framework – it’s what we call friction. I find coding assistants remove that friction and let you move quickly. They make programming fun again. My job isn’t to write code, but when I do, I find myself many times more productive using genAI.
But there are questions about IP ownership, so in a commercial sense there is a problem here that still needs to be resolved. Personally, I think genAI breaks the current IP and copyright frameworks. I honestly don’t think that, a few years from now, companies will be able to compete without using genAI to code. But does that mean no one can copyright their codebases because large chunks are written by genAI? What does it mean if people are submitting genAI-generated code to OSS projects without attributing it to AI? There are only so many ways to sort a list… does no one own any code, or do we end up with the opposite, where OSS projects ‘own’ all the code? There is going to have to be some serious rethinking about the impact of AI on copyright law.
Andrew: Are there exciting or surprising examples you can share about how your portfolio companies are using genAI as it relates to engineering?
Neil: While I can’t share specifics about our portfolio companies, I’ll tell you something that I’m still looking at: using AI to modernize aging technology stacks. For example, imagine inheriting a product that has one million lines of T-SQL stored procedures and you want to migrate the database to Postgres. There are tools that use traditional programming techniques to convert the code: you can grab the T-SQL grammar, feed it into ANTLR to generate a parser, then write the code that emits PostgreSQL. That’s how many tools currently work.
I’m interested in moving the stored procedures out of the database entirely and generating a complete business layer automatically. I took the complete set of SPs written in T-SQL and wrote some code to iterate through them and feed each one to GPT-4 via the API, asking GPT to convert it to C# using EF Core. This was about a year ago, and it did a ‘reasonable’ job. It was really good at the easy CRUD code, but it struggled with some of the more complex business logic. It also struggled because the output context window was 4k while the input context window was 128k, so for long SPs the output would just stop.
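A rough sketch of what that conversion loop might look like. The splitting logic, prompt wording, and the `complete` callable (a thin wrapper around whatever chat-completion API is in use) are my assumptions for illustration, not Neil’s actual code:

```python
import re

# Hypothetical prompt; real instructions would be far more detailed.
PROMPT_TEMPLATE = (
    "Convert the following T-SQL stored procedure to a C# method "
    "using EF Core. Preserve the business logic exactly.\n\n{sql}"
)

def split_procedures(script: str) -> list[str]:
    """Split a T-SQL script into CREATE PROCEDURE batches on GO separators."""
    batches = re.split(r"(?im)^\s*GO\s*$", script)
    return [b.strip() for b in batches
            if re.search(r"(?i)\bCREATE\s+PROC(?:EDURE)?\b", b)]

def convert_all(script: str, complete) -> dict[str, str]:
    """Feed each stored procedure to the LLM via complete(prompt) -> str."""
    results = {}
    for sp in split_procedures(script):
        # Pull the procedure name (e.g. dbo.GetUsers) for the result map.
        name = re.search(r"(?i)PROC(?:EDURE)?\s+([\w\[\]\.]+)", sp).group(1)
        results[name] = complete(PROMPT_TEMPLATE.format(sql=sp))
    return results
```

With an SDK such as OpenAI’s, `complete` could be a lambda that sends one chat message and returns the reply; long procedures would still need chunking to stay inside the output window he mentions.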
So, I set this aside for a while. When o1 came out, I took another look. It has an output context window of 32k and it’s better at code generation. Suddenly we’re one step closer. I feel as though we’re on the cusp of being able to use AI to auto-modernize large chunks of legacy systems. This has the potential to save businesses millions of dollars.
Consider using genAI to generate test cases and test data for the existing system, then converting the codebase to something more modern, running the test cases in parallel against each codebase, and confirming the results match. We’re still not there, but we’re getting really close.
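That parallel-verification step reduces to a very small harness. Here `legacy_run` and `modern_run` are hypothetical stand-ins for invoking the old and new systems; nothing below is a real tool:

```python
def parity_check(test_cases, legacy_run, modern_run):
    """Run every test case against both systems; return the divergences."""
    mismatches = []
    for case in test_cases:
        old, new = legacy_run(case), modern_run(case)
        if old != new:
            mismatches.append((case, old, new))
    return mismatches  # an empty list means behavior matches on this data
```

An empty result is the signal that the modernized codebase reproduces the legacy behavior on the generated test data; each mismatch tuple points straight at an input worth investigating.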
Andrew: What are some signs the CEO / CPO / CTO should keep an eye out for to determine whether or not this is an area worth investing in? For example, there’s measuring impact as a means of informing your genAI investment decisions and measuring impact to improve engineering operations, inform headcount planning, etc. What do you think is important to consider and measure in each of these scenarios?
Neil: GitHub ran their own research on productivity gains with GitHub Copilot two years ago. In their survey, 88% of developers self-reported being more productive when using Copilot. GitHub also ran an A/B test on building a real project; the group using Copilot completed it in roughly half the time. My own personal experience tells me that coding assistants are an accelerator and a net win.
Finally, let me tell you about the experience of a software company that I am aware of. They wanted to introduce Copilot, but some of the engineering team were uncomfortable. So, they made Copilot available and optional – if you want to use it, go ahead; if you don’t, then don’t. In January 40% of engineers were using Copilot, by the end of February it was 85%, and by the end of the test period it was 100%. They let Copilot adoption grow organically; they didn’t need to push or require it.
On top of that, it’s currently $20 per month for one Copilot seat license. If it saves one developer 20 minutes a month, it has already paid for itself. The reality is that it is likely the single biggest ROI you can get on developer tooling today.
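The arithmetic behind that claim can be made explicit. The $20 seat price is from the interview; the $60/hour loaded developer cost below is my assumption, chosen as a plausible figure that makes the quoted numbers balance:

```python
SEAT_COST = 20.0     # USD per month for one Copilot seat (from the text)
HOURLY_COST = 60.0   # assumed loaded developer cost, USD per hour

def breakeven_minutes(seat_cost: float, hourly_cost: float) -> float:
    """Minutes of developer time per month worth the seat price."""
    return seat_cost / hourly_cost * 60.0

# At $60/hour, exactly 20 saved minutes a month covers the $20 seat.
```

At higher loaded costs the break-even point drops even further, which is the substance of the ROI argument.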
I’m continuously looking at solutions coming to market, and with genAI we’re in a golden age where something new and promising appears every month. I engage with industry partners and sometimes run proofs of concept to gauge a solution’s impact.
If a business decides to add genAI solutions to improve efficiency in its engineering output, then the best measure is the sentiment feedback from the users – the engineers themselves.
In our companies’ products, I encourage teams to test out new features. For example, several of our companies are testing Amazon QuickSight’s NLP querying features. It’s low risk to test features like these because the cost to implement is low. If you can get to a usable PoC, then you put the solution into the hands of the customer and, from there, let them tell you if there is value.
Andrew: Where do you see AI coding tools a year from now? Beyond genAI, what other developments do you see on the horizon?
Neil: I see wins in a number of different areas. First, we’re going to continue to see advances in the speed, quality and economics of using various AI tools. We are likely to see continual advances with higher levels of abstraction. We’re going to see more and more interesting tooling, some of which we may not be able to conceive of today. I’ve seen people experimenting with a mix of agents for building code, and I think we’ll likely start to see this type of technology embedded into IDEs. You could have agents that act as architects, code quality experts, performance experts and so on. Next, you could ask an agent to write a complex program. The architect would structure the codebase, the coding agent would write individual functions, the QA agent would write unit tests… you start to get the idea.
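A speculative sketch of that agent hand-off, purely to make the idea concrete. The role prompts and the `complete` LLM callable are invented for illustration; no existing product is claimed to work this way:

```python
# Each 'agent' is just a role-specific prompt; one stage's output feeds the next.
ROLES = {
    "architect": "Design the module structure for this request:\n{work}",
    "coder": "Implement the code for this design:\n{work}",
    "qa": "Write unit tests for this implementation:\n{work}",
}

def run_pipeline(task: str, complete) -> dict[str, str]:
    """Chain the agents: each stage sees the previous stage's output."""
    outputs, work = {}, task
    for role, template in ROLES.items():
        work = complete(template.format(work=work))
        outputs[role] = work
    return outputs
```

A real system would add the steering loop he describes next, with a human (or a reviewer agent) accepting or redirecting each stage before it feeds the following one.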
I don’t think developers go away; remember, these solutions are assistants. A person still needs to review the code. It may be that you work with each agent as it works – you’re effectively steering the results. The point is that this would rapidly accelerate the development process, enabling those who embrace the technology to operate more efficiently. We’re starting to see things like this now: Replit announced Replit Agent earlier this year.
Andrew: At Jellyfish, we’ve been focused a lot on team health lately, specifically Developer Experience. Can you share more around how your portfolio companies approach developer sentiment / team health and experience?
Neil: I think it’s really important, and it’s something I’ve been thinking about. It’s easy to look at the high-level results of a development team and miss issues at the individual level. We recently put together an engineer satisfaction survey that we’re trying out at the moment; we want to cover the ‘S’ (satisfaction and well-being) in the SPACE framework. The goal is to get information beyond the simplistic reporting that flows upward from team managers. Our approach is to make sure engineers are confident that the survey is anonymous, to use one-time survey links (to prevent survey spamming), and to tailor questions by role: developers, QA, product. It’s an early attempt, but an effort nonetheless to assess developer sentiment and team health.
Going back to genAI, studies seem to consistently show that developers have higher job satisfaction when using assistants. So, tooling is important. I mentioned friction earlier: as a developer, friction is something that always destroys my satisfaction. Good tooling removes friction.
Andrew: Can you share more around how you got involved with the AWS Prompt100 program?
Neil: AWS has a private equity group that works directly with PE firms to help their portfolio companies get the maximum benefit from the AWS platform; Microsoft has a similar setup. Last year, when AWS released Bedrock, the AWS PE group set up an initiative, called Prompt100, to help PE portfolio companies learn and leverage the Bedrock services. AWS provided access to expertise on genAI technologies and also brought in AWS partners to help portfolio companies implement proofs of concept of genAI solutions. A number of our portfolio companies took advantage of the initiative; it allowed them to explore and experiment with AI in their SaaS solutions without impacting their existing roadmaps.
Andrew: Finally, what are some books you’ve read recently that you’d recommend?
Neil: I’ll give you two books, one fun and one technical.
I rarely have time to read novels these days, but over the summer I got to read a fun book by a swimming pool. It’s called Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin. It’s about two people who meet as young teenagers and eventually end up building video games together. It’s an interesting mix of nostalgia, tech, and the struggles of human relationships. I enjoyed it.
The second book I’m still in the middle of reading: The Singularity is Nearer by Ray Kurzweil, a follow-up to his 2005 book, The Singularity is Near. If you’ve never read a Kurzweil book, you should; he’s a fascinating character.
If you enjoyed this interview and want to hear more from Neil, follow him on LinkedIn here. Similarly, tune into more executive interviews with Andrew on his podcast 5 to 9. Listen and subscribe here.
Note: The views and opinions expressed are those of the speaker and do not necessarily reflect those of Accel-KKR. Accel-KKR has not verified the accuracy of any statements by the speaker and disclaims any responsibility therefor. Accel-KKR’s participation in this article does not serve as endorsement of Jellyfish, its products and / or services.