From Coders to Curators: How AI-Generated Code Will Redefine Software Engineering

Brian Fox, Co-founder and CTO, Sonatype

Brian Fox, Co-founder and CTO, Sonatype, in an interaction with Janifha Evangeline, Editor of CIOTechOutlook, discusses how AI-generated code is transforming the software engineering ecosystem. He highlights the implications for the future talent mix, governance obstacles, outsourcing models and new classes of digital products, while also stressing the critical role of domain expertise and responsible adoption in unlocking its full potential.

Brian Fox is Co-founder and Chief Technology Officer of Sonatype and has more than two decades of experience in software development and open-source innovation. A core contributor in the Apache Maven ecosystem, he created the maven-dependency-plugin and the maven-enforcer-plugin, and oversees Maven Central, the world’s most extensive aggregator of open-source Java components. He is also a Governing Board member of the Open Source Security Foundation, where he drives initiatives to strengthen open-source supply chain security.

In your view, how fundamentally will AI-generated code reshape the future talent mix and skill sets required in software engineering over the next five to 10 years?

Generative AI makes programming accessible to people who are not traditionally trained as software engineers. It may lower the entry barrier and enable more people to create software, but producing sustainable, maintainable and scalable architectures remains far more challenging than generating code. Less-experienced users can spin up code quickly with these tools; however, they will struggle to advance that code into production systems. Additionally, the real concern lies in how the industry manages entry-level roles. If companies rely too heavily on AI as a substitute for junior developers, aspiring engineers may lose professional development opportunities, leading to a talent shortage down the line.

What ethical or governance frameworks do you think the industry needs to establish to manage AI-generated code responsibly?

One of the greatest risks today is indiscriminate model adoption, often from open platforms. Organizations may not fully understand whether these models are trustworthy, safe or aligned with their intended use. The rush to "do something with AI" has led organizations to ignore reasonable governance, posing real risk.

It is important for governance to provide clear guidance on which models are used, their origins, and the contexts in which they are applied. Moreover, governance should extend to tools such as Copilot or Cursor. If a developer uses the free version of either tool, their organization’s data could be unintentionally used for training purposes, creating significant risks of intellectual property leakage. To prevent this, organizations should establish safeguards such as access controls, approved tool lists, and clear usage policies. Without such measures, the industry risks data breaches, model manipulation, and reputational damage.
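
To illustrate what an approved-tool safeguard might look like in practice, here is a minimal sketch of a policy check. The tool names, tier labels, and policy contents are hypothetical examples, not a real organization's list or any vendor's actual product tiers.

```python
# Hypothetical approved-tool policy: maps each AI coding tool to the
# subscription tiers an organization permits for use on company code.
# Tool names and tiers below are illustrative assumptions only.
APPROVED_TOOLS = {
    "copilot": {"business", "enterprise"},  # free tier excluded in this example
    "cursor": {"pro", "business"},
}

def is_permitted(tool: str, tier: str) -> bool:
    """Return True only if the tool/tier pair is on the approved list."""
    return tier.lower() in APPROVED_TOOLS.get(tool.lower(), set())

# A developer's tool choice can then be checked before onboarding:
# is_permitted("copilot", "free") -> False (not on the approved list)
```

In a real deployment this check would sit behind network access controls or an internal developer portal rather than a standalone function, but the principle is the same: an explicit allowlist, with everything else denied by default.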

Do you see AI code generation narrowing the digital divide between large enterprises and startups, or will it create new competitive gaps?

This could go either way. AI could allow smaller teams to build functional products with fewer engineers; however, the long-term trend may favor large enterprises, which have the resources to fine-tune and train their own models and to own the underlying infrastructure. Similar to cloud computing, AI democratizes many powerful capabilities but still enforces reliance on hyperscalers. Startups can use prebuilt models through large providers, but those same providers can re-create those innovations and build them at scale. This dynamic mirrors the cloud era: while startups benefited from faster time to market, the structural gap between the "haves" and "have-nots" ultimately grew wider.

Do you think AI-generated code will change the way companies approach software lifecycle management, specifically maintenance and updates?

Yes, and perhaps not for the better. Generative AI can certainly speed up initial development; however, it may also introduce latent inefficiencies, vulnerabilities and poor designs that complicate maintenance. As a result, debugging, refactoring and long-term support may consume more resources than the initial build. Until tooling and practices mature, maintenance could become the most significant bottleneck in AI-driven software delivery.

Could AI change the traditional roles of architects and senior developers from code authors to code curators and validators?

Yes. Generative AI can automate routine architecture patterns and implementations. This will free architects and senior engineers to focus on validation, refinement, and governance. AI may propose architectures, but only a trained engineer can evaluate their viability, security and maintainability. As such, AI complements the contributions of practiced professionals rather than replacing them.

How might AI-driven code generation reshape outsourcing strategies and global delivery models?

AI could change outsourcing dynamics, especially for projects that were previously outsourced due to a lack of in-house expertise. Organizations may now experiment with AI-generated code for early-stage or lower-priority initiatives. In some cases, this reduces reliance on outsourced services; however, obstacles to maintenance and scalability remain.

Over time, some outsourced development for simpler or experimental projects may be replaced, but complex, large-scale systems will continue to require skilled human teams. As a result, the outsourcing model could transition to higher-level services such as integration, customization and the continued evolution of systems.

Beyond efficiency, where do you see the biggest opportunities for AI-generated code to enable entirely new classes of digital products or services?

The transformative opportunities lie in AI-native products such as agents, platforms, and digital assistants, which were previously impossible to build. AI has the potential to introduce a new generation of software that can reason, learn, and adapt in real time, unleashing an entirely new category of digital services. For instance, a personal AI assistant capable of handling multi-step processes across systems, or an autonomous platform that completes workflows end to end, represents an entirely new possibility.

What role could domain knowledge play in guiding AI tools to generate more context-aware and business-aligned code?

Domain expertise is still critical. Experts can write better prompts, critically assess AI output, and refine it toward the desired outcome. By contrast, non-technical users may take the AI’s output at face value, lacking the ability to assess its accuracy or relevance. Just as product leaders apply their subject expertise to guide human engineering teams, that same expertise is now used to direct AI systems. The stronger the domain context, the more accurate and business-relevant the AI-generated code will be.

Do you have any additional information to add?

A key challenge lies in ensuring the timeliness of AI training data. Many generative models are trained on snapshots of code and dependencies that may already be outdated. An AI tool might recommend upgrading to what was the latest version of a library a year ago, without recognizing a since-discovered vulnerability or a more recent release.
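
The staleness problem described above can be sketched in a few lines. This is a hypothetical illustration: the package name, version numbers, and advisory entries are invented, and in a real pipeline the "latest" and "vulnerable" data would come from a live package index and vulnerability feed rather than hard-coded dictionaries.

```python
# Hypothetical freshness check for an AI-suggested dependency version.
# "examplelib" and all versions/advisories below are illustrative only.

KNOWN_VULNERABLE = {
    # (package, version) pairs from advisories published after the
    # model's training cutoff -- exactly what a stale model misses.
    ("examplelib", "2.3.0"),
}

LATEST_KNOWN = {
    "examplelib": "2.4.1",  # would be fetched from a live index in practice
}

def parse(version: str) -> tuple:
    """Turn '2.3.0' into (2, 3, 0) for numeric comparison."""
    return tuple(int(part) for part in version.split("."))

def review_suggestion(package: str, suggested: str) -> str:
    """Classify an AI-suggested version against live advisory/index data."""
    if (package, suggested) in KNOWN_VULNERABLE:
        return "reject: known vulnerability"
    latest = LATEST_KNOWN.get(package)
    if latest and parse(suggested) < parse(latest):
        return f"stale: newer version {latest} available"
    return "ok"
```

A check like this, fed by real-time data, is precisely the kind of augmentation that protocols connecting models to live sources are meant to provide.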

Technologies such as the Model Context Protocol (MCP), which connect an artificial intelligence model to real-time, real-world data sources, are critical to addressing this limitation. Without this type of augmentation, organizations risk deploying outdated or insecure code into production environments. In summary, generative AI provides phenomenal capabilities and productivity, but it must be governed responsibly, with the understanding that the greatest benefits come not from replacing human expertise, but from enhancing it.