Lessons from 2025: Software's Year of the Coding Agents
Tunnel. By Ed Lyons via Midjourney
It’s nearly 2026. After decades of writing code by hand, and in less than one chaotic year, we software developers find ourselves in an entirely new world. Our machines can not only write most of the code for us, but far more code than we could ever have written ourselves. Vast amounts of new software are now being generated, sometimes by non-programmers. If software architecture were as visible as civil engineering, we would see towering new skyscrapers appearing all over the globe.
So what have we learned in 2025, in this extraordinary year of software agents, that can help us better adapt to this new world in 2026?
For us professional developers, the job has changed, the opportunities have changed, and the risks have changed. Even the culture has changed, moving away from pure determinism to accommodate superstitions and alchemy.
The impossibly energetic and productive coding agents that have colonized our industry are also quite difficult to manage properly, especially across teams whose members are not that interested, or who disagree about how to tame them.
As for me, here in December 2025, I almost never type in code anymore. And I wouldn’t go back to what I was doing for decades.
It is hard to accept what has happened so quickly. It has helped me to imagine how our much-younger selves, back in the 1990s, would have viewed the job of programming as it was practiced right up until this year: we were still writing for-loops by hand, and still sometimes getting off-by-one errors. What would they have said to us about that? Probably, “Why are you still doing that? Aren’t robots writing all the code for you? What has gone wrong?” That thought helps me accept that this year brought a fulfillment of what we would have imagined long ago, even if what we got was not truly intelligent machines, but a world of magic lamps.
A review of this tumultuous and unbelievable year is in order. There are lessons available from 2025 that can guide us toward what we need to do next year to get teams and organizations using coding agents more effectively.
At the start of this year, there was a lot of talk about coding and AI. Cursor was gaining traction, and many developers were getting inline help through GitHub Copilot. Having seen Cursor, I was concerned about the future of my career. What if this technology got far better someday? But I wasn’t motivated to jump in and really play with it. No one I knew and respected in the field was using it at work. AI code suggestions seemed good enough for me.
That is a lesson that matters now as much as it did then: enthusiasm for working with coding agents moves from person to person, not from team to team. You can mandate AI for your company, but you won’t get much out of it without enthusiasm and ambition from your people. The way to spread productive adoption is by seeing your organization as a social graph, and figuring out which people could activate interest among others close to them.
In late February, Claude Code appeared as a public preview. I didn’t notice the release until a fellow developer emailed me two weeks later and told me that it was expensive, but incredible. He was a long-time friend and a brilliant engineer. I decided that I had to give it a try, and got hooked on it. Other long-time friends began talking about Claude Code. Their testimonies reinforced my will to spend time learning about it, and even to do my own late-night experiments.
Claude’s then-unusual command line interface also did something that, in hindsight, is noteworthy: it got me out of my normal habits. Cursor felt too much like an IDE. And a lot of developers who were using GitHub Copilot for inline suggestions held onto existing habits far longer. (Microsoft was quite late to make ‘Agent Mode’ the default interface in Copilot for Visual Studio Code.) For most Copilot users, it was merely an assistant to what they were already doing. Using Claude Code got me out of the IDE and into an entirely different agentic workflow. This is another lesson I learned about adoption: showing people how to improve their existing tasks is not as potent for changing behavior as getting them to do something very different, which engages their minds far more.
In the spring, adoption of coding agents was exploding, and there was so much noise about them on the web! Much of the industry discussion was being warped by promoters showing unrealistic demonstrations, leading to cynicism from developers trying to use those techniques at work. At the same time, prominent AI contrarians constantly pointing out the limitations of models for general use were also reducing enthusiasm for coding agents among many developers who were looking for reasons to stay on the sidelines.
Among my peers who had been in the field for decades, skepticism was the dominant posture. And why not? How could these agents be as good as we are?
What I learned by early summer is that I had to stop listening to people on social media, ignore the skeptics, and find a small group of friends and strangers who were working hard at improving their results on professional projects. I invested time in reading long blog posts about people’s custom setups and the often strange incantations they uttered to get better results. What counted was their ambition. Yes, a lot of what I perused ended up not being useful. But I usually picked up a new idea that really helped.
Even today, when the skeptics and buskers no longer dominate the discourse, it is still important to steer people to high-quality sources, such as the blogs of Simon Willison or The Pragmatic Engineer. Otherwise, the confusion of weekly headlines about agents will sap their motivation to invest in any particular technique.
Alongside the release of Claude Code, the other big development of the year that had a huge impact on my commitment to agentic coding was the release of GPT-5 in August. That model was estimated to have cost around one billion dollars to train, and OpenAI had delayed it for months because they wanted it to be a game changer for generative AI. Up until that release, I was worried that some huge breakthrough would invalidate all I had learned and might even replace me in the foreseeable future. But once GPT-5 showed that an extraordinary concentration of money and great minds at OpenAI could not change the game, I decided the technology had settled into a very gradual ascent that we would be on for a while. It was safe to commit to the paradigm we had, and to build skills. I set aside six months of loud LinkedIn posts telling me that “software engineering is over.” For it was not.
It was also at that point that I became very interested in coming up with techniques to scale skilled use of agents across teams. Because, as with a lot of generative AI technologies, individuals have done far better than organizations at getting value. In fact, throughout the previous months, one thing I noticed was that the best agent users tended to be lone wolves within their organizations. They were dedicated to making the agents work better for themselves, but among co-workers they were often alone in their vigor.
It is much harder to figure out how your techniques for better results can be used by your coworkers, especially if they are not dedicating as much time as you are to the craft of managing agents.
Fortunately, in the fall of this year, coding agent vendors added more ways to scale AI techniques across teams and organizations, such as Anthropic’s rollout of Claude Skills and marketplaces in October. These were crucial developments.
Vendors also started adding many more features dedicated to improving the developer experience. It was a welcome sign of maturity: they were acting more like traditional tool vendors, and these quality-of-life improvements aid the adoption of new workflows.
So what lies ahead? Now that we are in this new world of generated code and rapidly evolving techniques, what should we be focusing on in 2026?
In addition to getting more people skilled in agentic development techniques, we also need to figure out how these agents will interact with other tools and project processes. We must think hard about how much generated code people can realistically review, and whether the traditional PR process should be abandoned in favor of something else. We also have to keep talking about how less senior engineers can gain traditional technical skills while agents are doing so much of the work that would previously have educated them.
Lastly, we need to make more unstructured time for people to just sit and talk about how the journey is going without their bosses around. I never would have made it this far if I wasn’t able to sit down with developers I know to talk about how it’s all going in this time of unprecedented change. Genuine human solidarity goes a long way in helping people adapt, and these extraordinary agents cannot fill that very human need.