How Teams Can Collaborate on AI Coding Agent Effectiveness with Templates by Ed Lyons


As the use of coding agents has become fully mainstream, it is now time to help teams on large projects work together to make their AI use more effective, rather than continuing to leave individuals to improve results by themselves.

For a while now, developers have been working on their own, crafting prompts and paying attention to what goes into the context of those prompts. At best, each has access to an instruction file for the project, such as CLAUDE.md, GEMINI.md, or copilot-instructions.md. These files are usually generated by the agent itself after an initial review of the codebase.

But even if you generate one big instruction file and place it into your repository, it tends to be forgotten quickly. Like all other documentation, it becomes outdated as the code changes. Problems in agent results get blamed on the prompt or the model, because no one remembers that the instruction file contains an outdated pattern that causes confusion when applied to code that now does things differently.

Even if you wanted to update the big markdown file, doing so can be intimidating. These files often contain a lot, from code samples to descriptions to architectural principles, and since they have no defined structure, updating them is a messy job.

And it is much easier to ignore that file and refine your own prompts to solve your generated-code quality problems. In fact, many developers have their own highly specialized techniques and tooling. They may even have multiple specialized agents with instructions that go beyond the one big file everyone shares.

Yet leaving AI effectiveness and improvements up to individuals results in wildly different rates of success and adoption on the team. Worse, non-standard code creeps into projects due to differences in techniques, too-fast PR reviews, and the amount of experience a developer has in a particular area of the code. When someone is working in code they do not know well, it is far more likely they will submit code that is not what the team lead wanted.

A good way to get the team collaborating and improving results together is to move away from reliance on one instruction file and toward templates that tell agents exactly how to generate specific components. 

I call these files templates because they aren’t quite like the auto-generated markdown files for your project. Those files do a lot of explaining about the codebase that isn’t relevant to an agent creating the pieces. That is far better than having nothing, but I have found that a set of specific templates produces more accurate results and is easier to maintain.

I also call them templates because it is a concept developers already understand. I can say, “When you create a new pattern or component that others need to use, write a little markdown file explaining exactly what it is and how to build it, and put it into our ai-templates directory. Then, when someone wants to create one of those components, they can just point their agent at your file.”
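As an illustration, a template file might look like the sketch below. The component, directory names, and rules are all hypothetical; a real template would encode your team’s actual conventions:

```markdown
# Template: Paginated REST list endpoint

Use this when a ticket asks for a new "list all X" endpoint.

## Rules
- Put the handler in `src/api/handlers/`, one file per resource.
- Accept `page` and `pageSize` query parameters; default `pageSize`
  to 25 and cap it at 100.
- Return the shape `{ items: [...], page, pageSize, total }`.
- Validate inputs first; on failure, return a 400 with our standard
  error body.
- Reuse the shared auth middleware; never check tokens inline.
```

The point is that every line is a directive about what code to write, nothing else, so the file stays short enough that anyone on the team can review and correct it.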

I don’t recommend relying on your agent to generate these for you, though using an agent to create the first version can be a good starting place. Agent-generated instructions often contain a lot of noise that is not about what code to write. We keep hearing that coding agents can get confused on long tasks by unhelpful information, so it is good to strip out everything that isn’t a clear directive on how to write the component.

We also use another folder for prompt templates with placeholders for ticket-specific details; these describe all of the templates and features needed to create a larger component.
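A prompt template of this kind might look like the following sketch, with `{{...}}` placeholders to be filled in from the ticket. The file names and placeholder syntax are just illustrative:

```markdown
Build the feature described in ticket {{TICKET_ID}}: {{TICKET_SUMMARY}}.

Generate each piece by following these templates, in order:
1. ai-templates/rest-list-endpoint.md
2. ai-templates/error-handling.md
3. ai-templates/integration-test.md

Ticket-specific requirements:
{{TICKET_REQUIREMENTS}}

Do not introduce patterns that are not described in the templates above.
```

Because the list of referenced templates is written down once, the developer filling in the placeholders does not have to remember which cross-cutting concerns apply.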

All of these files are in the repository, and anyone can improve templates or prompts at any time. I recommend that a PR with significant changes to the code should come with refinements to the templates in that area, if possible. In this way, the instruction files are not drifting from the codebase.

There are also benefits of doing things this way that are not strictly about AI. 

The first benefit is getting all technical requirements into prompts. When I was experimenting with templates and prompts on real project tickets, I noticed that a big problem in my results came from omitting features I needed. For example, my first prompt attempts had no instructions about error handling, so error handling was absent from the generated code. It wasn’t until my fourth attempt that I finally remembered to include everything. Of course, this also happens when writing code by hand. But with proper templates, I do not have to remember all the details.

The other benefit of using AI with template instruction files comes from making generation the standard practice going forward, which changes the incentives around fixing messy code or incorrect pattern usage. In my initial experiments, I found areas with recurring inconsistencies in patterns, or code that needed refactoring and hadn’t gotten it yet. In those cases, I was very motivated to clean up the code and then document the one correct way to handle those pieces, knowing I wouldn’t need to deal with them again.

In another case, I refactored a large SQL script from a file that was hard to understand to one that focused on human readability. Ironically, this was because I wanted AI to handle all future changes. When you seek to orchestrate a lot of generation, you should think about a human reviewing the output. I wanted my SQL file to be so easy to understand that even a quick glance at changes would be enough to know whether it was correct.

Good practices around consistency, requirements, and readability are things we should always be minding. But AI generation of code promises previously unachievable efficiency gains from getting these things right, and since agents are not accurate out of the box, constraining them and providing detailed specifications are the only way to get those benefits without a lot of human repairs.

What I like best about this approach is that everyone on the team benefits when one developer gets the template instructions correct in an area where they worked. No one has to understand the whole codebase, and if your feature crosses into code you don’t know, the templates will ensure you don’t make mistakes while you are there.

Finally, for developers who are less skilled in using coding agents or more skeptical of their effectiveness, this approach will improve their results, allow them to make small contributions to instructions, and get them off the sidelines. 
