
Our 10 rules of using coding agents

Khash Sajadi
Oct 24th, 2025

Like many others, as a company, we are going through a steep learning curve with coding agents. As we all learn together how to face the challenges of this new era of coding, I wanted to share our learnings with you and, in turn, benefit from what you have learned on your own journey with coding agents.

1. Treat coding Agents like junior engineers

Many have said this point before, and I agree with it in general: Coding agents are like junior engineers. This means they are eager, make mistakes, and need someone to watch over their work.

This also means using Agents effectively depends a lot on your ability as a company to onboard and manage junior engineers. If your company is good at onboarding, training, and reviewing the work of junior engineers, you are better positioned to benefit from Agents too.

For us, the handover process with Agents is very similar to the handover process for a junior engineer:

  1. Your Agent is not going to have a better idea about what you want to achieve than you do. In other words, if you don’t have a clear idea about what you want, your agent won’t either.
  2. The best coding Agents using the best models in the world are no replacement for being able to articulate your idea clearly. This is another aspect of knowing what you want: if you are not clear about what you want, you can’t explain it clearly to others, Agents included.
  3. You will not give a junior engineer a large project. Don’t try that with an agent either. Break up big tasks into smaller ones. More on this later.
  4. Sometimes you might want to hide your ultimate goal from a junior engineer to spare them the confusion and keep them focused on the step they are working on right now. Understanding an ambitious goal requires visibility that a junior engineer might not have, and an Agent won’t have either. This means breaking the task into pieces that can be explained to, and understood by, an Agent.

2. Building bottom-up vs top-down

When building frameworks, we often use the rule of “3 examples”: We need at least 3 examples of use cases before generalizing them into a framework with abstractions. This is one case where Agents and human developers do not work the same. Let’s assume you want to build an API. In a normal case, you might sketch out your first 3 endpoints before trying to come up with a framework for the way you serialize the results or validate the payloads.

However, we found that building the framework first works better with Agents. There are two reasons for this:

  1. More examples mean a larger context, which leads to more uncertain outcomes from the Agent. This is not necessarily bad, but shepherding the Agent through a large context is more work.
  2. Agents are better at refactoring than humans, so you can afford to sketch the framework first and then refactor it many times at Agent speed. While your context will grow with each refactoring, the scope of each refactoring can be kept small across iterations, making it easier to communicate. A minimal sketch of what framework-first might look like follows this list.
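
To make this concrete, here is a minimal sketch of building the framework layer before any endpoint exists, assuming a TypeScript API. All names here are hypothetical, made up for illustration:

```typescript
// Hypothetical first sketch of the framework layer, written before any
// endpoint exists.

interface Validator<T> {
  // Return the parsed payload, or throw with a descriptive message.
  validate(payload: unknown): T;
}

interface Serializer<T> {
  // Turn a domain object into the JSON shape the API returns.
  serialize(value: T): Record<string, unknown>;
}

// An endpoint is just the composition of a validator, a handler, and a serializer.
function endpoint<In, Out>(
  validator: Validator<In>,
  handler: (input: In) => Promise<Out>,
  serializer: Serializer<Out>,
): (payload: unknown) => Promise<Record<string, unknown>> {
  return async (payload) => {
    const input = validator.validate(payload);
    const result = await handler(input);
    return serializer.serialize(result);
  };
}
```

With a contract like this in place, each new endpoint becomes a small, well-scoped task you can hand to an Agent, and refactoring the contract itself stays cheap.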

3. Lean into components and services

The term component is usually used in the context of UI, but it doesn’t have to be. On the backend, you can call components “services” if you like. Whatever the terminology, using components helps a lot with Agentic coding. Components are a good way to enforce clean interfaces (or contracts) between the interacting pieces of your code.

This is not to say that using components or services without Agents is a bad idea; that is up to you and your needs. But using them with Agents is a great way to get better results from them.
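
As an illustration of what we mean by a contract (the BillingService below is hypothetical), an interface like this gives the Agent a hard boundary to work against:

```typescript
// Hypothetical contract for a backend "component". The Agent is told to
// program against this interface, never against its internals.

export interface Invoice {
  id: string;
  amountCents: number;
  issuedAt: Date;
}

export interface BillingService {
  createInvoice(customerId: string, amountCents: number): Promise<Invoice>;
  listInvoices(customerId: string): Promise<Invoice[]>;
}

// Implementation details (database, payment provider) live behind the
// interface, so an Agent changing them cannot silently break callers.
```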

4. Use the REPEX flow

Review, Plan, Execute: REPEX. This is how we describe our approach to most Agentic coding. Suppose I want to add functionality to, or refactor, the instrumentation subsystem of our codebase, and I know what I need to achieve. Here is how we do it with REPEX:

  1. Ask the Agent to review the current instrumentation subsystem and write a detailed document about how it works and how it is architected. We store this document as CLAUDE.md or AGENT.md (or even README.md) in the most relevant part of the codebase, and point to it with a short description from the main CLAUDE.md file in the repository (see the sketch after this list). This keeps the main CLAUDE.md file, which is loaded with every session, small, while letting the Agent learn about the specific part of the codebase it is working on when there is a need.
  2. Depending on your Agent, you might want to clear or compact the context here to avoid reaching context limits mid-flow (see point 10 below).
  3. We then describe what needs to be done (remember, small bite-sized pieces) and reference the summary file in the request while the Agent is in planning mode.
  4. Go back and forth with the Agent a few times over the plan, but strike a balance between perfecting it and the risk of the Agent losing context as the conversation grows. You can always fix small things later.
  5. Once you’re reasonably happy with the plan, go ahead and execute.
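
As a rough illustration of step 1, the pointer in the main file can be as simple as this (the paths are made up for the example):

```markdown
# CLAUDE.md (repository root)

## Subsystem guides
- Instrumentation: read `lib/instrumentation/CLAUDE.md` before touching
  tracing, metrics, or logging code.
- Billing: read `app/services/billing/CLAUDE.md`.
```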

5. Observed Edits vs Auto Approvals

There is an ongoing debate about approving changes one at a time vs letting the Agent make all the changes and then doing code reviews at the end, the way you would do for a human developer. So far, we use both approaches depending on the situation.

If the changes are relatively small in scope and safe to make, we let the Agent loose with auto approvals; documentation is usually a good candidate for this. However, sometimes, without proper step-by-step guidance, there is a high chance of the Agent going off the rails. That’s when we want to approve and guide the Agent on each change. I don’t have a hard rule for this yet, but perhaps that will change.

6. Emphasize Learning

Agents are fast, but also forgetful. They have no memory except what you feed them through context via AGENT.md files and, more recently, Skills. All of these are the same thing: a distillation of learning that is fed back to the Agent with every relevant task.

We treat teaching Agents as a high priority because it improves the quality of their work and gradually makes our lives easier. This means creating the review summary files (through REPEX) and asking the Agent to update them with a summary of its learnings or specific points after each task.
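
For illustration, a learnings section in one of these files might look like this (the entries below are invented):

```markdown
## Learnings
- Spans must be closed within the same request cycle; the middleware does
  not flush them otherwise.
- Prefer extending the shared base instrument class over writing raw event
  emitters; bypassing it has caused bugs before.
```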

After a while, these files get big, and we either ask the Agent to condense them or do it manually.

7. Beware of Fly-By-Wire code

Fly-by-wire (FBW) systems enable the control of aerodynamically unstable aircraft by using computers to continuously make micro-adjustments to control surfaces, stabilizing the plane automatically. In other words, a pilot cannot keep a modern fighter jet airborne without the onboard computer.

The same can happen if you let Agents run amok on your code.

Agents have a tendency to use brute force for most tasks. If they can code their way out of a problem, they will, rather than doing actual design or problem solving. In this, they are not too dissimilar to junior developers, and that’s not always a fault. It can be an asset, for example, when it comes to writing tests or trying the same thing with several different approaches (see point 8). However, this tendency should be watched and controlled constantly, otherwise you will end up with fly-by-wire code: code that, like an unstable aircraft, only stays up because the Agent keeps making micro-adjustments to it.

8. Running Parallel Experiments

Agents are fast but can make mistakes. This means you can have them do the same task multiple times from different angles or with slight variations. We use git worktrees to spin up multiple coding environments and run experiments in parallel. Often you will find that one Agent does one part of the job better and falls short on another, even with the same context and prompt. You can use this to improve the overall outcome by feeding the findings or techniques of one Agent into another, finally making one the winner.
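
In practice this is just a couple of git commands; the paths and branch names below are illustrative:

```bash
# Create two sibling worktrees from the same commit, one branch per experiment.
git worktree add ../experiment-a -b agent/experiment-a
git worktree add ../experiment-b -b agent/experiment-b

# Run an Agent in each directory with the same prompt, compare the results,
# merge the winner, and clean up the rest:
git worktree remove ../experiment-b
```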

9. Using multiple Agents to avoid diversions

Sometimes, when an Agent is in the middle of a task, you realize the goal would be better served by a more fundamental refactoring of the code. In cases like this, we usually stop the Agent (keeping its context) and either do the refactoring manually or use another Agent to work on it. Once the refactoring is done, we tell the first Agent to continue its work, taking into account the changes made while it was paused. Using the original Agent to do the refactoring almost always bloats the context, distracts the Agent from working efficiently, and causes a lot of diversions.

10. Be in charge of compacting

With the limits on context window size, Agents need to compact (summarize) the context when they reach the limit. While Agents are getting better at this summarization, we have found that the quality of their work can drop after a compaction, particularly when the nature of the work before and after it is the same. For example, if your Agent has to compact the context between applying the same refactoring to two parts of the code, you might want to pay closer attention to its work after the summarization. This is why we usually try to do the compacting manually, at a natural point where the work has reached a milestone that can be summarized. Better still, update the CLAUDE.md or README.md file at that point to capture the learnings so far (see point 6 above).
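
In Claude Code, for example, you can trigger this yourself at a milestone rather than waiting for automatic compaction; at the time of writing, the /compact command accepts optional instructions about what to preserve (the wording below is just an example):

```
/compact Keep the decisions made about the instrumentation refactor and the list of files already migrated.
```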

Summary

We consider our experiments with coding Agents early-stage and a work in progress. It is quite possible that we have gotten some of these points wrong and will change or abandon some of the practices recommended here, whether because we gain more experience or because the Agents get better over time; most probably both. Either way, we are keen to hear what you think about these points and how you use Agents for your coding tasks.
