In late May 2025, Microsoft opened the beta for its GitHub Copilot Coding Agent: a cloud-based asynchronous coding agent that runs in an ephemeral VM. The agent not only writes code but can also install dependencies, compile code, execute tests, and access the internet.
We've been experimenting heavily with async coding agents (GitHub Copilot and beyond) since that release. They have already become a core part of our development process. We still have a lot to learn, but we've adopted a number of strategies that are giving us a clear competitive edge.
While powerful, coding agents are bad at guessing what we want. We see better results and faster iterations from agents when we give more specific instructions.
This means describing not just what work needs to be done, but how it should be approached, which files are likely relevant, what dependencies might be affected, and what the expected outcome looks like.
We've taken the time to compile system-level rules and write GitHub issue templates to expedite this process and save us from repeating ourselves.
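As a rough illustration, an issue form along the lines below captures that detail up front. The template name, field labels, and file path are our own convention for this sketch, not anything the agents themselves require:

```yaml
# .github/ISSUE_TEMPLATE/agent-task.yml (illustrative path and name)
name: Agent task
description: A work item scoped for an async coding agent
labels: ["agent"]
body:
  - type: textarea
    attributes:
      label: What needs to change
      description: The user-visible behavior or bug, in one or two sentences.
    validations:
      required: true
  - type: textarea
    attributes:
      label: Suggested approach
      description: How we expect the change to be made, including any constraints.
  - type: textarea
    attributes:
      label: Likely files and dependencies
      description: Paths to start from and packages that may be affected.
  - type: textarea
    attributes:
      label: Expected outcome
      description: What done looks like, such as tests, output, or screenshots to check against.
    validations:
      required: true
```

Making the first and last fields required keeps us honest about scope and acceptance criteria before an issue is ever assigned.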
Different agents look in different places for system-level instructions. We use multiple coding agent providers, so we have to keep close tabs on these configs to avoid drift.
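For example, GitHub Copilot reads repository-wide instructions from `.github/copilot-instructions.md`, while Claude- and Codex-style agents look for files such as `CLAUDE.md` and `AGENTS.md`. Below is a minimal sketch of a drift check, assuming the shared instructions are meant to stay identical across providers; adjust the paths to match your own mix of agents:

```python
#!/usr/bin/env python3
"""Warn when per-provider agent instruction files drift apart."""
from pathlib import Path

# One instructions file per provider (illustrative paths; adjust as needed).
INSTRUCTION_FILES = [
    Path(".github/copilot-instructions.md"),  # GitHub Copilot coding agent
    Path("CLAUDE.md"),                        # Claude-based agents
    Path("AGENTS.md"),                        # Codex-style agents
]

def main() -> int:
    contents = {}
    for path in INSTRUCTION_FILES:
        if not path.exists():
            print(f"missing: {path}")
            continue
        contents[path] = path.read_text(encoding="utf-8").strip()

    # Flag any mismatch so the configs can be brought back in line.
    if len(set(contents.values())) > 1:
        print("instruction files have drifted:")
        for path, text in contents.items():
            print(f"  {path} ({len(text)} chars)")
        return 1
    print("instruction files are in sync")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
```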
We pay close attention to scope creep and overlap between issues. Agents generally do better when scope is smaller. We also leverage parallelization of agents (discussed below), and having two or more agents working on the same area of the same code base is a recipe for conflicts.
Writing quality issues can get pretty tedious, and we'd rather be coding. So, we're constantly evolving strategies for writing high-quality issues faster. Keep an eye on our blog for more discussions on this topic.
These cloud-based coding agents run in ephemeral VMs for a reason. We took the time to set up the necessary environments so our agents can compile and execute our code.
This increases the quality of output per iteration because the agents can test their changes and respond to errors before signaling us to review.
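For the GitHub Copilot coding agent specifically, the environment can be customized (at the time of writing) with a GitHub Actions workflow at `.github/workflows/copilot-setup-steps.yml` that runs before the agent starts working. The sketch below uses a Node.js toolchain purely as an example; substitute whatever your build and test steps need:

```yaml
# .github/workflows/copilot-setup-steps.yml
name: "Copilot Setup Steps"

# workflow_dispatch lets us run the setup manually from the Actions tab
# to verify it before an agent ever depends on it.
on: workflow_dispatch

jobs:
  # The job must be named copilot-setup-steps for the agent to use it.
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Check out the repository
        uses: actions/checkout@v4
      - name: Install the toolchain (Node.js as an example)
        uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies so the agent can build and run tests
        run: npm ci
```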
We've adopted an aggressive culture of always giving a coding agent a first go at every feature or bug fix, regardless of scope. There are multiple reasons for this:
We generally have anywhere between zero and five coding agents running asynchronously across our code bases. When we have a lot of feature requests and bugs, we scale up towards five. When there is little or no technical work to do, we scale down to zero.
We could technically run more than five agents in parallel, but the real bottleneck becomes our capacity to review and test agent output. Five agents iterating on issues simultaneously is enough to keep two humans busy all day.
To avoid excessive context switching, we try to assign issues and review agent outputs in bulk a few times per day. This works well because an agent iteration can take anywhere from 15 minutes to an hour to execute.
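Batching the hand-off can be as simple as a thin script around the GitHub CLI. The sketch below assigns a list of issue numbers in one pass; the assignee handle is a placeholder, and whether assignment alone kicks off a run depends on your provider:

```python
#!/usr/bin/env python3
"""Assign a batch of issues to a coding agent in one pass (rough sketch)."""
import subprocess
import sys

# Placeholder: replace with the login your agent provider expects.
AGENT_ASSIGNEE = "copilot"

def assign(issue_number: str) -> None:
    # Equivalent to: gh issue edit <number> --add-assignee <login>
    subprocess.run(
        ["gh", "issue", "edit", issue_number, "--add-assignee", AGENT_ASSIGNEE],
        check=True,
    )
    print(f"assigned #{issue_number} to {AGENT_ASSIGNEE}")

if __name__ == "__main__":
    for number in sys.argv[1:]:
        assign(number)
```

Running something like `python assign_batch.py 101 102 103` hands off three issues at once, so the next review pass has several results waiting rather than one.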
Many of the major players in AI (OpenAI, Anthropic, Microsoft) have async coding agents that run in the cloud and integrate directly with GitHub. We're experimenting with all of them simultaneously.
This has a few benefits:
Before signing off, we review any pending PR iterations and/or assign issues for the next batch of work. The goal is to get at least a few agents working on something overnight. By morning, we generally have several completed implementations waiting for review.
Git repositories can contain a lot more information than just source code. For example, this blog is written in Markdown and stored in a GitHub repository. We've already started using coding agents powered by Claude 4 Sonnet to write early drafts of new blog posts.
We also have coding agents writing and organizing prompts and building documentation. The same strategy could be applied to onboarding guides, marketing materials, etc. We're always looking for new ways to apply our agents.
Ready to explore AI-powered development workflows for your team? Schedule a consultation to discuss how coordinated coding agents can accelerate your development process.