
How We Maximize Productivity with Async Coding Agents

June 11, 2025
Cody Lund
6 min read

In late May 2025, Microsoft opened the beta for its GitHub Copilot Coding Agent: a cloud-based asynchronous coding agent that runs in an ephemeral VM. This agent not only writes code but can also install dependencies, compile code, execute tests, and access the internet.

We've been experimenting heavily with async coding agents (GitHub Copilot and beyond) since the beta opened, and they have already become a core part of our development process. We still have a lot to learn, but we've adopted a number of strategies that are giving us a clear competitive edge.

Invest in high-quality GitHub issues

While powerful, coding agents are bad at guessing what we want. We see better results and faster iterations from agents when we give more specific instructions.

This means describing not just what work needs to be done, but how it should be approached, which files are likely relevant, what dependencies might be affected, and what the expected outcome looks like.

Leverage templates and system-level instructions

We've taken the time to compile system-level rules and write GitHub issue templates to expedite this process and save us from repeating ourselves.
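As an illustration, a trimmed-down sketch of the kind of issue template we mean (the section names here are just examples, not a standard):

```markdown
## Goal
One sentence describing what "done" looks like.

## Suggested approach
How we'd tackle it, known gotchas, constraints to respect.

## Likely files and dependencies
Files the change probably touches; dependencies that might be affected.

## Expected outcome
Behavior to verify and tests that should pass before opening a PR.
```

Front-loading this context costs a few minutes per issue but routinely saves an agent iteration or two.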

Different agents look in different places for system-level instructions. We use multiple coding agent providers, so we have to keep close tabs on these configs to avoid drift.
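One lightweight way to catch drift is a script that compares the instruction files each provider reads. The paths below reflect each provider's convention at the time of writing, and the script itself is a minimal sketch of the idea, not a battle-tested tool:

```python
import hashlib
from pathlib import Path

# Locations where different agents look for system-level instructions.
INSTRUCTION_FILES = [
    "CLAUDE.md",                        # Claude Code
    "AGENTS.md",                        # OpenAI Codex
    ".github/copilot-instructions.md",  # GitHub Copilot
]

def drifted(repo_root: str = ".") -> list[str]:
    """Return instruction files whose contents differ from the first file found."""
    digests = {}
    for rel in INSTRUCTION_FILES:
        path = Path(repo_root) / rel
        if path.exists():
            digests[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    if len(set(digests.values())) <= 1:
        return []  # zero or one file present, or all present files agree
    baseline = next(iter(digests.values()))
    return [rel for rel, digest in digests.items() if digest != baseline]
```

Running something like this in CI turns "did we update all three copies?" from a memory exercise into a failing check.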

Limit scope creep

We pay close attention to scope creep and overlap across issues. Agents generally do better when scope is smaller. We also parallelize agents (discussed below), and having two or more agents working on the same area of the same code base is a recipe for merge conflicts.

Automating issue drafts

Writing quality issues can get pretty tedious, and we'd rather be coding. So, we're constantly evolving strategies for writing high-quality issues faster. Keep an eye on our blog for more discussions on this topic.

Configure a proper build environment for the agents

These cloud-based coding agents run in ephemeral VMs for a reason. We took the time to set up the necessary environments so that our agents can compile and execute our code.

This increases the quality of output per iteration because the agents can test their changes and respond to errors before signaling us to review.
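For GitHub Copilot, this environment is configured via a `copilot-setup-steps.yml` workflow that runs before the agent starts work. A minimal sketch along these lines (the Node setup steps are illustrative; swap in your own stack):

```yaml
# .github/workflows/copilot-setup-steps.yml
# Pre-installs the toolchain and dependencies the agent needs
# so it can build and test its own changes.
name: "Copilot Setup Steps"
on: workflow_dispatch
jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm ci
```

Other providers have their own environment-configuration mechanisms, so check each platform's docs for the equivalent.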

Agents always get the first stab

We've adopted an aggressive culture of always giving a coding agent the first go at every feature or bug fix, regardless of scope. There are multiple reasons for this:

  1. We think this is the way of the future, and we want to be on the cutting edge.
  2. By developing agent-first, we are forcing ourselves to refine and improve our agent processes as we go.
  3. Our coding agents are already surprisingly good, especially at scoped tasks. For bugs that might take us 30 minutes to investigate and fix, Copilot can one-shot a solution in the same time. And we save ourselves a context switch from whatever other problem we were focused on.

Parallelize (within reason)

We generally have anywhere from zero to five coding agents running asynchronously across our code bases. When we have a lot of feature requests and bugs, we scale up toward five. When there's little or no technical work to do, we scale down to zero.

We could technically run more than five agents in parallel, but the real bottleneck becomes our capacity to review and test agent output. Five agents iterating on issues simultaneously is enough to keep two humans busy all day.

To avoid excessive context switches, we try to assign issues and review agent outputs in bulk a few times per day. This works well, because an agent iteration can take anywhere from 15 minutes to an hour to execute.

Experiment with multiple providers

Many of the major players in AI (OpenAI, Anthropic, Microsoft) have async coding agents that run in the cloud and integrate directly with GitHub. We're experimenting with all of them simultaneously.

This has a few benefits:

  • We can figure out which agents are better at which tasks.
  • We can maximize parallelization (e.g. GitHub Copilot throttles requests, which can limit concurrency).
  • We don't dip into usage-based billing as fast, because we spread work across the base quotas of the various platforms.

Take advantage of off hours

Before signing off, we review any pending PR iterations and/or assign issues for the next batch of work. The goal is to get at least a few agents working on something overnight. By morning, we generally have several completed implementations waiting for review.

Think beyond the code

Git repositories can contain a lot more information than just source code. For example, this blog is written in Markdown and stored in a GitHub repository. We've already started using coding agents powered by Claude 4 Sonnet to write early drafts of new blog posts.

We also have coding agents writing and organizing prompts and building documentation. The same strategy could be applied to onboarding guides, marketing materials, etc. We're always looking for new ways to apply our agents.

Ready to explore AI-powered development workflows for your team? Schedule a consultation to discuss how coordinated coding agents can accelerate your development process.