The way we used to code was manual and iterative. You'd get a JIRA issue or Linear ticket, read through it, and then jump straight into the codebase. This approach worked because you were there, building context as you went, figuring things out step by step.
Writing good code requires modeling both the feature and the relevant parts of the codebase in your head, then making changes that move the system toward your desired state. There are fundamentally two ways to approach this: you can do all the reading upfront and document everything that needs to be done before making changes, or you can jump in and iteratively figure it out as you go.
For humans working alone, the iterative approach usually makes more sense. You're present, you're building understanding, and you can course-correct in real time. But AI coding agents flip this dynamic completely.
AI agents are significantly worse at iterative coding and figuring things out on the fly. They excel when given comprehensive upfront context. The more complete information you provide at the beginning, the better they perform. This makes writing high-quality instructions upfront a source of huge leverage in AI-assisted development.
The new workflow looks like this: write a really detailed issue, assign it to your coding agent, let the agent write the code, review and iterate, then merge. That first step—creating comprehensive upfront context—is now your point of maximum leverage. If you can do that well, everything downstream works better.
The problem? Writing great issues is tedious and takes forethought. Most developers hate it, and honestly, it's hard to get right.
Here's a technique I've been experimenting with that's been surprisingly effective. Instead of opening GitHub and typing away at a new issue, I record a Loom video. I walk through the UI of my application, explain what I want to build, and then dive into the relevant parts of the codebase.
Recently, I wanted to add a sidebar to my local MCP client to show chat history. Rather than writing a traditional issue, I recorded a 2-minute Loom video. I showed the current UI, explained where the sidebar should go, and highlighted the specific files and components that would need to be modified.
Then I fed that video to a tool I built called "loom to issue." It downloads the Loom recording, passes it to Gemini along with instructions on how to write a GitHub issue, and connects to GitHub to pull in the repository's full list of file paths. The output was a remarkably detailed issue with a clear title, description, end goal, current state, and, most importantly, the specific files to look at.
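The core of the tool is just a couple of API calls. Here's a rough sketch of the Gemini half, assuming the Loom recording has already been downloaded locally; the model name, prompt wording, and function names are placeholders rather than my exact code:

```python
# Sketch: turn a downloaded Loom recording into GitHub issue text with Gemini.
# Assumes GEMINI_API_KEY is set and the video is already saved locally.
import os
import time

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

ISSUE_PROMPT = """You are writing a GitHub issue from a developer's video walkthrough.
Produce: a clear title, a description, the end goal, the current state,
and the specific files to look at (use the repo file list provided)."""


def video_to_issue(video_path: str, repo_file_list: str) -> str:
    # Upload the recording and wait for Gemini to finish processing it.
    video = genai.upload_file(path=video_path)
    while video.state.name == "PROCESSING":
        time.sleep(5)
        video = genai.get_file(video.name)

    model = genai.GenerativeModel("gemini-1.5-pro")
    response = model.generate_content(
        [video, ISSUE_PROMPT + "\n\nRepository files:\n" + repo_file_list]
    )
    return response.text
```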
When I assigned this to GitHub Copilot, it knew immediately where to go. No searching, no iterations to find the right files. It created a pull request in about four minutes with new components, proper layout refactoring, and a thoughtful description.
There's something fundamentally different about the context you can provide in a video versus what you'd typically write in an issue. When you're walking through code on screen, you naturally call out the important pieces, explain the relationships between components, and provide the kind of spatial context that's really hard to capture in text.
The AI model processing the video can see exactly what you're looking at, understand the visual hierarchy of your application, and map that to the code structure. It's not just parsing your words—it's seeing your interface, understanding your intent, and connecting the dots between what exists and what you want to build.
I also gave Gemini the full list of files in the repo. That way it can include specific instructions about which files to edit, which saves the agent from looping through searches to find the right files to modify or pull context from.
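The GitHub side is equally small: pull the file tree once, hand it to Gemini as part of the prompt, and file the result. Again, a simplified sketch (PyGithub, placeholder repo name; in practice you'd ask Gemini to return the title and body as separate fields rather than one blob of text):

```python
# Sketch: fetch the repo's file paths and file the generated issue.
# Assumes a GITHUB_TOKEN with repo scope; "owner/repo" is a placeholder.
import os

from github import Github  # PyGithub


def get_repo_file_list(full_name: str) -> str:
    gh = Github(os.environ["GITHUB_TOKEN"])
    repo = gh.get_repo(full_name)
    head = repo.get_branch(repo.default_branch).commit.sha
    tree = repo.get_git_tree(head, recursive=True)
    # Keep only blobs (files), one path per line, so Gemini can cite exact paths.
    return "\n".join(item.path for item in tree.tree if item.type == "blob")


def file_issue(full_name: str, title: str, body: str) -> str:
    gh = Github(os.environ["GITHUB_TOKEN"])
    issue = gh.get_repo(full_name).create_issue(title=title, body=body)
    return issue.html_url
```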
While writing an issue by hand might force you to articulate your thoughts more clearly, recording a video lets you get more content out of your head much faster, without all the manual, fiddly parts of typing. You can naturally demonstrate the current state and explain the desired changes without getting bogged down in documentation overhead.
I got about 80% of the way to a complete feature with this approach. The AI agent created all the right components, implemented the core functionality, and handled the API integration correctly. I just needed to clean up some UI details and provide a bit more feedback to get across the finish line.
But the real win isn't the time saved—it's the elimination of back-and-forth. The agent didn't have to search around the codebase or make multiple attempts to understand the requirements. It had everything it needed upfront.
This represents a broader shift in how we work with coding agents. Instead of treating them like junior developers who need constant guidance, we can frontload the context and let them work more autonomously. The investment in creating better issues pays dividends in execution quality.
This isn't a magic solution. Video processing and LLM inference add cost and latency to the workflow, though so far the trade-off has been worth it. And the quality of the generated issue is only as good as your video walkthrough: if you skip important context or make assumptions, the agent will too.
But for the right type of work, this feels like a glimpse into how future development workflows might evolve. As AI capabilities improve and video processing becomes more sophisticated, the gap between human intuition and machine understanding will continue to shrink.
Recording a five-minute video to get high-quality code generation is a much more effective trade-off than spending an hour writing out detailed tasks.