If you have scrolled through tech Twitter lately, then you know the next AI “innovation” is orchestration tools: “Manage all your agents. Never open your editor again.” Now you’re endlessly scrolling through your feed looking at the same orchestration app with an “opinionated and innovative” design: a sidebar, a prompt, worktrees, cloud agents, and a commit button. These apps are a step in the right direction by moving us away from the terminal, but they don’t answer a fundamental question: what is the role of AI in software development?

Early tools like Cursor, Windsurf, Codex CLI, Claude Code, and others showed us the potential of AI. However, we never defined a clear role and goal for it. Instead, we got excited about the speed at which we could deliver, mistakenly handed the wheel to AI, and doubled down on the agent model.

I’m no stranger to this hype trap. I got addicted to the speed of delivery, even as it exposed gaps in my skills. Instead of asking, “What should AI do in my codebase?”, I overfocused on delegating, orchestrating, communicating, and making design decisions. I became dissatisfied with my work and started churning out more projects just to chase the high of programming.

Eventually, I fell deep into orchestration tools myself, driven by the friction of my environment and my dissatisfaction. Orchestration tools were a solution for parallel work and a way to manage my agents, repos, and code diffs. However, they became a poison to the quality of my code. I traded quality software for my own dopamine, and over time I stopped reviewing code as I began trusting the output of AI. I saw a working product and told myself “lgtm.” Although this was never a new issue in software engineering, AI has amplified its effect, and we have only ourselves to blame.

Now going back seems unreasonable, and it looks like AI will take on the role of implementer in software development. But that’s exactly where things start to break down, because implementation was never the hard part. The real work has always been in understanding the problem, defining constraints, and making tradeoffs that can evolve over time. Currently, AI doesn’t understand your previous intent. Nor does it carry context, history, or long-term ownership. It produces output that looks correct, and increasingly, that’s enough to trick us into accepting it. The danger isn’t that AI writes bad code; it’s that it writes plausible code that we stop questioning.

So the role of AI shouldn’t be “autonomous implementer.” That framing leads us software engineers into passivity, where we become reviewers at best and spectators at worst. Instead, AI should act as a multiplier for thinking, not a replacement for decisions, ownership, or judgment.

The moment we stop engaging with our code is the moment we lose the ability to judge it. So orchestration tools aren’t the problem; they’re a symptom of our drive to scale output. What they don’t solve is the new core issue: what does good collaboration with AI actually look like? Until we answer that, we’ll keep optimizing for speed, mistaking it for progress, and shipping systems we no longer fully comprehend.

And that’s a much bigger problem than slow development ever was.