
There’s a rhetorical purity test floating around AI discussions: “If there’s still a human in the loop, is it even an agent?” The implied answer is usually a smug no, as if the presence of oversight nullifies agency entirely.
But taken seriously, that logic would also disqualify most forms of human labor. By that standard, anyone under managerial oversight isn’t really working. Anyone collaborating, asking for feedback, or operating within a process isn’t truly doing their job.
Which is obviously nonsense.
Oversight doesn’t erase agency. It enables it to scale without collapsing into chaos.
Bad Management Doesn’t Prove We Don’t Need Management
There’s a reason people bristle at the idea of “management.” Sometimes oversight isn’t helpful; it’s obstructive. Anyone who’s ever sat through a status meeting that somehow made them less informed knows that “structure” alone doesn’t guarantee anything (except maybe more meetings).
Management can hinder progress just as easily as it can support it. The same is true in AI systems.
A poorly implemented human-in-the-loop (HITL) process can grind decision-making to a halt, inject inconsistency, or neutralize the very capabilities that made the system useful in the first place.
That’s not an argument against oversight. It’s a case for designing it well!
Digital Labor: A Grounded Framing
I’m not particularly attached to the phrase “digital labor.” To me, it has a slightly sterile, condescending, classist vibe. In this case, though, the framing makes for a useful analogy.
Instead of asking whether an AI agent is autonomous in some metaphysical sense, it’s more useful to think of it as a form of digital labor. To be very clear: AI is NOT a conscious worker, and it doesn’t come with feelings. It is just a participant in a task pipeline, executing discrete (often valuable) units of work.
When looked at this way, the parallels to human work become clearer.
Most jobs require a balance between independence and accountability. We rarely give people total autonomy with no oversight. We give them autonomy within a structure: deadlines, policies, escalation paths. That scaffolding isn’t a failure mode; it’s how complex work gets done without unraveling into chaos.
AI agents aren’t exempt from needing that scaffolding. Framing them as digital labor reminds us that their value isn’t in being left alone; it’s in being integrated thoughtfully. They need integration, not isolation.
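To make that scaffolding concrete, here’s a minimal Python sketch. Every name in it (run_agent, violates_policy, escalate_to_human) is hypothetical, and the policy rule is a placeholder; the point is the shape: the agent executes freely inside a deadline, a policy check, and an escalation path.

```python
from dataclasses import dataclass

def run_agent(task: str, timeout: float) -> str:
    """Stub for an LLM-backed agent call; swap in a real one here."""
    return f"drafted response for: {task}"

@dataclass
class TaskResult:
    output: str
    escalated: bool = False

def violates_policy(output: str) -> bool:
    """Placeholder policy check, e.g. banned actions or missing approvals."""
    return "delete" in output.lower()

def escalate_to_human(task: str, reason: str) -> TaskResult:
    """The escalation path: hand the task to a person, with context attached."""
    print(f"escalating {task!r}: {reason}")
    return TaskResult(output="", escalated=True)

def run_agent_task(task: str, timeout_s: float = 30.0) -> TaskResult:
    """Autonomy within a structure: a deadline, a policy, an escalation path."""
    try:
        output = run_agent(task, timeout=timeout_s)
    except TimeoutError:
        return escalate_to_human(task, "missed its deadline")
    if violates_policy(output):
        return escalate_to_human(task, "output violated policy")
    return TaskResult(output=output)

print(run_agent_task("summarize the incident report").output)
```

The agent’s autonomy lives entirely inside run_agent; everything wrapped around it is the structure.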
Oversight Isn’t Binary. It’s Architectural.
The quality of HITL depends entirely on its design. Are humans stepping in as curators of edge cases? As quality-checkers? As ethical filters? Are they just clicking “approve” to maintain plausible deniability?
The question that matters in agentic systems isn’t “is there a human in the loop?” It’s “what is the human’s job in that loop?” And maybe more specifically: does having the human there make the system better or worse?
That’s where bad design breaks things. Performative oversight is worse than none. A rubber stamp isn’t safety; it’s theater.
Effective oversight, whether over humans or machines, is specific. It’s contextual. And it doesn’t look the same for every task.
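Here’s a rough sketch of how differently that can be wired. Everything in it is hypothetical (the role names, the thresholds, the needs_human function); what it shows is that the same “human in the loop” yields three very different systems depending on the routing rule.

```python
import random

def needs_human(item: dict, role: str) -> bool:
    """Route an agent output to a person based on what the human's job is."""
    if role == "edge_case_curator":
        # The human sees only what the system is unsure about.
        return item["confidence"] < 0.7
    if role == "quality_checker":
        # The human audits a small random sample to measure quality, not gate it.
        return random.random() < 0.05
    if role == "rubber_stamp":
        # Anti-pattern: every item gets a click, so none gets real attention.
        return True
    raise ValueError(f"unknown role: {role}")

items = [{"id": 1, "confidence": 0.93}, {"id": 2, "confidence": 0.41}]
for item in items:
    if needs_human(item, role="edge_case_curator"):
        print(f"route item {item['id']} to human review")
```

Notice that the rubber_stamp branch involves the most human clicks and the least actual oversight. That’s the architectural point.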
Stop Pretending “Autonomy” Means “No Humans Allowed”
Too many AI narratives treat human involvement as a failure condition. But removing oversight isn’t a sign of progress; it’s a risk multiplier.
A human in the loop doesn’t mean your system is broken. It means you’ve acknowledged reality, recognized that judgment, context, and edge cases exist, and accepted that even smart systems need constraints.
That’s not anti-autonomy. It’s the infrastructure that makes autonomy viable and allows it to produce reliable outcomes.
The Problem Isn’t Agency, It’s Architecture
Human-in-the-loop isn’t an admission of failure. It’s a recognition that effective agency, like effective labor, requires context and structure.
You can grant your agent all the autonomy you want, but if your oversight is incoherent, you’re not building intelligence; you’re building liability.
And if your definition of agency is “no humans allowed,” you’re not describing a system.
You’re describing a fantasy.