Why the best Agentic Systems have Human In The Loop (HITL)
Most people think Human-in-the-Loop (HITL) signals the limitations of what AI agents can do, but they've got it backwards.
In practice, HITL expands the surface area of what you can deploy. Here's what I mean by "expand": use cases that are too risky to fully automate become deployable once a human checkpoint is in place.
Without HITL, they would most likely stay stuck in pilot. With an extra HITL layer, done properly, they ship.
I'm not saying this in a vacuum; we're seeing it ourselves, both internally and through customers:
We call this the 90/10 rule: 90% automated, 10% human-augmented.
The exact ratio varies; some systems start at 30/70 and scale from there. The point isn't the number. It's having an architecture that supports both, so you can dial the ratio based on what the use case needs.
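To make "dial the ratio" concrete, here's a minimal sketch (not CrewAI code; all names are hypothetical) of the underlying pattern: route each task to automation or a human queue based on a confidence threshold, so the automated/human split is a tunable parameter rather than a hard-coded design decision.

```python
def route(task_confidence: float, threshold: float = 0.9) -> str:
    """Return 'automated' when the agent is confident enough, else 'human'."""
    return "automated" if task_confidence >= threshold else "human"

# Hypothetical per-task confidence scores from an agent run.
tasks = [0.95, 0.40, 0.99, 0.88, 0.97]

# A strict threshold starts the system closer to 30/70...
conservative = [route(c, threshold=0.96) for c in tasks]

# ...and relaxing it as trust grows moves the same system toward 90/10.
relaxed = [route(c, threshold=0.50) for c in tasks]
```

The architecture doesn't change between those two settings; only the threshold does, which is what lets one system serve use cases with very different risk profiles.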
It's all about recognizing that certain decisions deserve human judgment, and building that into your architecture from the start.
In an earlier post I shared what we've learned about Agentic Systems from 2 billion agentic executions:
But I do think that HITL is an optional yet relevant third layer, one that adds human judgment and accountability.
A distinction worth making is that there are actually two modes here:
The first is about precision: certain steps need human judgment. The second is about confidence: someone is watching and can step in. At the end of the day, both expand what you can ship.
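The two modes can be sketched in a few lines of plain Python (hypothetical names, not a CrewAI API): an approval gate that blocks until a human decides, versus a supervised run where the agent proceeds and a human may override any step.

```python
from typing import Callable, List, Optional

def approval_gate(output: str, approve: Callable[[str], bool]) -> Optional[str]:
    """Mode 1 (precision): nothing ships until a human approves this step."""
    return output if approve(output) else None

def supervised_run(
    outputs: List[str], intervene: Callable[[str], Optional[str]]
) -> List[str]:
    """Mode 2 (confidence): the agent keeps going; a human may override any output."""
    return [intervene(o) or o for o in outputs]
```

The gate trades throughput for certainty at a specific step; supervision keeps throughput and trades it for the ability to catch problems after the fact. Both keep a human in the architecture.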
David Almeida, Chief Technology and Strategy Officer at AB InBev, spoke at our Signal conference about how they're thinking about agentic AI.

Fun fact: AB InBev sells one in every three beers globally (including many of my favorites), with millions of customers across all their platforms.
They are major adopters of CrewAI and currently have $30 billion in decisions influenced by AI annually. All of which is to say: this isn't a company experimenting; they're operating at scale.
One pattern David shared: their contact model handles 20 million tickets a year. Before agentic AI, all manual. Now 30% are fully autonomous. The other 70%? Human-augmented, agents working alongside employees, routing requests, pulling information, drafting responses for human review.
Here's what he said that stuck with me:
"AI is not gonna live on its own. AI is gonna live within our technical platforms to create value."
That's the architecture pattern we keep seeing. Agents and humans together, each doing what they do best. They're targeting $28M in value from this approach in this one use case alone.
This is what production-grade agentic systems look like at Fortune 500 scale.
On the open source side, CrewAI Flows now support HITL natively with the @human_feedback decorator. One line to add a checkpoint.
```python
@human_feedback(
    message="Review this before sending:",
    emit=["approved", "rejected", "needs_revision"]
)
def review_content(self, content):
    # your logic here
    return content
```
With this one decorator, the flow pauses, presents the output for review, collects feedback, and routes to different paths based on the response. Full state persistence across async human interactions. Audit history built in.
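As a simplified illustration of what such a checkpoint does (this is not CrewAI's internals; the function names here are made up), the pause-collect-route cycle looks like this:

```python
def checkpoint(content, get_feedback, outcomes=("approved", "rejected", "needs_revision")):
    """Present `content` for review and return the (outcome, content) pair."""
    outcome = get_feedback(content)  # blocks until a human responds
    if outcome not in outcomes:
        raise ValueError(f"unexpected outcome: {outcome}")
    return outcome, content

def run(content, get_feedback):
    """Route to a different branch depending on which outcome was emitted."""
    outcome, content = checkpoint(content, get_feedback)
    if outcome == "approved":
        return f"sent: {content}"
    if outcome == "needs_revision":
        return f"revising: {content}"
    return "discarded"
```

The real feature also persists flow state across the pause and records the decision for audit, which a sketch like this leaves out.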
On the enterprise side, with AMP, we're adding the infrastructure that makes this production-ready: you deploy that same code, and this is what our customers get:

The open source decorator gives you the checkpoint. AMP gives you the control plane to run it at scale.
The timing of HITL's growing adoption and visibility isn't accidental.
The EU AI Act is now being actively enforced. The FDA requires human oversight for high-risk AI. SOC 2 audits are asking about AI decision trails. The regulatory world caught up faster than most teams expected.
But compliance is just one reason; at the end of the day, it all comes back to business outcomes.
Enterprises have figured out that fully autonomous agents are great, but a third, human layer unlocks a broader set of use cases. David from AB InBev said it clearly: they want to lead in agentic AI, not by removing humans, but by building systems where agents and humans work together.
The teams shipping to prod aren't the ones removing humans from the loop; they're the ones actually spending time designing the loop itself.
There are two ways to think about human involvement in AI:
Some see it as a limitation, like something to minimize.
Others see it as architecture, something to design in.
I like the latter better!
The feature is live now. Docs are here.
Try it and let us know what you build.