AI Agents in the Enterprise: Where Autonomous Workflows Make Sense and Where Not
"Agent" is the most over-promised word in enterprise AI. The right question is not "can it act autonomously" but "how big is the damage when it acts wrong".

"AI agent" is currently the most over-promised word in enterprise AI. What is sold is autonomy. What is needed in most cases is something far more modest — and that is exactly the good news.
The decisive question is not "can the system act autonomously" but "how big is the damage when it acts autonomously and wrong".
Assistant, automation, agent — three different things
An assistant suggests; a human acts. An automation runs a fixed, predictable flow. An agent decides for itself which steps to take, in which order, and with which tools, in pursuit of a goal.
Most enterprise problems need an assistant or an automation. Real agents are rarely the right answer — and almost never the right first step.
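The difference is easiest to see as control flow. A sketch with hypothetical names, not a real framework API: the question is who decides the next step, and whether there is a next step to decide at all.

```python
# Three control modes, hypothetical names throughout.

def assistant(suggest, human_approves, execute, task):
    """Assistant: the system suggests, a human decides and acts."""
    suggestion = suggest(task)
    return execute(suggestion) if human_approves(suggestion) else None

def automation(steps, value):
    """Automation: a fixed, predictable sequence; no runtime choices."""
    for step in steps:
        value = step(value)
    return value

def agent(choose_next, tools, goal, state, max_steps=10):
    """Agent: the system itself picks the next tool toward a goal.
    The hard step limit is a built-in stop path, not an afterthought."""
    for _ in range(max_steps):
        tool_name = choose_next(state, goal)
        if tool_name is None:  # goal reached or no sensible next step
            break
        state = tools[tool_name](state)
    return state
```

Only the last mode makes open-ended decisions at runtime, and that is exactly where the risk lives.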
The only question that counts: blast radius
Before talking about models, clarify: what happens in the worst case if the agent makes a wrong decision — and nobody notices immediately?
- Small, reversible, visible → an agent can make sense.
- Large, irreversible, invisible → no agent. A human belongs in the loop here, no matter how good the model is.
Autonomy is not a question of capability but a question of risk.
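As code, the rule is tiny. A sketch with hypothetical names (`Action`, `requires_human`): an action runs autonomously only when the failure case is small, reversible and visible; everything else goes to a human.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical worst-case profile of one agent action."""
    damage_is_small: bool  # cost of a wrong decision is bounded
    reversible: bool       # a wrong decision can be undone
    visible: bool          # a wrong decision is noticed quickly

def requires_human(action: Action) -> bool:
    """Autonomy only when small AND reversible AND visible."""
    return not (action.damage_is_small and action.reversible and action.visible)

# A draft email: cheap, deletable, reviewed before sending.
draft = Action(damage_is_small=True, reversible=True, visible=True)
# A payment: expensive, irreversible, possibly unnoticed for days.
payment = Action(damage_is_small=False, reversible=False, visible=False)
```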
Where agents make sense
1. Bounded, reversible tasks
Compiling research, preparing drafts, enriching data: if a wrong step is cheap to correct, autonomy delivers real value.
2. Well-instrumented flows
An agent without a log is a black box with a keyboard. It only becomes defensible when every step, every tool call and every decision is logged and traceable.
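What "logged and traceable" can look like, as a minimal sketch (the wrapper and the `search_crm` tool are hypothetical): every tool call passes through one place that records tool, arguments and result before the agent moves on.

```python
import json
import time

AUDIT_LOG: list[dict] = []  # in production: an append-only store, not a list

def logged(tool):
    """Wrap a tool so every call is recorded with its arguments and result."""
    def wrapper(*args, **kwargs):
        entry = {
            "ts": time.time(),
            "tool": tool.__name__,
            "args": json.dumps([args, kwargs], default=str),
        }
        result = tool(*args, **kwargs)
        entry["result"] = repr(result)
        AUDIT_LOG.append(entry)
        return result
    return wrapper

@logged
def search_crm(query: str) -> list[str]:
    """Hypothetical read-only tool."""
    return [f"record matching {query}"]

search_crm("Acme Corp")
```

The point is the single choke point: if a tool can be called without passing the wrapper, the log is decoration.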
3. Narrowly granted rights
An agent should only be able to do what it needs for this specific task. The OWASP Top 10 for LLM Applications lists "excessive agency" as a central risk: too many rights are the most common agent weakness.
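A minimal sketch of task-scoped rights (all names hypothetical): the agent never sees the full tool registry, only the slice granted for its task.

```python
# Per-task allowlists: the agent gets handles only to what this task needs.
TASK_SCOPES = {
    "research": {"web_search", "read_document"},
    "data_enrichment": {"read_crm", "write_crm_draft"},
}

def tools_for(task: str, registry: dict) -> dict:
    """Return only the tools granted for this task; the rest stay invisible."""
    allowed = TASK_SCOPES.get(task, set())
    return {name: fn for name, fn in registry.items() if name in allowed}

registry = {
    "web_search": lambda q: [],
    "read_document": lambda d: "",
    "delete_record": lambda i: None,  # exists, but is never granted here
}

research_tools = tools_for("research", registry)
# delete_record is not merely forbidden; the agent cannot see or call it.
```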
Where agents do not belong
Irreversible actions (payments, deletions, binding customer communication), poorly bounded goals, and untrusted external input as a control channel. Anyone who wires an agent directly to free-text input also automates its manipulability (see Understanding Prompt Injection).
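One defensive pattern, sketched with hypothetical names and a toy classifier: untrusted free text never selects tools directly. It can only be mapped onto a small fixed set of intents, and everything unrecognized escalates to a human. This narrows the prompt injection surface; it does not eliminate it.

```python
def classify(free_text: str) -> str:
    """Toy stand-in for an intent classifier; returns a label, never instructions."""
    text = free_text.lower()
    if "order" in text:
        return "lookup_order"
    if "how do i" in text:
        return "faq"
    return "unknown"

ALLOWED_INTENTS = {"lookup_order", "faq"}

def route(free_text: str) -> str:
    """Untrusted text can only pick from a fixed menu, never reach a tool directly."""
    intent = classify(free_text)
    return intent if intent in ALLOWED_INTENTS else "escalate_to_human"
```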
The Thoughtworks point: don't adopt reflexively
The Thoughtworks Technology Radar has warned for years against reflexively adopting new patterns just because they are available. Agentic workflows are a powerful tool for a few well-bounded cases, not a default building block for every problem. The NIST AI Risk Management Framework describes the same discipline: control and oversight first, autonomy only where it is defensible.
Checklist before the agent
- Do we really need an agent — or is an assistant/automation enough?
- Is the blast radius small and reversible on failure?
- Is every step logged and traceable?
- Does the agent have only minimal, task-scoped rights?
- Are irreversible actions protected by a human approval?
- Is untrusted external input kept out of the agent's control channel?
- Is there a clear stop and escalation path?
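The checklist above translates directly into a pre-deployment gate. A sketch with hypothetical names: one "no" blocks the agent.

```python
from dataclasses import dataclass

@dataclass
class AgentReview:
    """Hypothetical pre-deployment review mirroring the checklist."""
    simpler_option_ruled_out: bool
    blast_radius_small_and_reversible: bool
    every_step_logged: bool
    rights_task_scoped: bool
    irreversible_actions_need_approval: bool
    untrusted_input_not_control: bool
    stop_and_escalation_path: bool

def ready_to_deploy(review: AgentReview) -> bool:
    """All answers must be yes; a single 'no' blocks deployment."""
    return all(vars(review).values())
```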
Frequently asked questions
Are agents just hype? No, but they are oversold. For narrowly bounded, reversible, well-instrumented tasks they are strong. As a universal solution for everything they are an expensive promise.
Why not just give the agent all rights? Because rights equal damage on failure. Minimal rights are not a restriction but the actual security design.
What is the most common mistake? Starting with an autonomous agent for an important, irreversible process — instead of an assistant or automation for a reversible one.
Does an agent replace employees? Rarely, and rarely sensibly. The more valuable agent does reversible groundwork and leaves the decision to the human.
Conclusion
AI agents are not a status symbol but a tool for a few well-bounded cases. Whoever first clarifies the blast radius, minimizes rights, logs everything and protects irreversible actions uses autonomy where it pays off, and avoids it where it gets expensive.
Further reading
- AI in Customer Service: Faster Without Losing Control — autonomy behind the team, not in front of the customer.
- Understanding Prompt Injection: Why AI Needs Its Own Security Checks — why foreign input is not a control channel.
Next step
Considering deploying an AI agent? Start with a short assessment of your requirements. We clarify blast radius and rights, and whether an agent or an automation is the right lever.
Sources
- Thoughtworks, Technology Radar — Techniques — thoughtworks.com
- OWASP, Top 10 for Large Language Model Applications — owasp.org
- NIST, AI Risk Management Framework — nist.gov