Bounded Agency

Concept

A phenomenon to recognize and reason about.

Bounded agency is the authority an actor holds to act on behalf of an organization, deliberately constrained by rules and guardrails so that delegation remains governable.

Understand This First

  • Ownership – ownership answers who is responsible; bounded agency answers what that responsible party is allowed to decide on its own.
  • Team Cognitive Load – bounded agency sets the scope of what a team or agent is expected to reason about and act on.
  • Bounded Autonomy – bounded autonomy is the action-level dial on a single agent; bounded agency is the organizational envelope that contains it.

What It Is

Every organization runs on delegation. A manager decides what the team can spend without approval. A senior engineer decides which architectural calls need a review and which they can make alone. A payments team decides which refunds they can issue and which need finance’s sign-off. Each of these is a small act of bounded agency: authority to act, bounded by an explicit envelope of what’s in scope and what isn’t.

Matthew Skelton and Manuel Pais named the concept directly in their 2026 keynote “Team Topologies as the Infrastructure for Agency with AI.” The framing has two parts. First, agency is the ability to act on behalf of the organization, whether the actor is a person, a team, or an AI agent. Second, agency is useful only when it’s bounded. Unbounded agency is not freedom. It’s chaos. The organization can’t predict what the actor will do, can’t evaluate whether it was the right call, and can’t recover when it wasn’t.

A bounded-agency envelope has four parts: a domain (what the actor is responsible for), a decision set (what calls it can make alone), an approval set (what calls require someone else’s sign-off), and a tripwire set (what calls should never happen at all without explicit reauthorization). Organizations that run well make these four parts legible. Organizations that don’t let them drift into tacit understanding, which works until someone new shows up or the stakes change.
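The four parts can be made concrete as data. This is a minimal sketch, not a standard schema: the class name, the action strings, and the `classify` labels are all illustrative choices, assuming actions can be named as simple strings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgencyEnvelope:
    """One possible written form of an agency envelope (hypothetical shape)."""
    domain: str                   # what the actor is responsible for
    decision_set: frozenset       # calls it can make alone
    approval_set: frozenset       # calls requiring someone else's sign-off
    tripwire_set: frozenset       # calls that must never happen unprompted

    def classify(self, action: str) -> str:
        """Return how the envelope treats a proposed action."""
        if action in self.tripwire_set:
            return "forbidden"        # needs explicit reauthorization
        if action in self.approval_set:
            return "needs-approval"
        if action in self.decision_set:
            return "allowed"
        return "out-of-domain"        # not in the envelope at all: escalate

# An illustrative coding-agent envelope.
agent = AgencyEnvelope(
    domain="payments-service repository",
    decision_set=frozenset({"read-code", "run-tests", "open-pr"}),
    approval_set=frozenset({"merge-to-main", "deploy-staging"}),
    tripwire_set=frozenset({"modify-ci-config", "read-secrets"}),
)
```

The point of writing it down this way is that the fourth outcome, `out-of-domain`, exists at all: a legible envelope distinguishes "not allowed" from "never considered."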

Why It Matters

The concept has always mattered for humans. What’s new is that AI agents are now first-class actors on behalf of organizations, and most organizations haven’t drawn the envelope for them.

Skelton’s keynote cites a Gartner finding that 80% of firms report no tangible benefit from AI adoption. His diagnosis: the firms lack the organizational maturity to govern delegated agency. Specifically, they grant AI agents broad access to data and systems that they would never grant to an equivalently new human. An agent with write access to every data store across the company is not a capable tool. It’s an incident waiting to be reported.

This failure mode has a name in security literature. The OWASP Top 10 for LLM Applications formalizes it as Excessive Agency (LLM06:2025): the vulnerability that lets an LLM take damaging actions in response to unexpected, ambiguous, or manipulated outputs, precisely because it had the authority to do so. The fix OWASP recommends is structural: limit extensions, prefer granular functions over open-ended ones, and require independent verification for high-impact actions. That’s bounded agency restated as a security principle.
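The structural fix can be sketched in a few lines. Everything here is hypothetical (the function names, the 5,000-cent threshold, the approval hook); what it illustrates is the shape OWASP recommends: a narrow, granular tool instead of an open-ended one, with independent verification wired into the high-impact path.

```python
def require_human_approval(action: str, *args) -> str:
    """Independent verification: the agent can propose, a human disposes.
    A real system would enqueue a review; this stub just records intent."""
    return f"pending-approval: {action}{args}"

def issue_refund(order_id: str, amount_cents: int) -> str:
    """Granular tool: one narrow capability with bounded arguments,
    rather than an open-ended run_sql(query) or shell(cmd)."""
    if amount_cents > 5_000:  # assumed policy threshold for "high-impact"
        return require_human_approval("issue_refund", order_id, amount_cents)
    return f"refunded {amount_cents} on {order_id}"
```

An open-ended tool cannot be bounded this way: if the agent holds `shell(cmd)`, the envelope is whatever the shell can do.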

For teams building with agents, bounded agency also shapes what work can safely be delegated at all. An agent without a clear scope produces inconsistent work and touches things it shouldn’t. An agent with a clear scope, a known decision set, and explicit tripwires acts more like a new team member than a loose cannon. The envelope is what makes delegation reliable.

How to Recognize It

Bounded agency is easier to spot when it’s missing. Three patterns show up repeatedly.

The first is the agent-with-root configuration. A team gives an AI coding agent direct access to the production database, shell, cloud console, or source repository without narrowing what it can touch. The agent works well enough for ordinary tasks. Then a prompt injection, a misinterpreted instruction, or a confidently wrong inference leads it to do something the team would never have sanctioned if asked. The team didn’t grant that action explicitly. They granted the space that contained it.

The second is the tacit envelope. Everyone on the team “knows” the rules of what they can decide alone, but the rules are never written down. A new hire spends months discovering which calls need approval and which don’t. A temporary contractor never learns, and either asks permission for everything (slow) or guesses wrong (risky). An AI agent, which by nature lands as a new hire every session, cannot absorb tacit rules at all. If the envelope isn’t in an instruction file, the agent doesn’t have one.

The third is the uniform-trust mistake. An organization treats all actors at the same level of trust, regardless of the consequence of their actions. The same engineer can approve a CSS change and a production deploy with no structural difference. The same agent can read documentation and rewrite the deployment config with no structural difference. When every action lives in the same trust envelope, the envelope has to be sized for the most dangerous action, which means every action pays that cost. Or, more often, the envelope is sized for the most common action, which means the dangerous ones sneak through.

The positive signal is equally recognizable. In an organization with well-drawn agency envelopes, a new person or a new agent can be productive within a day because someone can hand them a written scope. Incident reviews rarely produce surprise at “I didn’t know they could do that.” High-impact actions consistently trigger a second pair of eyes, not because of bureaucracy but because the envelope says so and the tooling enforces it.

How It Plays Out

A bank deploys AI coding agents across its engineering organization. The CTO’s first instinct is to give each agent the same permissions a senior engineer has. Legal pushes back. They draft an agency charter for agents: an agent can read any code in the repositories it’s assigned to, run any test, and open a pull request. It can’t merge to main, deploy to any environment, modify CI configuration, or touch the secrets manager. Those actions are reserved for a human with an agent-attributed approval.

The charter is boring. It’s also the single document that makes agent deployment safe enough for legal to sign off on. When a prompt injection later causes one of the agents to propose a change that would have exfiltrated credentials, the charter catches it: the agent can propose, but it can’t merge, and the human reviewer sees the anomaly. The bank uses the same charter template for third-party contractors and for new hires in their first 90 days, which is how Skelton’s “organizations already structured for bounded agency in humans” find the transition to agents easy.

A platform team at a logistics company builds an internal agent that answers questions about the codebase. Early on, they give it read-only access to the repository and a search tool. The agent is useful, and pressure builds to give it more power: “let it run the tests,” “let it open pull requests,” “let it fix simple bugs.” Each step is reasonable. The team grants each one without revisiting the envelope as a whole. Six months later the agent has broad access to repositories, test runners, PR creation, and a Slack integration that can ping on-call. Nobody planned this shape. It emerged from a succession of small decisions. A retrospective forces the team to write down the agent’s current agency envelope, compare it to the one they would design from scratch today, and trim it back to what the actual use case requires.

A small engineering team tries to operate without explicit bounds, running on trust. Every engineer can ship anything. Every agent the engineers configure can do anything the engineer can do. For eighteen months this works because the team is small and the stakes are contained. Then they sign an enterprise customer with a security questionnaire that asks, in writing, what each role can and can’t do. The team discovers they can’t answer the question, because they’ve never drawn the envelope. The answers they write down for the questionnaire become the first version of their agency charter, and half the team realizes they’ve been making calls they shouldn’t have had the authority to make.

Tip

Write the agency envelope down before you deploy an agent, not after an incident. The envelope doesn’t have to be elaborate: a short list of what the agent can do alone, what requires human review, and what it must never do regardless of prompt. Store it in the same instruction file the agent reads at startup so the bounds are always in scope.

Consequences

Bounded agency costs up-front design work. Someone has to sit down, think through what an actor actually needs to do, and write the envelope. For humans, the envelope also needs to be taught and occasionally enforced. For agents, it needs to be technically enforced through tool access, approval policies, and tripwires, because agents will not respect an envelope that lives only in a wiki page.

The payoff is that delegation scales. An organization that has written down its agency envelopes can onboard new people quickly, introduce new agents without exhaustive security review each time, and respond to incidents with clear accountability rather than finger-pointing. Skelton’s observation is that this capacity is cultural before it’s technical: companies that already bound human agency well have the organizational muscle to bound agent agency. Companies that haven’t bounded human agency will not invent the discipline when the first AI agent arrives.

There’s a failure mode in the other direction. Envelopes that are too tight strangle work. A team with an approval gate on every change ships nothing. An agent that has to escalate every action produces a queue of interruptions rather than useful output. The envelope needs to be sized to the consequence of the action. Low-stakes, reversible actions belong inside the decision set. High-stakes, irreversible actions belong in the approval set or the tripwire set. Getting this calibration right is ongoing work, not a one-time design.
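One possible calibration rule, stated as code rather than as a standard: size oversight to consequence, using reversibility and blast radius as the two coarse inputs. Both labels are hypothetical simplifications; real calibration has more dimensions.

```python
def place_in_envelope(reversible: bool, blast_radius: str) -> str:
    """Assign an action to a set in the envelope by its consequence.
    blast_radius is an assumed coarse label: "low" or "high"."""
    if not reversible and blast_radius == "high":
        return "tripwire_set"    # never without explicit reauthorization
    if not reversible or blast_radius == "high":
        return "approval_set"    # second pair of eyes before acting
    return "decision_set"        # low-stakes and reversible: act alone
```

A rule this simple is enough to prevent both failure modes: it keeps approval gates off reversible low-stakes work, and it keeps irreversible high-stakes work from sneaking into the decision set.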

Finally, bounded agency creates legibility. When the envelope is explicit, the organization can reason about what happens when an actor misbehaves: an injected prompt, a bribed employee, a confused agent, a compromised credential. The envelope says what damage is possible and what isn’t. Unbounded agency offers no such analysis. Anything is possible, so nothing is predictable.

  • Depends on: Ownership – bounded agency presumes someone owns the outcome; the envelope defines what that owner can delegate without losing accountability.
  • Depends on: Team Cognitive Load – the envelope must fit within the actor’s capacity to reason about its scope; an oversized envelope creates incoherent decisions.
  • Depends on: Boundary – agency envelopes sit on top of system and team boundaries; unclear boundaries produce unclear authority.
  • Enables: Stream-Aligned Team – a stream-aligned team can deliver end-to-end only when its agency envelope covers the full path from idea to production for its stream.
  • Refined by: Bounded Autonomy – bounded autonomy applies the same logic one level down, calibrating oversight per action within a single agent’s envelope.
  • Refined by: Approval Policy – approval policy is the mechanical enforcement of the approval set in an agency envelope.
  • Contrasts with: Least Privilege – least privilege is the security-level sibling of bounded agency, expressed in permissions rather than authority; they prevent the same failure from two different angles.
  • Contrasts with: Trust Boundary – trust boundaries mark where one level of trust meets another; agency envelopes sit inside those boundaries and define what an actor at a given trust level can do.
  • Informed by: Human in the Loop – human-in-the-loop is one of the mechanisms by which the approval set in an envelope gets enforced.
  • Informed by: Instruction File – an instruction file is where the agency envelope lives in version-controlled form so the agent reads it at startup.

Sources

  • Matthew Skelton and Manuel Pais developed the bounded-agency framing for AI in their 2026 keynote “Team Topologies as the Infrastructure for Agency with AI,” delivered at QCon London and elsewhere. Their argument that agency is the ability to act on behalf of the organization, useful only when bounded, is the direct source for this article’s framing.
  • The OWASP Gen AI Security Project’s “LLM06:2025 Excessive Agency” entry in the OWASP Top 10 for LLM Applications is the canonical security-literature statement of the failure mode that bounded agency prevents. The entry’s three categories (excessive functionality, excessive permissions, and excessive autonomy) map onto the decision set, approval set, and tripwire set described above.
  • Skelton and Pais’s Team Topologies: Organizing Business and Technology Teams for Fast Flow (IT Revolution, 2019) established the cognitive-load and bounded-context framing that underpins the agency discussion. The 2026 keynote extends the framework to AI but doesn’t replace it.
  • The InfoQ coverage “QCon London 2026: Team Topologies as the Infrastructure for Agency with AI” summarizes Skelton’s argument that 80% of firms see no tangible benefit from AI adoption because they lack the organizational maturity to govern delegated agency.
  • The underlying concept of delegated authority bounded by rules is old. It appears in organizational theory (Chester Barnard’s zone of indifference, 1938), in political philosophy (the limits of legitimate authority), and in software security (capability-based systems from the 1960s onward). The 2026 contribution is adapting that long lineage to a world in which AI agents are the actors being delegated to.