Footgun

A feature, tool, default, or construct that is easy to use wrong and hard to use right: a design that makes self-inflicted damage the path of least resistance.

Concept

A foundational idea to recognize and understand.

Understand This First

What It Is

A footgun is a feature, API, default, command-line flag, or language construct whose correct use is less obvious or less ergonomic than its dangerous use. The term places blame on the design, not the user. Classic examples: C’s strcpy (no bounds check, buffer overflow by default), JavaScript’s == (type coercion surprises), Python’s mutable default arguments (def f(x=[])), Git’s push --force (no safety net), and the old shell hazard rm -rf "$FOO/"* when $FOO is unset (it expands to rm -rf /*).
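The mutable-default footgun is easy to demonstrate in a few lines of Python, and it exhibits the design property exactly: the dangerous spelling looks more natural than the safe one.

```python
# The footgun: Python evaluates the default list once, at function
# definition time, so every call without an argument shares ONE list.
def append_bad(item, bucket=[]):
    bucket.append(item)
    return bucket

# The safe spelling is longer and less obvious -- the hallmark of a footgun.
def append_good(item, bucket=None):
    if bucket is None:
        bucket = []  # fresh list per call
    bucket.append(item)
    return bucket

print(append_bad("a"))   # ['a']
print(append_bad("b"))   # ['a', 'b']  <- surprise: state leaked between calls
print(append_good("a"))  # ['a']
print(append_good("b"))  # ['b']       <- each call gets its own list
```

Note that nothing here is a bug: the shared default is documented Python semantics. The design simply makes the wrong spelling the ergonomic one.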

Footguns aren’t bugs. The feature behaves exactly as documented. The problem is a design property: when a tired human or a confident agent reaches for the tool, the path of least resistance is the damaging path. The dangerous behavior is the default; the safe behavior requires more effort, more vigilance, or knowledge the user didn’t bring.

The word is old C folklore (“C gives you enough rope to shoot yourself in the foot”) and has been sharpened by practitioners over the years into its modern form. Forrest Brazeal gave the cleanest version of the operative rule: the word blames the design, not the user. If every user who touches a feature eventually hurts themselves with it, the feature is the problem.

Why It Matters

Every tool you hand an agent is a potential footgun. The agent’s bash tool can rm -rf. Its write tool can clobber. Its database tool can DROP. Its MCP server can exfiltrate. Agents reach for whatever is easiest in the moment, and a footgun is, by definition, easy. That puts the concept at the center of how you design agent tool surfaces.

Agents also make footguns worse in a specific way. A human reaches for a footgun occasionally; an agent running in a loop reaches for it at machine speed, at machine scale, across many files and many sessions. The blast-radius-per-minute of a footgun in agent hands is orders of magnitude higher than in a human’s. You don’t have days to notice the mistake; you have seconds.

Agents don’t just use footguns. They create them. Agent-generated CLIs with --force flags that skip confirmation. Agent-generated schemas with cascading deletes as the default. Agent-generated code that swallows errors silently. Every one of these is a fresh footgun aimed at whoever inherits the code next. In an agentic pipeline, that next reader is often another agent.

The concept also unifies mitigations the book already covers. Make Illegal States Unrepresentable is the type-level defense. Fail Fast and Loud is the runtime defense. Sandbox, Least Privilege, and Approval Policy are the structural and policy defenses. Footgun is the observational lens that sits above all of them: this is more dangerous than it looks like it is.

How to Recognize It

Footguns don’t announce themselves. They look like normal features in the documentation, because they are normal features — right up until somebody uses them wrong. The reviewer’s question is never “does this work?” (it does), but “what happens when somebody reaches for this without thinking?”

A few specific tells:

  • The default is the dangerous one. If the safe behavior requires an explicit flag and the dangerous behavior is what you get by typing the plain command, the design is upside down. Git’s push --force vs. --force-with-lease is the canonical example: the right flag is longer and less known than the wrong one.
  • The correct invocation depends on knowledge outside the call site. strcpy is safe only if you know the destination buffer is large enough. That knowledge lives somewhere else. Anything the user must remember to check is a footgun candidate.
  • The error is non-local. The call looks innocuous; the damage shows up three layers and two weeks away. Footguns love to violate Local Reasoning.
  • Retrying is destructive. Non-idempotent side effects become footguns the moment an agent retries a failed operation. See Idempotency.
  • The reversal path doesn’t exist, or is expensive. DROP, force-push, rm, chmod 000 on the wrong directory: the common footgun signature is “one character wrong and you can’t undo it.”
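The retry tell can be made concrete. In this hypothetical sketch (the charge functions and ledger are invented for illustration, not a real payments API), the naive operation double-bills when an agent retries after a spurious timeout, while the idempotency-keyed version turns the retry into a no-op:

```python
ledger = []          # pretend payment ledger
processed = set()    # idempotency keys already honored

def charge_naive(customer, amount):
    # Non-idempotent: every call appends a new charge.
    ledger.append((customer, amount))

def charge_idempotent(customer, amount, key):
    # Same request key twice -> the second call does nothing.
    if key in processed:
        return
    processed.add(key)
    ledger.append((customer, amount))

# An agent retries after a timeout it couldn't distinguish from failure:
charge_naive("alice", 30)
charge_naive("alice", 30)              # duplicate charge: the footgun fires
charge_idempotent("bob", 30, "req-1")
charge_idempotent("bob", 30, "req-1")  # retry is harmless
```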

A useful heuristic for agent tools: if you would hesitate to give this command to a sleep-deprived junior engineer, don’t give it to an agent either. The agent’s confidence is higher and its fatigue is constant.

How It Plays Out

A team hands an agent a database admin tool that wraps psql with no restrictions. The prompt asks it to “clean up orphaned test records.” The agent reasons its way to DELETE FROM users WHERE email LIKE '%@test.com';. Production has real customers whose addresses happen to match. The tool did exactly what it said. The footgun was giving an agent unrestricted DELETE privileges in the first place.
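One way to defuse that scenario is to make the wrapper, not the prompt, enforce the restriction. A minimal sketch under stated assumptions (the guard logic and the table allowlist are invented for illustration; this is not a real psql wrapper):

```python
import re

ALLOWED_WRITE_TABLES = {"test_records"}  # least privilege: only test data

def guarded_sql(query: str) -> str:
    q = query.strip().rstrip(";")
    verb = q.split()[0].upper()
    if verb == "SELECT":
        return q  # reads pass through untouched
    m = re.match(r"DELETE\s+FROM\s+(\w+)", q, re.IGNORECASE)
    if not m or m.group(1).lower() not in ALLOWED_WRITE_TABLES:
        raise PermissionError(f"refusing destructive query: {q!r}")
    return q

guarded_sql("SELECT * FROM users")                 # fine: read-only
guarded_sql("DELETE FROM test_records WHERE 1=1")  # fine: allowlisted table
# guarded_sql("DELETE FROM users WHERE email LIKE '%@test.com'")
#   -> raises PermissionError: the scenario's footgun can no longer fire
```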

A developer asks an agent to “speed up the deploy script.” The agent spots a --dry-run guard at the top and removes it, correctly reading the code as a flag check. What it misses is that the flag is the only thing keeping the script from mutating production. The refactored script is cleaner, shorter, and catastrophic on first invocation. The footgun was designing the dry-run as a flag to remove rather than an inversion to opt into.
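The inversion that scenario calls for can be sketched as follows: instead of a dry-run flag whose removal enables mutation, mutation requires an explicit executor object, so deleting the guard yields a harmless script rather than a destructive one. All names here (DryRun, Live, deploy) are invented for illustration:

```python
class DryRun:
    """Default executor: reports what would happen, touches nothing."""
    def run(self, cmd):
        print(f"[dry-run] {cmd}")
        return "skipped"

class Live:
    """Mutating executor: must be constructed deliberately."""
    def run(self, cmd):
        print(f"[live] {cmd}")
        return "executed"  # a real version would shell out here

def deploy(executor=None):
    # Safe by default: an agent "simplifying" this function by dropping
    # the executor argument gets a no-op deploy, not a production mutation.
    executor = executor or DryRun()
    return executor.run("kubectl apply -f prod.yaml")

deploy()        # harmless
deploy(Live())  # the destructive path requires explicit opt-in
```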

An MCP server ships with an install_package tool that auto-approves any package the agent names. A prompt injection hidden in a scraped README tells the agent to install requests-lib, which is a real package, just not the one the author meant, and happens to contain a credential exfiltrator. The server’s author built a footgun by giving the agent permission to install arbitrary code without a Trust Boundary.

An agent generates a command-line tool and, following patterns it has seen many times, adds a --force flag that bypasses all safety checks. The author ships it. Six weeks later, a user copy-pastes the command from Stack Overflow with --force appended “to make it work,” and the tool destroys their home directory. The footgun is the --force flag itself. The agent manufactured it by imitation.

Warning

The most dangerous footguns hide inside tools you already trust. A CLI you have used a hundred times gets a new subcommand with a different default. A database driver’s new major version changes what happens on connection timeout. An agent framework adds a “helpful” auto-retry that turns non-idempotent operations into footguns. Audit the footgun surface of your tools after every upgrade, not just at adoption time.

Consequences

Once you name the lens, the question becomes mechanical. For any tool in the agent’s toolbox, ask: (1) what is the worst thing this tool can do? (2) how many steps from the agent’s default behavior is that worst thing? (3) what’s the reversal path? Rank tools by the product of blast radius and reachability. Defuse the worst three. Repeat.
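The ranking step can be as simple as a spreadsheet or a few lines of code. A hypothetical scoring pass (the tool names and the numeric scores are invented for illustration):

```python
# blast: worst-case damage, 1-10.  reach: how few steps from the agent's
# default behavior the worst case is (10 = one plain invocation away).
# A reversal path halves the risk.
tools = [
    {"name": "bash",       "blast": 9,  "reach": 8, "reversible": False},
    {"name": "write_file", "blast": 6,  "reach": 9, "reversible": True},
    {"name": "psql",       "blast": 10, "reach": 7, "reversible": False},
    {"name": "web_fetch",  "blast": 3,  "reach": 9, "reversible": True},
]

def risk(t):
    score = t["blast"] * t["reach"]
    return score // 2 if t["reversible"] else score

worst_three = sorted(tools, key=risk, reverse=True)[:3]
for t in worst_three:
    print(t["name"], risk(t))  # defuse these first, then repeat
```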

The defusing moves are well-known, and footgun thinking gives them a shared target:

  • Remove the feature if the safe use case is marginal. A tool nobody uses is a tool nobody misuses.
  • Redesign so the safe path is the easy path. Make illegal states unrepresentable. Invert the default so it takes effort to opt into the dangerous behavior.
  • Rail off via Sandbox, Least Privilege, or an Approval Policy. A footgun you can’t reach is a footgun defused.
  • Accept and document when the other moves are impossible. Document the hazard clearly, arrange mitigations at the next layer up, and set expectations so readers and agents don’t stumble in.
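The rail-off move can be sketched as a thin wrapper that routes destructive calls through an approval callback, so the footgun still exists but is no longer one step away. The tool names and the policy here are invented for illustration:

```python
DESTRUCTIVE = {"rm", "drop_table", "force_push"}

def make_gated(tool_name, fn, ask_human):
    """Wrap a tool so destructive ones require explicit approval."""
    def gated(*args, **kwargs):
        if tool_name in DESTRUCTIVE and not ask_human(tool_name, args):
            raise PermissionError(f"{tool_name} denied by approval policy")
        return fn(*args, **kwargs)
    return gated

# Usage: an auto-deny policy for unattended runs.
auto_deny = lambda name, args: False
rm = make_gated("rm", lambda path: f"removed {path}", auto_deny)
ls = make_gated("ls", lambda path: f"listed {path}", auto_deny)

ls("/tmp")  # fine: not destructive, passes straight through
# rm("/tmp/x") -> raises PermissionError under the auto-deny policy
```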

Two failure modes on the lens itself are worth naming. The first is footgun nihilism: “everything is a footgun, so nothing can be fixed.” This loses the signal in the noise. The second is footgun inflation: calling any slightly surprising API a footgun. Keep the bar high. A footgun makes the default path damaging, not merely surprising. If the dangerous behavior requires deliberate effort, you’re probably looking at a sharp tool, not a footgun, and sharp tools have their place.

  • Related: Smell (Code Smell) — a smell is a symptom in existing code; a footgun is a property of a design that invites future damage.
  • Related: Smell (AI Smell) — agent-generated footguns (silent error swallowing, --force defaults, cascade deletes) are a flavor of AI smell.
  • Dual of: Load-Bearing — load-bearing names “matters more than it looks like it does”; footgun names “more dangerous than it looks like it is.”
  • Violated by: Local Reasoning — footguns frequently punish the reader who reasons locally; the call looks safe, the damage is elsewhere.
  • Prevented by: Make Illegal States Unrepresentable — the type-level defense against a large class of footguns.
  • Prevented by: Fail Fast and Loud — footguns tend to fail slowly and quietly; fail-fast is the runtime counter.
  • Manifests as: Silent Failure — the silent-failure footgun is the most dangerous kind.
  • Rated by: Blast Radius — the size of the crater when the footgun fires.
  • Railed off by: Sandbox — structural isolation for tools that can’t be redesigned.
  • Railed off by: Least Privilege — a footgun the agent can’t reach is a footgun defused.
  • Gated by: Approval Policy — the policy-layer defense for unavoidable footguns handed to agents.
  • Bounded by: Bounded Autonomy — scope limits on what the agent can touch reduce the footgun surface.
  • Audited via: Tool — every tool exposed to an agent deserves a footgun audit.
  • Weaponized by: Tool Poisoning — a tool-poisoning attack is a footgun planted on purpose.
  • Prevented by: Idempotency — idempotent operations turn retry-class footguns into no-ops.

Sources

  • The term “footgun” emerged from C-language practitioner folklore (the old line about C giving the programmer “enough rope to shoot yourself in the foot”) and was sharpened into its modern noun form across forums, mailing lists, and essays in the 2000s and 2010s. Wiktionary’s footgun entry captures the stabilized definition.
  • Forrest Brazeal’s widely quoted formulation (that the word places blame on the design, not the user) gave the concept its operative ethical grip. The framing appeared on his social channels and has since become the default citation when practitioners define the term.
  • Ken Kantzer’s essay “5 Software Engineering Foot-guns” offers a concrete practitioner taxonomy covering common cases in C, SQL, and container configuration.
  • Matt Rickard’s short piece “Avoiding Footguns” develops the mitigation question: when you find one, should you remove it, redesign it, rail it off, or document it?
  • The principle that bad defaults are the root of most footguns has deep roots in human-factors and interaction design, most visibly in Don Norman’s The Design of Everyday Things (Doubleday, 1988; originally titled The Psychology of Everyday Things), which argued for designs that make the right action the easy action.
  • The agentic framing, in which every tool handed to an agent is a candidate footgun and agents manufacture new footguns by imitation, emerged across the practitioner community in 2025 and 2026 as teams began shipping agent-generated code and agent-accessible tool surfaces at production scale.

Further Reading