Ownership

Concept

A phenomenon to recognize and reason about.

Ownership answers “who is responsible for this code?” When nobody can answer that question, the code decays.

“Weakly owned code has on average six times more bugs than code with a strong owner.” — Bird et al., Microsoft Research, 2011

Understand This First

  • Conway’s Law – ownership boundaries become system boundaries.
  • Team Cognitive Load – ownership scope must fit within the team’s capacity to reason about it.
  • Boundary – ownership requires clear boundaries around what belongs to whom.

What It Is

Ownership answers a direct question: when this code breaks at 2 AM, whose phone rings?

In small teams, the answer is obvious. Everyone built everything, everyone knows the system, and whoever is awake handles the problem. But as systems grow, ownership fragments. Different teams handle different services, different modules, different layers. The clarity of “we all own it” gives way to ambiguity: the billing module was written by a contractor who left, the authentication layer was contributed by three teams over two years, and the data pipeline was built during a hackathon and never formally assigned to anyone.

Microsoft Research studied this empirically across Windows Vista and Windows 7. They tracked who contributed code to each binary. Files where many engineers each contributed small amounts (“weakly owned” files) had six times more bugs than files with a clear owner. The finding replicated across codebases and time periods. The mechanism isn’t mysterious: when many people contribute with no single person responsible for coherence, the code accumulates inconsistent interfaces, misaligned assumptions, and gaps that nobody feels accountable for filling.

Ownership operates on a spectrum. At one end, strong ownership means one person or team is responsible for a component, reviews every change, and maintains its architectural integrity. At the other, collective ownership means the whole team owns the whole codebase, anyone can change anything, and the team maintains coherence through shared conventions and continuous review. Both can work. What fails is the middle: code that has no clear owner and no collective accountability. That’s where defects concentrate.

Why It Matters

Two forces have made ownership harder to maintain.

The first is organizational complexity. Modern software systems span dozens of services, each with its own deployment pipeline, schema, and conventions. Teams split, merge, reorganize, and hand off responsibilities. A service built by Team A gets transferred to Team B during a reorg, but Team B never fully understands Team A’s design decisions. The code still runs. Nobody feels responsible for its long-term health.

Matthew Skelton calls this the difference between ownership and stewardship: ownership is about possession; stewardship is about care. A team that merely owns code treats it as territory. A team that stewards code maintains it for the people who come after them.

The second force is AI-generated code. When agents produce hundreds of lines per hour, the volume of code that needs an owner grows faster than any team’s capacity to adopt it. The 2025 DORA report found developers merged 98% more pull requests with AI tools, each 154% larger. That code has to belong to someone. If no one reads it carefully enough to understand it, no one truly owns it, and the six-to-one bug ratio from Microsoft’s research applies to agent-generated code just as it applies to code written by a rotating cast of human contributors.

Agent systems sharpen the question: who owns the code an agent writes? The agent itself has no memory of it next session. The developer who prompted the agent may not have read the output carefully. The team lead approved the pull request but didn’t trace every line. Leading teams are converging on a model of “delegate, review, and own,” where agents handle first-pass execution and humans retain ownership of architecture, tradeoffs, and outcomes. If no human has internalized the design decisions embedded in agent-generated code, that code is effectively unowned from the moment it merges.

How to Recognize It

Ownership gaps don’t look like crises. They look like friction that everyone accepts as normal.

Watch for files that nobody wants to modify. Every team has them: the configuration parser that grew organically over three years, the middleware layer that “works but nobody understands why,” the test suite that nobody trusts enough to prune. These are symptoms of absent ownership. The code runs, so nobody fixes it. Nobody fixes it, so nobody learns it. Nobody learns it, so nobody owns it.

In codebases with version control, ownership is measurable. Count the contributors to each file or module over the past year. Files with many contributors and no dominant one are weakly owned. Files where the most recent substantial contributor has left the team are orphaned. These metrics don’t tell you everything, but they flag where to look.
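As a rough sketch of that measurement, the contributor counts can be reduced to a per-file ownership profile: who contributed most, what share of commits they made, and how many distinct authors touched the file. This is a simplified illustration, not Bird et al.'s exact metric (they distinguish major from minor contributors by commit share); the 50% threshold and the sample history below are assumptions, and in practice the input would come from parsing something like `git log --name-only --format='%an'`.

```python
from collections import Counter

def ownership_profile(commits_by_file):
    """Given {path: [author, author, ...]} (one entry per commit
    touching the file), return {path: (top_author, top_share, n_authors)}.
    A low top_share combined with many authors flags weakly owned code."""
    profile = {}
    for path, authors in commits_by_file.items():
        counts = Counter(authors)
        top_author, top_count = counts.most_common(1)[0]
        profile[path] = (top_author, top_count / len(authors), len(counts))
    return profile

# Hypothetical history: one strongly owned file, one weakly owned file.
history = {
    "billing/parser.py": ["ana"] * 9 + ["ben"],
    "auth/middleware.py": ["ana", "ben", "carla", "dev", "ana", "ben"],
}

for path, (owner, share, n_authors) in ownership_profile(history).items():
    label = "weak" if share < 0.5 else "strong"  # threshold is an assumption
    print(f"{path}: top={owner} share={share:.0%} authors={n_authors} -> {label}")
```

A refinement worth considering: weight recent commits more heavily, and treat files whose dominant contributor has left the team as orphaned regardless of their share.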

In agent workflows, ownership gaps show up as a lack of continuity between sessions. An agent refactors a module in one session, and a different agent (or the same agent with a fresh context) reworks the same module next session with different assumptions. No one reconciles the two passes. The code accumulates contradictory design decisions because no persistent owner maintains a coherent vision for it.

How It Plays Out

A fintech company runs twelve microservices. Each was built by a small team with clear ownership. Over two years, three teams reorganize and two senior engineers leave. Five services now sit in a gray zone: technically assigned to teams that inherited them but never invested in understanding them.

Bug reports for these services take three times longer to resolve. Deploys happen less frequently because the teams aren’t confident in their changes. A new VP of engineering runs an ownership audit, mapping each service to a team and asking “do you feel confident making changes to this service?” Three services score below 2 out of 5. She reassigns them to teams with adjacent domain knowledge and gives each team a month to learn the service before taking on feature work. Resolution times improve within a quarter.

A development team uses AI agents to generate new API endpoints. Each endpoint ships fast, tests pass, and the feature works. Six months later, someone needs to change the pagination strategy across all endpoints. The code looks different in each one: different error handling conventions, different response envelope structures, different approaches to query parameter validation. No human ever owned the collection of endpoints as a coherent whole. Each was generated, reviewed superficially, and merged.

The team spends two weeks reconciling the designs before they can make the cross-cutting change. They institute a new rule: every agent-generated module gets a human owner who reads the code, understands the design decisions, and is accountable for consistency with the rest of the codebase.

Tip

When an agent generates code, assign a human owner before merging. That owner doesn’t need to have written the code, but they need to understand it well enough to maintain it. If no one can explain why the code works the way it does, it isn’t ready to merge.
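One way to make that assignment enforceable rather than aspirational is a CODEOWNERS file, which platforms such as GitHub and GitLab use to require a listed owner's review before a change to matching paths can merge. The paths and team names below are illustrative, not from any real repository:

```
# CODEOWNERS — the last matching pattern wins.
# Every path should resolve to a human or team owner,
# including directories where agent-generated code lands.

*                       @org/platform-team
/billing/               @org/payments-team
/auth/                  @ana
/services/generated/    @org/api-team   # agent output still needs a named owner
```

Pairing this with branch protection turns the ownership rule into a merge gate: unowned paths become visible as gaps in the file rather than surprises at 2 AM.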

Consequences

Clear ownership costs something. It requires someone to invest time understanding code they didn’t write, reviewing changes they didn’t initiate, and maintaining coherence across a component’s lifetime. For agent-generated code, this means human review that goes beyond “does it pass tests” to “do I understand the design well enough to change it next month.”

The payoff is reliability and speed over time. Owned code gets maintained. Bugs get fixed by people who understand the context. Architectural drift gets caught before it compounds. The Microsoft research finding holds across every replication study: clear ownership correlates with fewer defects, faster resolution, and more consistent design.

Stewardship is the more durable framing. Ownership implies control: “this is mine.” Stewardship implies responsibility: “I’m taking care of this.” In a world where agents generate code and teams reorganize, nobody can claim permanent authorship. But someone always needs to be responsible for the code’s health. The question isn’t “who wrote it?” It’s “who will fix it when it breaks?”

  • Depends on: Conway’s Law – ownership boundaries become architectural boundaries; unowned code sits in the cracks between teams.
  • Depends on: Team Cognitive Load – ownership scope must fit within the team’s capacity to reason about the code.
  • Depends on: Boundary – clear boundaries make ownership assignable; blurred boundaries create ownership disputes.
  • Enables: Cohesion – a single owner maintains consistent design within a module.
  • Enables: Bounded Context – ownership gives bounded contexts the accountability they need to maintain model consistency.
  • Contrasts with: Coupling – cross-team coupling creates shared ownership, which weakens accountability.
  • Informed by: Human in the Loop – human-in-the-loop review is the mechanism by which humans maintain ownership of agent-generated code.
  • Informed by: Approval Policy – approval gates enforce ownership by requiring the owner’s review before changes merge.

Sources

  • Christian Bird, Nachiappan Nagappan, Brendan Murphy, Harald Gall, and Premkumar Devanbu studied the relationship between code ownership and software quality across Microsoft’s Windows codebase in “Don’t Touch My Code! Examining the Effects of Ownership on Software Quality” (2011). Their finding that weakly owned files had six times more defects than strongly owned files has been replicated multiple times, including by Michaela Greiler in a 2015 replication study.
  • Matthew Skelton distinguished stewardship from ownership in his QCon London 2026 keynote on Team Topologies as infrastructure for AI agency. His framing — caring for systems for future users rather than merely possessing them — reframes ownership as an ongoing responsibility rather than a territorial claim.
  • The DORA 2025 State of DevOps Report documented the AI productivity paradox that makes ownership harder: individual output increases while organizational coherence stays flat, producing more code that needs owners faster than teams can adopt it.