Regenerative Software
Design systems so that individual components can be deleted and rebuilt from their specifications and tests, treating code as a disposable output of a durable design rather than the durable thing itself.
Also known as: Phoenix Architecture
Understand This First
- Component – the unit that regeneration works on.
- Boundary – what must stay stable while the inside changes.
- Contract – the agreement that survives regeneration.
- Eval – the correctness signal that lets a new implementation prove itself against the old one.
Context
Until recently, code was expensive. A module took days or weeks of careful human attention to write, and that attention was preserved inside the code itself as a kind of tacit asset. Infrastructure, by contrast, was cheap and disposable: servers were cattle, not pets. Immutable Infrastructure, which Chad Fowler named in 2013, pushed the disposability idea to its conclusion by declaring that no one should log in and fix a running server. Burn it and rebuild.
With a capable coding agent in the loop, the economics of the code itself start to look like the economics of servers in 2013. Ten thousand lines of a plausible implementation take minutes and a few dollars to produce. What is expensive now is not typing the code but understanding it, trusting it, and keeping track of what it is supposed to do. Chad Fowler and others have extended the old disposability thesis from servers to code, and the framing is beginning to show up across the agentic-coding literature. The question for the designer is: given that code is now cheap to produce, which parts of the system should you still treat as durable, and which parts should you plan to throw away?
Problem
When code is cheap to write but opaque to the humans who ostensibly own it, in-place maintenance quietly becomes the most expensive thing the team does. Every bug fix requires re-reading the agent’s output. Every small feature touches code nobody can summarize in a sentence. Maintenance costs climb while delivery velocity flatlines. The obvious response (“let the agent rewrite this”) is worse. A naive regeneration drops the undocumented edge-case fix from last July, silently changes a rounding behavior a downstream consumer depended on, and breaks the build at 4 p.m. on Friday.
So you can’t keep the old code forever, and you can’t let the agent throw it away. How do you design so that regeneration is safe, routine, and local, rather than a scary one-time rewrite the team only dares to run once a decade?
Forces
- Code the agent produces is fast to generate but slow for humans to comprehend, so retaining code that nobody genuinely understands accumulates a debt the original author can no longer help pay down.
- A working implementation contains years of fixes to problems nobody wrote down, so discarding it without first capturing what it does throws away real information.
- Consumers of a component depend on behaviors the component’s interface never formally promised, so any regeneration that preserves only the documented interface will still break someone.
- The unit of regeneration matters: rebuilding a whole application from scratch is nearly always unsafe, but rebuilding a well-bounded component with a tight contract is often boring.
- Different parts of the system change at very different rates, and treating fast-changing and slow-changing code the same way is a category error.
Solution
Treat the specification, the boundary, and the evaluations as durable; treat the code inside that boundary as a regenerable output. The design work is to decide which assets are durable and to invest in them directly, so that regenerating the code becomes a routine operation rather than a crisis.
Five architectural preconditions make this stance practical.
Give every regenerable component a boundary that survives its implementation. The Interface, Contract, and type signature of the component are the things callers depend on. They should be readable without opening the implementation, and they should not change every time the code behind them is rewritten. If the boundary leaks implementation details, regeneration forces cascading changes elsewhere and stops being cheap.
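One way to make a durable boundary concrete, sketched in Python (the `TaxCalculator` name, the rate, and the rounding rule are invented for illustration, not taken from the source):

```python
from typing import Protocol


class TaxCalculator(Protocol):
    """The durable boundary: callers import this, never the implementation."""

    def tax_due(self, amount_cents: int, region: str) -> int:
        """Return tax in cents. Deterministic; rounds half up on cents."""
        ...


class FlatRateCalculator:
    """Today's regenerable implementation, living behind the boundary."""

    def __init__(self, rate_bp: int) -> None:
        self.rate_bp = rate_bp  # basis points, e.g. 700 = 7%

    def tax_due(self, amount_cents: int, region: str) -> int:
        # Round half up -- promised at the boundary, so a regenerated
        # implementation cannot silently change it.
        return (amount_cents * self.rate_bp + 5_000) // 10_000
```

The point of the sketch is that `FlatRateCalculator` can be deleted and rebuilt at will; only `TaxCalculator`, including its documented rounding behavior, has to survive.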
Write evaluations that define correctness independently of the current code. A test that reads like “this is what the function does” freezes today’s implementation into the test suite. A test that reads like “this is what any valid implementation must do” survives a rewrite. Good candidates are Consumer-Driven Contract Tests, property-based tests over the public interface, golden-input-output pairs recorded from production, and Evals that score outcomes rather than inspecting internals. The stronger this layer, the more you can trust a regenerated implementation.
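A minimal sketch of that eval layer, using only the standard library (a hand-rolled property check plus golden pairs; the tax function and its recorded values are illustrative assumptions):

```python
import random


def check_implementation(tax_due) -> None:
    """Outside-in eval: any valid implementation must pass, not just today's."""
    # Golden input/output pairs recorded from production (illustrative values).
    goldens = [((10_000, "US"), 700), ((99, "US"), 7), ((0, "US"), 0)]
    for (amount, region), expected in goldens:
        assert tax_due(amount, region) == expected, (amount, region)

    # Properties over the public interface, not assertions about internals.
    rng = random.Random(0)  # fixed seed keeps the eval reproducible
    for _ in range(1_000):
        amount = rng.randrange(0, 1_000_000)
        tax = tax_due(amount, "US")
        assert 0 <= tax <= amount            # tax never exceeds the amount
        assert tax_due(amount, "US") == tax  # deterministic


def flat_rate_7pct(amount_cents: int, region: str) -> int:
    return (amount_cents * 700 + 5_000) // 10_000


check_implementation(flat_rate_7pct)
```

Note what the checks avoid: nothing here inspects how `flat_rate_7pct` computes its answer, so a regenerated implementation with different internals can pass the same suite.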
Assign exclusive mutation authority for any piece of state to exactly one component. Regeneration is only safe when you can destroy and rebuild without corrupting data the rest of the system depends on. If five services all write to the same table, no one of them is regenerable; you must fix the ownership problem first. The concepts of Source of Truth, Bounded Context, and Aggregate are the design tools for getting this right.
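The single-writer rule can be sketched as an in-process ownership structure (the `LedgerOwner` name and its API are hypothetical; in a service architecture the same shape appears as one service owning one table):

```python
class LedgerOwner:
    """Exactly one component may mutate the ledger; everyone else reads."""

    def __init__(self) -> None:
        self._balances: dict[str, int] = {}

    def post(self, account: str, delta_cents: int) -> None:
        """The only write path in the whole system."""
        self._balances[account] = self._balances.get(account, 0) + delta_cents

    def balance(self, account: str) -> int:
        return self._balances.get(account, 0)

    def snapshot(self) -> dict[str, int]:
        # Readers get copies, never a handle to the mutable store.
        return dict(self._balances)
```

Because every write flows through `post`, the internals (the dict, the storage format, the whole class body) can be destroyed and regenerated without any other component corrupting the data.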
Automate replacement so it stops feeling exceptional. Parallel Change, Feature Flags, shadow traffic, canary deploys, and comparison tests are the machinery that turns “we rewrote this” from a heroic effort into a Tuesday. A team that has these tools routinely running over a handful of small changes is already set up to run them over a full-component regeneration.
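The shadow-traffic piece of that machinery is small enough to sketch directly (a generic wrapper, assuming pure functions; real services would log the divergence report rather than collect it):

```python
def shadow(old, new, report):
    """Serve the old implementation while running the new one in its
    shadow; divergences go to `report` instead of reaching callers."""
    def wrapped(*args):
        expected = old(*args)
        try:
            candidate = new(*args)
            if candidate != expected:
                report(args, expected, candidate)
        except Exception as exc:  # a buggy shadow must never hurt callers
            report(args, expected, exc)
        return expected  # callers only ever see the old result
    return wrapped
```

Flipping which implementation is "old" is the feature-flag step; deleting the shadow once the divergence log stays empty is the cleanup.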
Name the pace layers. Some things in a system change weekly: UI glue code, feature flags, internal plumbing. Some change monthly: service internals, algorithms, storage formats. Some change yearly: data schemas, public APIs. Some should never change at all, including contracts with external partners and regulatory commitments. Decide, explicitly, which layer each thing sits in, and only regenerate at the right cadence. The UI component you rewrite every sprint is not the contract you promised a payments integrator for the life of the business.
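Making the layers explicit can be as simple as a reviewable table plus a gate (the asset names and the `may_regenerate` helper are illustrative, not from the source):

```python
from enum import Enum


class Pace(Enum):
    WEEKLY = 1   # UI glue, feature flags, internal plumbing
    MONTHLY = 2  # service internals, algorithms, storage formats
    YEARLY = 3   # data schemas, public APIs
    FROZEN = 4   # partner contracts, regulatory commitments


# Explicit, reviewable assignment of each asset to a layer.
PACE_LAYERS = {
    "ui/paged_table": Pace.WEEKLY,
    "billing/internals": Pace.MONTHLY,
    "db/schema": Pace.YEARLY,
    "partners/payments_api": Pace.FROZEN,
}


def may_regenerate(asset: str, cadence: Pace) -> bool:
    """Gate a regeneration request against the asset's declared layer:
    a sweep at a given cadence touches that layer and faster ones only."""
    layer = PACE_LAYERS[asset]
    return layer is not Pace.FROZEN and layer.value <= cadence.value
```

The value of the table is less the gate than the argument it forces: someone has to write down, and defend in review, which layer each asset sits in.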
The slogan is: code is cheap, comprehension is expensive, and contracts are sacred. Build the system around those facts.
How It Plays Out
A frontend team owns a paged-table component that appears on a dozen screens. They follow all five preconditions. The component has a documented prop interface, a small suite of rendering and interaction tests that exercise the interface from the outside, and a single state hook that owns the page’s local store. Every few months, an agent proposes swapping the implementation over to a newer design-system primitive. The team skims the diff, runs the tests, flips a feature flag on a single screen first, and watches the error budget. A week later the flag is on everywhere and the old code is deleted. The rewrite was not a project. It was a minor PR.
A platform team tries the same move on an analytics microservice and gets burned. They ask an agent to regenerate the service from its unit tests, and the agent produces code that passes every test while quietly rounding monetary values differently than the previous implementation. Three downstream reports subtly drift over the next week before anyone notices. The postmortem finds the root cause: the unit tests tested the existing implementation’s behavior, not the behavior the business needed. The consuming reports were the real test oracle, and no one had promoted that truth into an explicit eval. The team writes a boundary-level comparison suite that replays one day of production traffic through both implementations and flags any numeric divergence, then tries again. That round goes fine.
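The comparison suite the team ended up with can be sketched in a few lines (the two tax functions standing in for the analytics service are invented to reproduce the rounding bug):

```python
def replay_and_compare(traffic, old_impl, new_impl):
    """Replay recorded inputs through both implementations and collect
    every divergence; for monetary values the comparison is exact."""
    divergences = []
    for request in traffic:
        before, after = old_impl(request), new_impl(request)
        if before != after:  # no tolerance: a cent of drift is a finding
            divergences.append((request, before, after))
    return divergences


# Stand-ins for the two implementations: same unit tests might pass,
# but the rounding behavior differs.
def rounds_half_up(amount_cents):
    return (amount_cents * 700 + 5_000) // 10_000


def truncates(amount_cents):
    return amount_cents * 700 // 10_000


# A slice of recorded traffic exposes what the unit tests never asserted.
findings = replay_and_compare([10_000, 99, 50], rounds_half_up, truncates)
```

On this input the round-number case agrees and the odd amounts diverge, which is exactly the shape of the incident: the tested paths pass while real traffic drifts.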
An infrastructure team wonders whether to regenerate their database-access layer under a new ORM. The pace-layers framing answers before anyone opens an editor. The access layer sits against the schema, which is a yearly-or-slower asset; the current ORM works; no business requirement is pushing a change. The right regeneration cadence for this layer is “when the schema itself changes or the ORM stops being supported,” and neither trigger has fired. They close the ticket and do something else. Using the framework to decide not to regenerate is as much the point as using it to decide to regenerate.
Before letting an agent regenerate a component, ask: if the regeneration silently changed the component’s behavior, what would catch it? If the honest answer is “a human reading the diff,” the component is not yet regenerable. Invest in the eval layer first; only then turn the agent loose.
Consequences
Regenerative practice changes which parts of the system you spend your attention on. You invest more up front in the durable assets (boundaries, evals, and data ownership) and less in reading and understanding every line of every implementation. The implementations get younger over time instead of older, because each regeneration can adopt a newer library, a simpler idiom, or a better pattern without rewriting the whole system. When a new implementation misbehaves, the blast radius is a single component and the fix is a rollback, not an all-hands incident.
The cost is discipline, and the discipline isn’t optional. Without clear boundaries, without evals that define correctness from the outside, and without single-writer data ownership, “regeneration” is just letting the agent churn out fresh slop on top of old slop. Teams that skip the preconditions and try to regenerate anyway get the worst of both worlds: the opacity of agent-written code plus the instability of a perpetual rewrite.
The pattern also sits uncomfortably with authorship-based ownership. An engineer who says “I wrote this module, therefore I own it” has a harder time watching an agent replace their work every quarter than an engineer who says “I’m responsible for what this module does, whoever happens to have typed the current version.” Regenerative teams tend to frame responsibility as stewardship rather than authorship, and teams that can’t make that shift struggle with the pattern no matter how strong their technical foundations are.
Finally, picking the wrong unit of regeneration breaks the pattern outright. Regenerating an entire application is almost never safe; regenerating a single well-bounded component with a strong contract almost always is. Most of the engineering judgment in this pattern is in choosing the right unit.
Related Patterns
- Uses: Strangler Fig – the migration technique for moving a legacy system into a shape where each component is regenerable; Regenerative Software is the target state, Strangler Fig is one way to get there.
- Uses: Parallel Change – the interface-level mechanism that lets a regeneration swap implementations without breaking consumers.
- Uses: Feature Flag – the runtime switch that makes each regeneration reversible on the order of seconds.
- Uses: Consumer-Driven Contract Testing – the evaluation technique that defines correctness from the consumer’s side, so a regeneration cannot silently break downstream callers.
- Uses: Eval – the outside-in correctness signal that determines whether a new implementation is acceptable.
- Uses: Bounded Context and Source of Truth – the data-ownership structures that make it safe to destroy and rebuild a component’s internals.
- Refined by: Evolutionary Modernization – the broader strategic framing; Regenerative Software is what “evolutionary” looks like at the per-component level once agents are writing the code.
- Contrasts with: big-bang rewrite – a big-bang rewrite treats the whole system as the unit of regeneration, which is exactly the unit that doesn't work.
- Mitigates: Technical Debt – regeneration under strong boundaries and evals pays down debt by replacing aging code rather than patching it.
- Related: Architecture – the five preconditions are architectural commitments; a system has to be built for regeneration before an agent can safely perform it.
- Related: Ownership – stewardship-over-authorship is the cultural prerequisite for a team that rewrites its own code on a cadence.
- Related: Architecture Fitness Function – automated integrity checks are one form of the outside-in eval that makes regeneration safe.
Sources
Chad Fowler’s 2013 talk “Trash Your Servers and Burn Your Code” named Immutable Infrastructure, the lineage text for every later claim that running systems should be disposable. His 2025 essays “Phoenix Architecture” and “Regenerative Software” extend that disposability thesis from servers to the code itself, arguing that a capable coding agent makes individual components as cheap to regenerate as servers became in the cloud era.
Martin Fowler’s earlier “Sacrificial Architecture” essay (martinfowler.com/bliki) framed the whole-system variant of the same idea: sometimes the right move is to build a system expecting to throw it away. Regenerative Software is the per-component refinement that agentic economics made practical.
Neal Ford, Rebecca Parsons, and Patrick Kua’s Building Evolutionary Architectures (O’Reilly, 2017; 2nd ed. 2023) contributed the fitness-function idea that regenerative practice leans on: automated, outside-in checks that define architectural qualities a valid implementation must preserve. Without fitness functions, there is no signal that tells a team whether a regenerated component is actually correct.
The idea of pace layers, that different parts of a system change at different rates and should be designed accordingly, comes from Stewart Brand’s How Buildings Learn (1994) and was adapted to software by Simon Wardley and others. The regenerative framing uses pace layers as a design tool for deciding which parts of the system to treat as durable.
Further Reading
- Chad Fowler, Trash Your Servers and Burn Your Code – the 2013 talk that introduced the disposability thesis at the infrastructure layer, which is the throughline for every later application of the idea.
- Neal Ford, Rebecca Parsons, Patrick Kua, and Pramod Sadalage, Building Evolutionary Architectures, 2nd ed. (O’Reilly, 2023) – the canonical reference on fitness functions and guided architectural change.
- Stewart Brand, How Buildings Learn (Viking, 1994) – the original pace-layers framing; the software adaptations all trace back to chapter 2.