How This Book Writes Itself

An autonomous engine maintains this Encyclopedia. It researches topics, writes articles, edits them, reorganizes structure, credits the thinkers behind the ideas, evaluates its own process, and deploys changes to the live site — all in a continuous loop, without anyone pressing a button.

That last part matters. Other systems automate pieces of the writing process. Some can draft book-length content. Some can edit what they’ve written. A few can even publish without human approval. What we haven’t found is a public system that closes the whole loop: writing, editing, deploying, and rewriting its own process based on what it observes about its own output — continuously, for a structured book. Here’s the comparison:

What Came Before

| System | Writes | Edits | Deploys | Continuous loop | Book-scale | Self-evaluating |
|---|---|---|---|---|---|---|
| EACP engine | Yes | Yes | Yes | Yes | Yes | Yes |
| AuthorClaw / OpenClaw | Yes | Yes | No | No | Yes | No |
| Claude Book (Houssin) | Yes | Yes | No | No | Yes | No |
| Trusted AI Agents (De Coninck) | Yes | Yes | No | Partial | Yes | No |
| Living Content Assets | Yes | Yes | Partial | Yes | No (blogs) | No |
| WordPress AI Agents | Yes | Yes | No (approval) | No | No | No |
| ARIS (Auto-Research) | Yes | Yes | No | Yes | No (papers) | No |
| Ouroboros | No (code) | Yes | Yes (git) | Yes | No | No |

“Book-scale” means a structured, multi-part work with internal cross-references, not a feed of independent posts. “Continuous loop” means the system keeps running across open-ended cycles without manual re-triggering — not just a one-shot chain of handoffs, but an ongoing process that revisits and revises its own output over time. “Self-evaluating” means the system measures its own performance and rewrites its own procedures — not just producing content, but evolving how it produces content. Private systems may exist that match this profile; this comparison covers only what’s publicly documented.

The Loop

The engine follows a Steering Loop: observe the state of the book, pick the most useful thing to do next, do it, and loop back. Each cycle, it decides between several kinds of work — researching new topics, writing articles, editing existing ones, reorganizing structure, deploying to the live site, and a few others. The scheduling isn’t random. The engine tracks what it did last and when, then leans toward whatever’s been neglected longest, weighted by how much that kind of work matters right now. Writing and editing get priority over housekeeping, but nothing gets starved.
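The scheduling idea above can be sketched in a few lines. This is an illustrative reconstruction, not the engine's actual code: the work kinds, weights, and function names are assumptions. The core is a score that multiplies time-since-last-run by importance, so high-priority work wins ties but neglected housekeeping eventually outscores everything.

```python
# Hypothetical sketch of neglect-weighted scheduling. Assumes the engine
# records a last-run timestamp per kind of work; the kinds and weights
# here are illustrative, not the engine's documented configuration.

WORK_KINDS = {
    # kind: importance weight (writing and editing outrank housekeeping)
    "write": 3.0,
    "edit": 3.0,
    "research": 2.0,
    "reorganize": 1.0,
    "deploy": 1.0,
}

def pick_next_work(last_run: dict, now: float) -> str:
    """Lean toward whatever has been neglected longest, weighted by importance."""
    def score(kind: str) -> float:
        neglect = now - last_run.get(kind, 0.0)  # time since this kind last ran
        return neglect * WORK_KINDS[kind]        # neglect grows without bound,
                                                 # so nothing gets starved
    return max(WORK_KINDS, key=score)
```

Because the neglect term is unbounded, even the lowest-weighted category eventually produces the top score, which is what keeps housekeeping from being starved outright.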

A writing cycle produces a complete article that didn’t exist 15 minutes earlier. The engine picks a topic it previously researched, consults the style guide, and drafts the piece from scratch. An editing cycle works retroactively — it picks an article that hasn’t been reviewed in a while, reads it against the prose standard, and fixes what it finds. A deploy cycle builds the site and pushes changes live.

The result is a book that grows, improves, and ships on its own schedule.

Its Own Patterns

Here’s where this gets self-referential. The engine is built from the same patterns it teaches. If you’ve read other chapters, you’ll recognize the pieces.

Before any cycle starts, the engine loads fresh context: the style guide, the article template, whatever’s relevant to the work at hand. That’s Feedforward — the agent doesn’t wing it; it reads the rules every time.

How does it decide what to work on? It checks persistent state that records what happened in previous cycles and what hasn’t been touched recently. That’s a Feedback Sensor.

After the work is done, the engine builds the site locally and checks for broken links. If the build fails, it fixes the problem before committing. That’s a Verification Loop.
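The shape of that loop can be sketched generically. The sketch below injects the checks and fix steps as callables, since the actual build tooling and repair logic aren't documented; only the structure (check everything, fix failures, retry, commit only on a clean pass) is the point.

```python
# Illustrative sketch of a Verification Loop: run every check, attempt a
# fix for each failure, retry, and commit only when all checks pass. The
# checks, fix, and commit callables are stand-ins for real tooling (a site
# builder, a link checker, git) that this sketch does not assume.

def verification_loop(checks, fix, commit, max_attempts: int = 3) -> bool:
    """checks: list of (name, fn) pairs where fn() -> bool.
    Returns True only if commit() ran after a fully clean pass."""
    for _ in range(max_attempts):
        failures = [name for name, check in checks if not check()]
        if not failures:
            commit()
            return True
        for name in failures:
            fix(name)  # repair before committing, never after
    return False  # give up and escalate rather than ship a broken build
```

The key design choice is that `commit` is unreachable until every check passes, which is exactly the "fix the problem before committing" guarantee described above.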

The rules the engine follows are written in version-controlled files it reads at the start of every cycle — Instruction Files. Its knowledge persists between cycles through Memory: mechanical state in one place, editorial decisions in another. It evaluates its own articles against the prose standard using the same approach described in Eval. And the pattern it deliberately minimizes but doesn’t eliminate is Human in the Loop.

The Engine Watches Itself

The most unusual part isn’t that the engine writes and edits. It’s that the engine evaluates its own process and changes it.

Periodically, the engine steps back from content work and looks at how it’s performing. It reads its own activity log, checks whether different kinds of work are balanced, and looks for signs of trouble — backlogs building up, articles churning without stabilizing, certain tasks running dry. When it finds a problem, it diagnoses the cause and rewrites the procedures it follows in future cycles.
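One of those balance checks can be sketched concretely. This is a hedged reconstruction, assuming the activity log reduces to a sequence of work-kind labels; the threshold is illustrative, not the engine's actual tuning.

```python
from collections import Counter

# Hypothetical sketch of a self-evaluation balance check: flag any kind of
# work that has consumed more than its share of recent cycles. The log
# format (a flat list of kind labels) and the 50% threshold are assumptions.

def detect_imbalance(log: list, max_share: float = 0.5) -> list:
    """Return the kinds of work dominating the recent activity log."""
    counts = Counter(log)
    total = len(log)
    return [kind for kind, n in counts.items() if n / total > max_share]
```

A flagged kind is a symptom, not a diagnosis; in the process described above, the engine would still have to trace why that work dominated before rewriting any procedure.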

There’s a guardrail here: the engine can modify its own workflow, but it can’t modify the criteria it uses to evaluate that workflow. That would be the fox guarding the henhouse. The evaluation standards and the outer operational boundaries require the owner’s hand.

The Meta Report is the engine’s lab notebook. Each entry records what it measured, what it learned, and what it changed. It’s written by the engine itself, for readers who want to see self-evaluation in action.

Stories From the Engine’s History

The engine running today isn’t the one that launched. It has rewritten its own procedures, shifted its own priorities, and fixed its own bugs across dozens of self-evaluation cycles. A few stories from that history:

The research binge. Early on, the engine spent a disproportionate amount of its time researching new topics. Ideas piled up far faster than they could be written. The self-evaluation cycle spotted the imbalance, diagnosed it as a scheduling problem, and adjusted the priorities so that writing and editing got more of the engine’s attention. The backlog shrank. Then the pendulum swung too far: the idea pipeline dried up, and the engine had nothing new to write about. The next evaluation caught that too, and rebalanced. The system found equilibrium through two corrections, not one.

The bug that fixed itself. The engine noticed that freshly written articles weren’t getting their first editorial review. Drafts kept piling up while editing cycles chased other priorities. It wrote a rule: when too many articles are sitting unreviewed, drafts jump to the front of the editing queue. But the rule had a bug — a mislabeled reference that pointed back to the step the rule was supposed to skip. The override never fired. The next evaluation cycle caught the error, traced it to the mislabel, and rewrote the rule with correct references and a logging requirement so the same kind of mistake would be visible in the future.
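The corrected rule might look something like the sketch below. Every name and the threshold are hypothetical; what matters is the shape of the fix: the override reorders the queue correctly, and it logs when it fires so a later evaluation cycle can see whether it is actually working.

```python
import logging

# Hypothetical reconstruction of the corrected draft-backlog override.
# The queue representation, threshold, and logger name are illustrative.

logger = logging.getLogger("engine.editing")

def order_editing_queue(unreviewed_drafts, other_work, backlog_threshold: int = 5):
    """Drafts jump the front of the editing queue once too many sit unreviewed."""
    if len(unreviewed_drafts) >= backlog_threshold:
        # The logging requirement: make the override visible to future
        # evaluation cycles instead of failing silently.
        logger.info("draft backlog override fired: %d unreviewed",
                    len(unreviewed_drafts))
        return list(unreviewed_drafts) + list(other_work)
    return list(other_work) + list(unreviewed_drafts)
```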

Learning to ignore idle work. One category of work found nothing to do for several consecutive cycles. Rather than keep checking, the engine lowered that category’s priority, freeing time for work that actually had pending tasks.
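That adjustment amounts to a simple decay rule. A minimal sketch, assuming the engine tracks a consecutive-idle-cycle count per category (the threshold and decay factor are invented for illustration):

```python
# Hypothetical sketch: halve a category's scheduling priority once it has
# found nothing to do for several consecutive cycles. Threshold and factor
# are illustrative.

def decay_idle_priority(priority: float, idle_streak: int,
                        threshold: int = 3, factor: float = 0.5) -> float:
    """Lower the priority of a work category that keeps coming up empty."""
    return priority * factor if idle_streak >= threshold else priority
```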

None of these required anyone to intervene. The engine measured its own performance, identified what wasn’t working, changed its own procedures, and verified the fix had the intended effect. The patterns described elsewhere in this book — steering loops, feedback sensors, evals, instruction files — aren’t abstractions here. They’re the machinery that makes self-improvement possible.

The Human’s Role

The owner designed this system. He wrote the style guide, defined the article template, set the scheduling logic, and established what “done” looks like for each kind of work. Those decisions live in version-controlled documents the engine reads every cycle.

The engine operates within those bounds on its own. It doesn’t ask permission to write an article, edit a paragraph, or deploy the site. It does stop and ask for anything that requires credentials, external accounts, or spending money that wasn’t pre-authorized.

Everything is transparent. The git log shows every change, attributed to a specific cycle. If the engine makes a bad editorial call, the owner can see it and revert it. This is the Instruction File pattern in practice: autonomy within explicit, readable, version-controlled bounds. The agent doesn’t guess at the owner’s preferences. It reads them.

Note

The engine can also edit its own editorial process — rewriting procedures, adjusting priorities, adding rules to the style guide. What it can’t do is modify its own evaluation criteria or the operational boundaries that define what’s in and out of scope. Those require the owner’s hand.

The engine doesn’t just produce content. It watches how it produces content, diagnoses what’s working and what isn’t, and changes its own process to do better next time. What makes this unusual isn’t any one piece — it’s that all the pieces are running together, continuously, for a book.