Interactive Explanations
When an agent writes code you don’t yet understand, ask it to build a small interactive visualization that animates how that code actually behaves, and use the visualization to form the intuition a static description can’t give you.
Also known as: Explain-Yourself Visualization, Self-Explaining Artifact, Animated Walkthrough, Visual Code Narration.
Understand This First
- Verification Loop — verification asks “does it work?”; interactive explanations ask “do I understand it?”
- Agent — the agent is the thing that both generates the code and, in a second pass, renders it legible to you.
- Tool — the agent uses its normal file-writing and preview tools; no new infrastructure is required.
Context
At the agentic level, interactive explanations are the companion practice to reading code you didn’t write. The situation is familiar: you’ve asked an agent to implement something non-trivial, the code compiles, the tests pass, and you can see that the behavior is correct. You still don’t know why it’s correct. The algorithm inside, whether a placement heuristic, an allocation strategy, or a merge rule, is opaque. You have a working artifact and a hollow mental model.
Reading the code straight through sometimes closes the gap. For anything with a time dimension or a spatial one, it usually doesn’t. A paragraph describing “Archimedean spiral placement with per-word random angular offset” tells a practiced reader enough to nod; it tells most readers nothing they can picture. An interactive explanation closes that gap by letting the agent do the second thing it’s unusually good at: turn an algorithm into a visible, steerable demonstration.
Problem
How do you build real understanding of code that an agent wrote, without either reading every line carefully enough to reconstruct the author’s intent or just shrugging and trusting that the tests cover what matters?
Agents produce more code than any human can carefully read. That gap is where cognitive debt accumulates: the codebase is correct, the tests are green, and nobody on the team can confidently predict what any of it does on unfamiliar inputs. The usual remedies (code review, documentation, architecture notes) don’t scale to the pace at which agents ship, and they don’t help with the specific kind of blindness that algorithmic code produces. You can read a packing algorithm ten times and still not see what it looks like when it runs.
Forces
- Reading is linear; many algorithms are inherently spatial or temporal, and linear text is a poor medium for them.
- Comments and prose explanations describe the algorithm at one remove; they tell you what the author thought happened, not what happens.
- Building visualizations by hand used to be too expensive to justify for internal understanding, so people skipped it; agents have collapsed that cost.
- An explanation the agent writes about its own code can inherit the same blind spots as the code itself; the visualization has to render actual execution, not a narrated summary.
- Interactive controls (pause, step, scrub) cost little to add but change the asset from a one-read artifact into a reusable tool for the team and future readers.
Solution
After the agent finishes the implementation, ask it to build a small HTML or notebook page that animates the running code and exposes timeline controls: play, pause, step forward, step back, and a scrubbable slider.
The page is a companion artifact, not production code. It lives beside the feature, in a docs/ or explainers/ folder, and its only job is to make the algorithm’s behavior visible and pokable. A good interactive explanation has four properties:
- It runs the actual code, or a faithful reduction of it. The visualization renders the algorithm’s real steps, not a cartoon version. If the real code uses a spiral search, the animation shows the spiral; if it uses a priority queue, the animation surfaces the queue. A narration that glosses over the mechanism is worse than nothing because it creates confidence without understanding.
- It exposes time as a first-class control. Whatever the algorithm does, the reader can pause it, step by one iteration, and scrub backwards. This is what separates an interactive explanation from a GIF. You learn by replaying the moment just before the behavior surprised you.
- It invites input. Let the reader paste their own text into the word cloud, upload their own graph to the layout demo, or twist the parameter the algorithm is most sensitive to. The reader forms intuition by feeding the thing examples and watching what it does.
- It’s throwaway-cheap. The page is under two hundred lines of mostly generated code. If it ages out, rebuild it. The value is in the act of making it and using it during the week the feature is new, not in maintaining it as a polished deliverable.
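The record-then-scrub structure behind those timeline controls can be sketched in a few lines. This is a hypothetical harness, not a required shape: the algorithm yields one snapshot per iteration, and the timeline indexes into the recorded frames, which is what makes step-back and scrubbing essentially free.

```python
def algorithm_steps(items):
    """Yield one snapshot per iteration of a (hypothetical) placement loop."""
    placed = []
    for item in items:
        placed.append(item)
        # Snapshots must be copies: the scrub bar replays history, so later
        # iterations must not mutate earlier frames.
        yield {"current": item, "placed": list(placed)}

class Timeline:
    """Play/pause/step/scrub over pre-recorded snapshots."""
    def __init__(self, snapshots):
        self.frames = list(snapshots)  # materialize the real run once
        self.i = 0

    def step_forward(self):
        self.i = min(self.i + 1, len(self.frames) - 1)
        return self.frames[self.i]

    def step_back(self):
        self.i = max(self.i - 1, 0)
        return self.frames[self.i]

    def scrub(self, fraction):
        """Jump to a position in [0, 1], as a slider would."""
        self.i = round(fraction * (len(self.frames) - 1))
        return self.frames[self.i]

tl = Timeline(algorithm_steps(["alpha", "beta", "gamma"]))
```

Because the frames are recorded from a single real execution, the page never re-runs the algorithm while the reader scrubs, and every frame is guaranteed to come from actual behavior rather than a narrated approximation.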
Order of work matters. Don’t ask for the visualization before the code is right; you’ll end up animating the wrong algorithm and learning the wrong thing. Don’t fold the two requests into one prompt either, because the agent will either truncate the implementation or produce a shallow demo. Finish the code, get the tests green, then in a fresh turn say “now build an animated HTML page that shows how this algorithm actually runs, with step and scrub controls, accepting arbitrary input.”
When you ask for the explanation, pass the agent the module it just wrote as context, plus the specific algorithm you want animated. Be explicit that you want the visualization to execute the real logic, not a narrated approximation. “Animate the placement loop in word_cloud.py by running it and rendering each attempted position as the algorithm sees it” is more useful than “make me an animation of how the word cloud works.”
How It Plays Out
A developer uses an agent to build a word-cloud renderer. The agent produces a correct implementation in under a minute: it uses an Archimedean spiral to search for an empty place to drop each word, tries progressively larger radii, and rotates random words for better packing. The tests pass. The developer reads the code, understands the data flow, and still can’t picture what the algorithm does when words collide. The next prompt is “build a single-page HTML tool that animates the placement loop, accepts pasted text as input, and has pause, step, and a scrub bar.” Five minutes later the developer watches the word “language” get placed at the center, then watches subsequent words spiral outward, colliding, backing off, and settling. The spiral becomes obvious the moment it’s visible. Two follow-up changes to the real algorithm emerge directly from things the developer saw in the visualizer: a case where long words were getting pushed off-canvas, and an ordering issue that made the output depend on hash iteration order.
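The placement loop the developer watched might look roughly like this. The names and parameters are illustrative, not taken from the scenario's `word_cloud.py`; a real explainer would also record every *attempted* position, not just the final one, so the collisions and back-offs become visible.

```python
import math

def spiral_positions(cx, cy, step=2.0, max_r=200.0):
    """Archimedean spiral r = a*theta: candidate positions sweep outward
    from the center, which is the motion the animation makes visible."""
    theta = 0.0
    while True:
        r = step * theta / (2 * math.pi)   # radius grows one `step` per turn
        if r > max_r:
            return
        yield (cx + r * math.cos(theta), cy + r * math.sin(theta))
        theta += 0.3                        # angular increment per attempt

def overlaps(box, others):
    """Axis-aligned rectangle intersection test."""
    x, y, w, h = box
    return any(x < ox + ow and ox < x + w and y < oy + oh and oy < y + h
               for ox, oy, ow, oh in others)

def place_words(sizes, cx=150.0, cy=150.0):
    """Place each (width, height) box at the first spiral position where it
    collides with nothing already placed."""
    placed = []
    for w, h in sizes:
        for x, y in spiral_positions(cx, cy):
            box = (x - w / 2, y - h / 2, w, h)
            if not overlaps(box, placed):
                placed.append(box)
                break
    return placed
```

The off-canvas bug the developer spotted corresponds to the `max_r` boundary here: a word that never finds a free spot within the search radius is silently dropped, which is exactly the kind of behavior that is invisible in the code and obvious in the animation.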
A backend engineer asks an agent to implement a two-level cache with a promotion heuristic. The code works, but the engineer can’t tell whether the heuristic is tuned reasonably without feeding it a week of real traffic. The engineer asks the agent to build a small page that replays a sample access log against the cache and draws the L1 and L2 contents over time, coloring each entry by how recently it was promoted. Watching the replay makes two things obvious: the promotion threshold is too aggressive (many entries bounce between levels), and there’s a class of access patterns where the heuristic pins the wrong entry in L1 for minutes. Both of these would have required careful log analysis to discover from code alone.
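A minimal sketch of the replay idea, with an invented promotion heuristic (promote a key to L1 on its second access) standing in for whatever the real cache uses. The `frames` list is what the page would draw, one picture of L1 and L2 per access.

```python
from collections import OrderedDict

def replay(log, l1_size=2, l2_size=4, promote_after=2):
    """Replay an access log against a toy two-level cache, recording the
    L1/L2 contents after every access -- the frames the explainer draws."""
    l1, l2, hits, frames = OrderedDict(), OrderedDict(), {}, []
    for key in log:
        if key in l1:
            l1.move_to_end(key)                  # L1 hit: LRU refresh
        else:
            hits[key] = hits.get(key, 0) + 1
            if key in l2 and hits[key] >= promote_after:
                del l2[key]                      # promote L2 -> L1
                l1[key] = True
                if len(l1) > l1_size:
                    demoted, _ = l1.popitem(last=False)
                    l2[demoted] = True           # demote L1's LRU entry
            else:
                l2[key] = True                   # miss or cold entry: into L2
                l2.move_to_end(key)
            if len(l2) > l2_size:
                l2.popitem(last=False)           # evict L2's LRU entry
        frames.append({"access": key, "l1": list(l1), "l2": list(l2)})
    return frames
```

The bouncing the engineer noticed shows up directly in this structure: with an aggressive `promote_after`, the frame sequence shows keys oscillating between the `l1` and `l2` lists, which a scrub bar makes easy to catch.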
A team adopting an agent-written graph layout algorithm for their product documentation realizes nobody in the team understands the force-directed step well enough to review changes to it. Rather than block on review speed, they ask the agent to build an interactive explainer: the algorithm’s attract-and-repel forces rendered as arrows on each node, with a slider controlling the time step. The explainer becomes the team’s onboarding artifact for that corner of the codebase. New engineers spend fifteen minutes with it and can reason about the layout’s behavior afterwards; without the explainer, that same intuition used to take weeks of watching production bugs.
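One attract-and-repel step of the kind that explainer renders might be sketched like this, assuming a simple spring model; everything here is illustrative rather than the team's actual layout code. The per-node force vectors are what the page draws as arrows, and `dt` is the slider.

```python
import math

def layout_step(positions, edges, dt=0.05, k=50.0):
    """One step of a toy force-directed layout: all pairs repel (stronger
    when close), connected nodes attract toward ideal edge length k."""
    forces = {n: [0.0, 0.0] for n in positions}
    nodes = list(positions)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx = positions[b][0] - positions[a][0]
            dy = positions[b][1] - positions[a][1]
            d = math.hypot(dx, dy) or 1e-9
            rep = k * k / d                      # repulsion magnitude
            fx, fy = rep * dx / d, rep * dy / d
            forces[a][0] -= fx; forces[a][1] -= fy
            forces[b][0] += fx; forces[b][1] += fy
    for a, b in edges:
        dx = positions[b][0] - positions[a][0]
        dy = positions[b][1] - positions[a][1]
        d = math.hypot(dx, dy) or 1e-9
        att = (d - k) / d                        # spring toward length k
        fx, fy = att * dx, att * dy
        forces[a][0] += fx; forces[a][1] += fy
        forces[b][0] -= fx; forces[b][1] -= fy
    return {n: (positions[n][0] + dt * forces[n][0],
                positions[n][1] + dt * forces[n][1])
            for n in positions}
```

Driving this step in a loop while rendering `forces` as arrows is the whole explainer; the slider over `dt` lets a new engineer feel why too large a time step makes the layout oscillate instead of settle.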
A concrete follow-up prompt, sent after the implementation lands: “You wrote src/packing.py in the previous turn. In a new docs/packing-explainer.html, build a self-contained animated explainer for the main placement loop in that file. Use the real algorithm from the module (vendored inline is fine) to generate the animation, not a narrated approximation. Include: a text input for the packing candidates, a timeline scrub bar with play/pause/step-forward/step-back, and on-screen labels showing which iteration is current and what the algorithm just decided. Keep the whole page under 300 lines.”
Consequences
Interactive explanations turn “the agent wrote code I don’t understand” from a slow-motion problem into a five-minute one. The reader’s mental model builds against real execution, not against a paraphrase, which means the intuitions they form are the correct ones. The artifact also outlives the session: a good explainer serves new team members, review conversations, and the next agent session that needs to reason about the same code.
Three costs are worth naming. The visualization is additional work, even when it’s agent-written; if the code is simple enough to read directly, the explainer is overhead. The explainer can drift from the implementation if the implementation changes and the explainer doesn’t, which produces a confident-looking but subtly wrong artifact (handle this by regenerating the explainer whenever the underlying module changes meaningfully). And a self-rendered explanation inherits the biases of the agent that built it: if the agent misread the algorithm, the visualization will obligingly misread it too. A quick sanity check, feeding the explainer a case where you already know the expected behavior, catches this cheaply.
Related Patterns
- Refines: Verification Loop — verification checks that code is correct; interactive explanations check that you understand why it’s correct.
- Uses: Agent — the agent that wrote the code is also the one best positioned to render it intelligible.
- Uses: Tool — the agent uses its standard file-writing and preview tools; no special infrastructure is required.
- Uses: Artifact — the explainer page is a durable artifact that lives in the repository alongside the feature it describes.
- Uses: Code Mode — the explainer is typically a small self-contained page the agent generates as code, rendered by the browser rather than narrated in chat.
- Countered by: Technical Debt (cognitive debt) — cognitive debt is what accumulates when agent-written code stays opaque; interactive explanations are one practical way to pay it down before it compounds.
- Contrasts with: Code Review — review reads the code as written; interactive explanations watch the code as it runs.
- Related: Observability — both practices expose internal behavior that is otherwise invisible; observability targets running systems, interactive explanations target algorithms.
- Related: Agentic Manual Testing — both delegate understanding-producing work to the agent; manual testing asks whether a user-visible flow works, interactive explanations ask whether a developer-visible algorithm is intelligible.
- Related: Reflexion — reflexion is the agent explaining itself to itself for its next attempt; interactive explanations are the agent explaining itself to the human for theirs.
Sources
The practice of rendering an algorithm visually to build intuition is old: Bret Victor’s “Learnable Programming” essay (2012) and the broader “explorable explanations” movement popularized by Nicky Case and others in the 2010s established the core claim that static text is a poor medium for understanding systems with time or space in them. These predate agents entirely.
What agents changed is the cost. Hand-building an interactive explainer for internal use used to cost a day or more, which is why most teams skipped it. With an agent writing the visualization in minutes, the economics flip: it becomes cheap enough to produce for any algorithm where the team’s intuition is thin, which in practice means most algorithms in a new codebase. The pattern emerged in the agentic coding practitioner community over 2025–2026 as practitioners noticed they could ask the same agent that wrote the code to produce a companion animation, and that the animation was usually more useful than the code comments it replaced.
The framing in terms of cognitive debt, the gap between code that ships and code that any human genuinely understands, was sharpened by Margaret-Anne Storey’s 2026 writing and the Triple Debt Model arXiv paper, which separate technical, cognitive, and intent debt as distinct categories with different repayment strategies. Interactive explanations are one of the cheaper payment methods available in the agentic era for the cognitive variant specifically.
Further Reading
- Bret Victor, Learnable Programming — the essay that argued visible execution is a prerequisite for genuine understanding; almost every subsequent interactive-explainer is a descendant of this piece.
- Nicky Case, Explorable Explanations — a curated gallery of interactive explainers across many domains; a useful source of format ideas when you’re stuck on what controls to expose.