Feedback Loop
A feedback loop is any arrangement where the output of a process becomes an input to the same process, allowing the system to self-correct, self-reinforce, or drift.
Understand This First
- Metric – you need a quantified signal before you can feed it back.
- Observability – you can’t close a loop around a system you can’t see into.
What It Is
A feedback loop exists whenever a system’s output circles back to influence its next action. A thermostat reads the room temperature (output of the heating system), compares it to the setpoint, and turns the furnace on or off. The loop closes because the furnace’s effect on the room is the very thing the thermostat measures next.
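The thermostat loop above can be sketched in a few lines. This is a toy model under stated assumptions: the heating and cooling rates per step are invented constants, not real thermal physics.

```python
# Minimal sketch of the thermostat loop: measure, compare to a setpoint,
# act, then measure the result of that action on the next iteration.
# The heating (+0.5) and cooling (-0.3) rates are toy assumptions.

def furnace_should_run(temp: float, setpoint: float) -> bool:
    """Comparator: is the room colder than the setpoint?"""
    return temp < setpoint

def simulate(temp: float, setpoint: float, steps: int) -> float:
    for _ in range(steps):
        if furnace_should_run(temp, setpoint):  # compare
            temp += 0.5                         # act: furnace heats the room
        else:
            temp -= 0.3                         # act: room cools naturally
        # the next iteration measures the effect of this action,
        # which is what closes the loop
    return temp

final_temp = simulate(temp=15.0, setpoint=20.0, steps=50)
```

Because the loop is negative (it corrects deviations), the temperature settles near the setpoint and stays there, oscillating slightly because the furnace is either fully on or fully off.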
Software is full of these loops. A CI pipeline runs tests on every commit; when tests fail, developers fix the code before the next push. A linter flags style violations and the developer adjusts. An on-call rotation pages an engineer when error rates spike, the engineer ships a fix, and the pages stop. All closed loops: measure, compare, act, measure again.
Two things distinguish a feedback loop from a one-time check. First, it’s continuous or recurring. A single test run is a check. Running tests on every commit is a loop. Second, the output actually influences the next input. If nobody reads the test results, the loop is open. Information flows out, but nothing flows back.
Why It Matters
Feedback loops are the architectural primitive that makes software systems adaptive. Without them, a system can only execute its initial instructions, oblivious to whether those instructions are producing good results. With them, the system converges on a goal because each cycle corrects the errors of the previous one.
In agentic workflows, the stakes compound. When a coding agent generates code, runs tests, reads the failures, and regenerates, it’s operating inside a feedback loop. The quality of that loop determines whether the agent converges on correct code or spins in circles. Short loops (type-checking during generation, linting after each file) catch errors early and cheaply. Long loops (integration tests after a full feature, user bug reports after deployment) catch errors the short loops missed, but at higher cost.
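The generate-test-regenerate cycle can be sketched as a loop. `run_tests` and `generate_fix` are hypothetical stand-ins for a real test runner and an agent's code generation; both are stubbed here so the loop structure itself is runnable.

```python
# Sketch of an agent's verification loop: generate code, run tests,
# read the failures, regenerate. The sensor and actuator are stubs;
# the marker string "edge case handled" stands in for a real fix.

def run_tests(code: str) -> list[str]:
    """Sensor (stub): return the names of failing tests."""
    return [] if "edge case handled" in code else ["test_edge_case"]

def generate_fix(code: str, failures: list[str]) -> str:
    """Actuator (stub): pretend the agent addresses the reported failures."""
    return code + "\n# edge case handled"

def agent_loop(code: str, max_iters: int = 5) -> tuple[str, int]:
    for attempt in range(max_iters):
        failures = run_tests(code)           # measure
        if not failures:                     # compare against the goal
            return code, attempt             # converged
        code = generate_fix(code, failures)  # act, then measure again
    return code, max_iters                   # loop budget exhausted

final_code, attempts = agent_loop("def convert(amount): ...")
```

The `max_iters` cap matters in practice: without it, a loop whose actuator never actually fixes the failure spins forever rather than converging.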
The concept also explains why some teams improve steadily and others stagnate. Teams with tight feedback loops (between deployment and monitoring, between code review and coding standards) correct course continuously. Add a loop between user complaints and product decisions and you close the gap between what shipped and what matters. Teams without those loops fly blind. An agent operating inside a well-designed loop can iterate faster than any human team, but an agent inside a poorly designed loop generates waste at the same accelerated speed.
How to Recognize It
Look for four components. A sensor that measures something about the system’s output. A comparator that evaluates the measurement against a goal or threshold. An actuator that changes the system’s behavior based on the comparison. And a delay: the time between the action and the next measurement. Every feedback loop has all four, though they aren’t always labeled.
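The four components can be made explicit as a generic structure. The names (`FeedbackLoop`, `delay_s`) and the callable shapes here are illustrative choices, not a standard API; any sensor, comparator, and actuator with these signatures fits the pattern.

```python
# The four components named above, made explicit as a generic loop.
# The specific class and field names are assumptions for illustration.
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackLoop:
    sensor: Callable[[], float]          # measure the system's output
    comparator: Callable[[float], bool]  # is the measurement acceptable?
    actuator: Callable[[], None]         # adjust the system's behavior
    delay_s: float                       # time between action and next measurement

    def run(self, max_cycles: int) -> int:
        for cycle in range(max_cycles):
            if self.comparator(self.sensor()):
                return cycle             # goal reached; the loop settles
            self.actuator()
            time.sleep(self.delay_s)
        return max_cycles

# Usage: drive an error count to zero, one correction per cycle.
state = {"errors": 3}
loop = FeedbackLoop(
    sensor=lambda: state["errors"],
    comparator=lambda n: n == 0,
    actuator=lambda: state.update(errors=state["errors"] - 1),
    delay_s=0.0,
)
cycles_needed = loop.run(max_cycles=10)
```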
When the loop corrects deviations from a goal, it’s a negative feedback loop. The thermostat is the classic example: too hot, turn off; too cold, turn on. Negative feedback stabilizes. In software, test suites, linters, code review, alerting, and Verification Loops are all negative feedback mechanisms. They push the system back toward a desired state.
When the loop amplifies deviations, it’s a positive feedback loop. A product that attracts users attracts more users because of network effects. Technical debt that makes code harder to change leads to more shortcuts, which creates more debt. In agentic workflows, an agent that generates low-quality code triggers more review cycles, which consumes context window space, which degrades the agent’s next attempt. Positive feedback loops are powerful when they compound good outcomes and destructive when they compound bad ones.
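The contrast between the two loop types comes down to the sign of the correction. In this toy simulation, the gain values and step counts are illustrative assumptions; the point is only that a negative gain shrinks the deviation each cycle while a positive gain compounds it.

```python
# Toy contrast of negative vs. positive feedback. A negative gain
# corrects a fraction of the error each step; a positive gain
# amplifies it. Gains and step counts are arbitrary illustrations.

def run(value: float, goal: float, gain: float, steps: int) -> float:
    for _ in range(steps):
        value += gain * (value - goal)  # gain < 0 corrects, gain > 0 amplifies
    return value

stabilized = run(value=30.0, goal=20.0, gain=-0.5, steps=10)  # converges toward 20
runaway    = run(value=30.0, goal=20.0, gain=+0.5, steps=10)  # deviation compounds
```

After ten steps the negative loop has shrunk the initial deviation of 10 by a factor of 0.5 per step, while the positive loop has multiplied it by 1.5 per step, which is the compounding behavior the paragraph above describes.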
How It Plays Out
A team builds a deployment pipeline with three feedback loops layered by speed. The fastest loop runs unit tests and linting in under a minute; the agent sees results before it finishes the next file. The middle loop runs integration tests in five minutes after each commit, catching interface mismatches between components. The slowest loop monitors production error rates daily, generating tickets when thresholds are crossed. A bug in a payment calculation slips past the fast and middle loops (the unit tests don’t cover a specific currency conversion edge case), but the production loop catches it within hours when the error rate for transactions involving Japanese yen spikes. The team adds a unit test for that case, tightening the fast loop so the same class of bug won’t reach production again. Each failure makes the inner loop smarter.
An engineering manager notices that code review turnaround has ballooned to three days. Developers context-switch away from the review’s feedback, so the comments don’t improve the next pull request. She shortens the loop: reviews must happen within four hours, and the team adopts a pairing rotation for complex PRs. Within a month, the same review comments stop recurring because developers absorb the feedback while the code is still fresh in their heads. Reviews weren’t bad before. A three-day delay just made the loop too slow to change behavior.
When configuring an agent’s workflow, make the fastest feedback loop as fast as possible. Type-checking during generation, linting after each file, and test execution after each logical change all close loops that catch errors before they compound. The cheapest bug to fix is the one the agent catches in the same turn it introduced.
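One concrete fastest-loop sensor is a syntax check that runs in milliseconds. The sketch below uses Python's built-in `compile()`; the function name `fast_check` and the report format are assumptions, not part of any particular agent framework.

```python
# Sketch of a fastest-loop check an agent workflow might run after
# every file edit: a syntax-level sensor via the built-in compile(),
# closing the loop long before a full test suite could run.

def fast_check(source: str, filename: str = "<edit>") -> list[str]:
    """Return problems found by the cheapest sensor available."""
    try:
        compile(source, filename, "exec")  # parse only; nothing is executed
    except SyntaxError as exc:
        return [f"{filename}:{exc.lineno}: {exc.msg}"]
    return []

ok_result  = fast_check("def f(x):\n    return x + 1\n")
bad_result = fast_check("def f(x)\n    return x + 1\n")  # missing colon
```

A real workflow would layer slower sensors (type checker, linter, unit tests) behind this one, escalating only when the cheap check passes.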
Consequences
Understanding feedback loops gives you a framework for diagnosing system behavior. When something drifts, ask: is there a loop that should be correcting this? If the loop exists, check whether the sensor is measuring the right thing, the comparator has the right threshold, the actuator can act effectively, and the delay is short enough. If the loop doesn’t exist, that’s your answer: build one.
Feedback loops carry costs. Each loop requires instrumentation, monitoring, and maintenance. The sensor needs to be accurate. The comparator needs a well-chosen threshold: too sensitive and you get noise, too loose and you miss real problems. And the actuator has to actually work, because an alert that nobody responds to is a broken loop. Loops also interact. Two loops operating at different speeds on the same system can interfere with each other, each “correcting” the other’s corrections in a pattern engineers call hunting or oscillation.
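The hunting behavior is easy to reproduce in a toy model: two controllers that are each stable on their own oscillate when both act on the same stale measurement. The gains below are illustrative assumptions chosen to make the effect visible.

```python
# Toy illustration of "hunting": each loop alone is stable, but when
# both correct the same deviation from one stale measurement, their
# combined correction overshoots and the value oscillates.

def correct(value: float, goal: float, gains: list[float]) -> float:
    """All controllers measure once, then each applies its correction."""
    error = value - goal                # one stale measurement, shared
    for gain in gains:
        value -= gain * error           # each loop "corrects" in full
    return value

def trace(value: float, goal: float, gains: list[float], steps: int) -> list[float]:
    history = [value]
    for _ in range(steps):
        value = correct(value, goal, gains)
        history.append(value)
    return history

alone    = trace(30.0, 20.0, gains=[0.8], steps=4)       # settles: 30, 22, 20.4, ...
together = trace(30.0, 20.0, gains=[0.8, 0.8], steps=4)  # hunts: 30, 14, 23.6, ...
```

With one controller the error shrinks monotonically; with two, the combined correction exceeds the error itself, so each cycle overshoots the goal from the opposite side.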
The biggest risk is the illusion of control. A green dashboard full of passing metrics can convince a team that everything is fine, while the things that matter most aren’t being measured at all. Feedback loops only correct what they measure. The gaps between your loops are where surprises live.
Related Patterns
- Depends on: Metric – metrics provide the quantified signals that loops compare against goals.
- Depends on: Observability – you can only close a loop around an observable system.
- Refined by: Verification Loop – the agent-specific feedback loop: generate, test, read results, regenerate.
- Refined by: Steering Loop – a feedback loop at the governance level: observe agent behavior, decide whether to continue or intervene.
- Uses: Feedback Sensor – the sensor component that collects signals for the loop to act on.
- Uses: Feedforward – feedforward guides prevent errors before they happen; feedback loops correct errors after they happen. The two are complementary.
- Related: Regression – regressions are what feedback loops detect when a previous fix stops working.
- Related: Test – tests are the most common sensor in a development feedback loop.
- Related: Continuous Integration – CI is a feedback loop with a specific trigger (code commit) and sensor (automated test suite).
- Contrasts with: Silent Failure – a silent failure is what happens when no feedback loop covers a particular failure mode.
Sources
Norbert Wiener’s Cybernetics (1948) established feedback loops as the central concept of control theory, showing that self-correcting behavior in machines, organisms, and organizations all share the same structure: a sensor, a comparator, and an actuator connected in a closed circuit.
W. Edwards Deming applied feedback loops to organizational improvement through the Plan-Do-Check-Act (PDCA) cycle, demonstrating that continuous quality improvement depends on closing the loop between action and measurement.
Martin Fowler’s treatment of Continuous Integration describes CI as a feedback loop that gives developers rapid signals about integration problems, with loop speed as the critical design parameter.
The LangChain State of Agent Engineering report (2026) documents the emerging practice of layered feedback loops in agentic systems, where offline evaluation (test sets), online monitoring (production telemetry), and human review operate as three loops at different speeds and granularities.