
Protocol

Pattern

A reusable solution you can apply to your work.

Understand This First

  • API – a protocol governs behavior over the surface that an API defines.

Context

At the architectural level, once you have an API (a surface where two systems meet) you still need rules for how the conversation unfolds over time. A protocol is that set of rules. It defines who speaks first, what messages are valid at each step, how errors are signaled, and when the interaction is complete.

Protocols are what make distributed systems possible. The internet runs on layered protocols: TCP ensures reliable delivery, HTTP structures request-response exchanges, and TLS encrypts the channel between them. But protocols aren’t limited to networking. Any structured interaction between components follows a protocol, whether it’s a database transaction, a file transfer, an authentication handshake, or an AI agent calling a tool through MCP. Some protocols are formally specified in RFCs; others are implicit conventions that live only in code.

Problem

Two systems need to interact reliably, but they don’t share memory, may not share a clock, and either one could fail at any moment. Without agreed-upon rules, communication degenerates into guesswork: one side sends a message the other doesn’t expect, timeouts are ambiguous, and failures cascade silently.

Forces

  • Reliability vs. simplicity: A protocol that handles retries, acknowledgments, and error recovery is more reliable but also more complex.
  • Flexibility vs. predictability: A protocol that allows many optional behaviors is flexible but harder to implement correctly.
  • Performance vs. safety: Handshakes and confirmations add latency but prevent data loss and confusion.
  • Standardization vs. custom fit: Using a standard protocol (HTTP, MQTT, gRPC) gets you broad tooling support but may not fit your interaction model perfectly.

Solution

Define the valid sequence of messages between participants, including how each side should respond to normal messages, errors, and timeouts. A good protocol specifies:

  • Message format: What each message looks like and what fields it contains.
  • State transitions: What messages are valid given the current state of the conversation (you can’t send data before authenticating, for example).
  • Error handling: How failures are reported and what recovery looks like (retry? abort? ask again?).
  • Termination: How both sides know the interaction is complete.
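The state-transition rules above can be sketched as a small table-driven state machine. This is a toy authenticate-then-send protocol invented for illustration; the message names and states are not from any real specification:

```python
from enum import Enum, auto


class State(Enum):
    """Conversation states for a toy authenticate-then-send protocol."""
    START = auto()
    AUTHENTICATED = auto()
    CLOSED = auto()


# State-transition rules: which message types are valid in each state,
# and which state each valid message moves the conversation into.
TRANSITIONS = {
    (State.START, "AUTH"): State.AUTHENTICATED,
    (State.AUTHENTICATED, "DATA"): State.AUTHENTICATED,
    (State.AUTHENTICATED, "CLOSE"): State.CLOSED,
}


class ProtocolError(Exception):
    """Raised when a message arrives that the current state does not allow."""


def handle(state: State, message_type: str) -> State:
    """Advance the conversation, or reject an out-of-order message."""
    next_state = TRANSITIONS.get((state, message_type))
    if next_state is None:
        raise ProtocolError(f"{message_type!r} is not valid in state {state.name}")
    return next_state
```

Trying `handle(State.START, "DATA")` raises `ProtocolError`: you can't send data before authenticating. Encoding the rules as data rather than scattered `if` statements makes the protocol auditable, since the full set of legal transitions sits in one table.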

In practice, you’ll usually build on established protocols rather than inventing new ones. HTTP gives you request-response semantics. WebSockets give you bidirectional streaming. OAuth defines the authentication dance. The skill is in choosing the right protocol for your interaction pattern and implementing it correctly.

In agentic coding, protocols are pervasive. Every tool call follows one: the agent sends a request in a specified format, the tool processes it, and returns a structured response. The Model Context Protocol standardizes how agents discover and invoke tools across providers. The A2A protocol defines how agents communicate with each other. Multi-step agent workflows, where an agent plans, executes, observes, and replans, are themselves protocols, even when nobody has written them down as such.
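A tool call in this style is a structured request-response exchange; MCP, for instance, builds on JSON-RPC 2.0. The sketch below shows the shape of such an exchange. The tool name `search_docs` and its arguments are hypothetical, but the envelope (`jsonrpc`, `id`, `method`, `params`) follows the JSON-RPC convention:

```python
import json

# A JSON-RPC 2.0 style request for a tool invocation.
# The tool "search_docs" and its arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_docs", "arguments": {"query": "protocol"}},
}

# A structured response: the agent can rely on the envelope's shape
# (matching id, a result field) even though each tool's payload differs.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 matches found"}]},
}

wire = json.dumps(request)  # what actually crosses the boundary
```

The protocol lives in the envelope: the `id` lets the caller correlate responses with requests, and the fixed structure means a generic client can talk to tools it has never seen before.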

How It Plays Out

An agent needs to authenticate with a third-party service using OAuth 2.0. This involves multiple steps: redirect the user to the provider, receive an authorization code, exchange it for an access token, then use that token on subsequent requests. Each step must happen in order, with specific data passed at each stage. Getting the protocol wrong (sending the token request before receiving the code, for example) means authentication fails.
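One way to get the ordering right is to make out-of-order steps unrepresentable in code. This sketch tracks the flow's state so the token exchange cannot happen before the code arrives; the provider URL and client credentials are placeholders, and the token-request fields follow the OAuth 2.0 authorization code grant:

```python
class OAuthFlow:
    """Tracks progress through the authorization-code dance so steps
    cannot be performed out of order. Endpoint and credentials are
    placeholders, not a real provider."""

    def __init__(self) -> None:
        self.code: str | None = None

    def authorization_url(self) -> str:
        # Step 1: where to redirect the user (placeholder provider URL).
        return ("https://provider.example/authorize"
                "?response_type=code&client_id=CLIENT_ID")

    def receive_callback(self, code: str) -> None:
        # Step 2: the provider redirects back with an authorization code.
        self.code = code

    def token_request(self) -> dict:
        # Step 3: exchange the code for an access token. Enforce ordering:
        # requesting a token before receiving the code is a protocol violation.
        if self.code is None:
            raise RuntimeError("cannot exchange a token before receiving the code")
        return {
            "grant_type": "authorization_code",
            "code": self.code,
            "client_id": "CLIENT_ID",
            "client_secret": "CLIENT_SECRET",
        }
```

Calling `token_request()` on a fresh flow raises immediately, turning the silent failure mode described above into an explicit, debuggable error.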

Note

Many bugs in distributed systems are protocol violations: sending a message the other side doesn’t expect in the current state. When debugging integration failures, checking whether both sides agree on the protocol state is often the fastest path to the root cause.

A team designs a webhook system where their service notifies external applications when data changes. They must define a protocol: What does the notification payload look like? Should the receiver acknowledge receipt? What happens if the receiver is down? Does the sender retry, and how many times? These decisions shape the reliability of the entire integration.
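One possible set of answers, sketched in code: treat a 2xx response as acknowledgment, retry a fixed number of times with backoff, and give up after that. The attempt count and backoff policy here are illustrative choices, not a standard:

```python
import time


def deliver_with_retry(send, payload, max_attempts=3, backoff_seconds=1.0):
    """Attempt webhook delivery; a 2xx status counts as acknowledgment.

    `send` is any callable that takes the payload and returns an HTTP
    status code (or raises ConnectionError if the receiver is down).
    max_attempts and backoff are policy decisions the team must make.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            status = send(payload)
        except ConnectionError:
            status = None  # receiver unreachable counts as a failed attempt
        if status is not None and 200 <= status < 300:
            return True  # acknowledged
        if attempt < max_attempts:
            time.sleep(backoff_seconds * attempt)  # back off between retries
    return False  # exhausted retries; caller may dead-letter the event
```

Even this tiny sketch surfaces the design questions from the paragraph above: what counts as an acknowledgment, how many retries, and what happens to events that are never delivered.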

Example Prompt

“Implement the OAuth 2.0 authorization code flow for our app. Handle each step in order: redirect to the provider, receive the callback with the authorization code, exchange it for an access token, and store the token securely.”

Consequences

A well-defined protocol makes interactions between systems predictable and debuggable. When both sides follow the rules, failures are detectable and recoverable. Standard protocols also unlock tooling: HTTP debugging proxies, gRPC code generators, and OAuth libraries all save enormous effort.

The cost is rigidity. Protocols are hard to change once deployed because both sides must upgrade in coordination. Complex protocols get implemented incorrectly more often than simple ones. Every protocol also bakes in assumptions about timing, ordering, and reliability that may not hold in all environments.

  • Depends on: API — a protocol governs behavior over the surface that an API defines.
  • Uses: Event — event-driven architectures rely on protocols to define how events are published, delivered, and acknowledged.
  • Enables: Concurrency — protocols that handle concurrent messages correctly make concurrent systems feasible.
  • Contrasts with: Determinism — protocols must account for nondeterministic factors like network latency and partial failure.
  • Implemented by: MCP — the Model Context Protocol standardizes how agents discover and call tools.
  • Implemented by: A2A — the Agent-to-Agent Protocol defines how agents communicate and delegate tasks to each other.

Sources

  • Vint Cerf and Bob Kahn defined the Transmission Control Protocol in “A Protocol for Packet Network Intercommunication” (1974), establishing the foundational model for reliable, layered internet communication that this article’s examples build on.
  • J. H. Saltzer, D. P. Reed, and D. D. Clark articulated the end-to-end argument in “End-to-End Arguments in System Design” (1984), the design principle that shaped how protocol responsibilities are allocated between network endpoints and the infrastructure between them.
  • Tim Berners-Lee designed HTTP as part of the World Wide Web project at CERN (1989-1991), creating the request-response protocol that became the dominant interaction model for web applications and APIs.
  • Brian Carpenter edited RFC 1958, “Architectural Principles of the Internet” (1996), which codified the IETF’s design philosophy for protocol simplicity, modularity, and the end-to-end principle.
  • Anthropic introduced the Model Context Protocol (MCP) in November 2024 as an open standard for connecting AI agents to external tools and data sources, applying protocol design principles to the agentic domain.
  • Google released the Agent-to-Agent Protocol (A2A) in 2025, defining how AI agents discover capabilities and delegate tasks to each other across organizational boundaries.