AI Agents
05·AI Agents·updated 2026-04-19

Agent Protocols (MCP, A2A, AG-UI)

Watch or read first

TL;DR

Three protocols standardize how agents plug into the world: **MCP** (agent-to-tool, Anthropic 2024), **A2A** (agent-to-agent, Google 2025), and **AG-UI** (agent-to-user, CopilotKit 2025). They are not competing; they are complementary layers of the same stack. Learn all three, then choose which ones fit your app.

The historical problem

Before these protocols, every agent system reinvented three wheels:

  1. Tool integration: every framework had its own format for tools. Porting a GitHub integration from LangChain to CrewAI meant rewriting it.
  2. Agent-to-agent: agents could not talk across frameworks. A LangGraph agent could not hand work to a CrewAI agent.
  3. Agent-to-UI: streaming agent state to a frontend meant custom WebSocket logic, JSON adapters per framework. Migrating from LangGraph to CrewAI meant rewriting the UI.

The fragmentation wasted time and slowed adoption. By 2024-2025, the industry converged on three open protocols, one for each axis.

How it works: the three protocols

1. MCP (Model Context Protocol) - Agent to Tool

Released by Anthropic in November 2024.

  • Standardizes how agents (clients) connect to tools (servers).
  • An MCP server exposes tools, resources, and prompts via JSON-RPC over stdio or HTTP.
  • Any MCP client (Claude Desktop, Claude Code, ChatGPT, Cursor, custom apps) can connect and use any MCP server.
Agent (MCP client) <--JSON-RPC--> MCP server (exposes tools)
                                   /       \
                                  /         \
                            [GitHub API]  [Slack API]
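Concretely, the exchange in the diagram can be sketched as plain JSON-RPC payloads. The field names follow the MCP `tools/call` shape, but the weather tool and its arguments are made up for illustration:

```python
import json

# Client -> server: invoke a tool exposed by the MCP server.
# (JSON-RPC 2.0 envelope; "get_weather" is a hypothetical tool.)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# Server -> client: result wrapped in MCP's content-block format.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "12°C, overcast"}],
        "isError": False,
    },
}

# This serialized form is what actually crosses stdio or HTTP.
wire = json.dumps(request)
```

The transport is deliberately boring: any client that can speak JSON-RPC over stdio or HTTP can drive any server, which is exactly what makes the marketplace of servers possible.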

See [[../06-mcp/README]] for the full deep dive.

2. A2A (Agent-to-Agent Protocol)

Announced by Google in April 2025 and quickly adopted.

  • Standardizes how agents communicate with other agents.
  • Each agent publishes an "AgentCard" (JSON metadata: capabilities, auth, endpoints).
  • Clients discover agents via the card, send tasks, receive structured updates.
  • Agents from different frameworks (LlamaIndex, CrewAI, LangGraph) interoperate.
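A minimal sketch of what such a card might contain. The field names follow the published AgentCard shape, but the endpoint URL and skill entry are hypothetical:

```python
import json

# Illustrative AgentCard: the JSON metadata an A2A agent publishes so
# other agents can discover and call it. Values here are made up.
agent_card = {
    "name": "research-agent",
    "description": "Finds and summarizes sources for a query.",
    "url": "https://agents.example.com/research",  # task endpoint (hypothetical)
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "authentication": {"schemes": ["bearer"]},
    "skills": [
        {"id": "web-research", "description": "Search and summarize the web"}
    ],
}

# A client fetches this card (conventionally at /.well-known/agent.json),
# inspects capabilities and auth, then decides whether to delegate a task.
card_json = json.dumps(agent_card, indent=2)
```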

Daily Dose DS summary:

  • A2A: agents talk to agents
  • MCP: agents talk to tools
  • Complementary. An agent using A2A to collaborate with another agent may also use MCP to access tools.
   Agent A  <--A2A--> Agent B
     |                   |
     MCP                 MCP
     |                   |
   Tools                Tools

Key features:

  • Secure collaboration (auth, token scoping)
  • Task and state management
  • Capability discovery via AgentCard
  • Cross-framework interop

3. AG-UI (Agent-User Interaction Protocol)

Open-source, released by CopilotKit in 2025.

  • Standardizes how backend agents communicate with frontend UIs.
  • Uses Server-Sent Events (SSE) to stream structured JSON events.
  • Each event has a defined payload type.

Example event types:

  • TEXT_MESSAGE_CONTENT - token streaming
  • TOOL_CALL_START / TOOL_CALL_END - show tool execution progress
  • STATE_DELTA - update shared state (code, data, docs) without resending everything
  • AGENT_HANDOFF - pass control between agents

The problem it solves: streaming agent outputs, showing tool progress, handling user interruptions, and syncing large state are all generic needs, yet before AG-UI every team rebuilt them per framework.
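A rough sketch of how a backend might frame these events as SSE. The event names come from the list above; the payload fields and the `sse_frame` helper are illustrative, not the official SDK:

```python
import json

def sse_frame(event_type: str, payload: dict) -> str:
    """Format one agent event as a Server-Sent Events frame."""
    data = json.dumps({"type": event_type, **payload})
    return f"data: {data}\n\n"

# A tool call bracketing some streamed text, as the frontend would see it.
frames = [
    sse_frame("TOOL_CALL_START", {"toolName": "search"}),
    sse_frame("TEXT_MESSAGE_CONTENT", {"delta": "Found 3 sources."}),
    sse_frame("TOOL_CALL_END", {"toolName": "search"}),
]
# A browser EventSource parses each frame back into a structured event
# and dispatches it to the matching UI component.
```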

LangGraph / CrewAI / Mastra --AG-UI events--> React UI (CopilotKit components)

Once your backend speaks AG-UI, you can swap LangGraph for CrewAI without changing the frontend.

The protocol stack (Daily Dose DS)

These three are layers, not alternatives:

  +-----------------------------+
  |       Your UI (React)       |
  +-----------------------------+
              |
           AG-UI           (agent -- user)
              |
  +-----------------------------+
  |     Agentic backend         |
  |   (LangGraph / CrewAI)      |
  +-----------------------------+
         /              \
       A2A            MCP
       |               |
  Other agents     Tools / APIs

CopilotKit sits above all three as the "Agentic Application Framework": a practical layer that hides the protocols.

All three protocols are open-source.

Relevance today (2026)

MCP won the tool-integration battle fast

Released Nov 2024. By mid-2026, thousands of MCP servers exist:

  • Anthropic maintains first-party servers (GitHub, Slack, Google Drive, Postgres)
  • Third-party: Stripe, Linear, Notion, Figma, Zapier, n8n
  • Communities: awesome-mcp-servers lists thousands

OpenAI initially resisted but adopted MCP by late 2025. ChatGPT, Cursor, Windsurf, Claude Code, Claude Desktop all support MCP.

A2A adoption is accelerating

A2A (Google, April 2025) is newer. By 2026 it's in:

  • Major frameworks (LangGraph A2A bridge, CrewAI A2A module)
  • Enterprise platforms building multi-agent systems
  • Vertex AI agent builder

Still earlier in its adoption curve than MCP. Watch this space.

AG-UI is pragmatic, niche but strong

AG-UI from CopilotKit matters for teams shipping agentic UI to real users. Most agentic demos are CLI-based or bolt on a custom UI. When you build a production chat app with streaming thoughts, tool progress, and multi-agent handoffs visible, AG-UI saves weeks.

The key question: do you need all three?

  • Building a CLI tool or backend service: probably only MCP.
  • Building a multi-agent system: add A2A.
  • Building a real user-facing product: add AG-UI.

Most teams start with MCP because their first problem is "my agent needs tools". Add A2A and AG-UI as the system grows.

Does MCP replace function calling?

No. Under the hood, MCP tools still get called via native function calling. MCP is the transport/discovery/auth layer above function calling. See function calling and [[../06-mcp/README]].
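A sketch of that layering, assuming an OpenAI-style function schema on the model side. The `get_weather` tool and the translation helper are made up for illustration:

```python
# An MCP server's tool listing entry (shape per the spec's tools/list result;
# the tool itself is hypothetical).
mcp_tool = {
    "name": "get_weather",
    "description": "Current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def to_function_schema(tool: dict) -> dict:
    """Translate an MCP tool into an OpenAI-style function-calling entry."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            # Both sides speak JSON Schema, so parameters pass straight through.
            "parameters": tool["inputSchema"],
        },
    }

fn = to_function_schema(mcp_tool)
# The model still emits a native function call; MCP only standardized how
# the tool was discovered and how the call is transported and authorized.
```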

The consolidation matters

Before 2024-2025:

  • Each framework was an island
  • Migration meant rewriting everything
  • No marketplace of tools

Now:

  • Tools live in MCP servers once, used by any agent
  • Agents collaborate across frameworks via A2A
  • UIs decouple from backends via AG-UI

This is the plumbing that makes the agent ecosystem commercial-grade.

Critical questions

  • Is MCP secure? (It has auth via tokens. Servers must implement proper scoping. Badly configured MCP servers are a security hole.)
  • Can A2A work without MCP? (Yes. They are complementary but independent. An agent can use A2A without ever touching tools.)
  • Is AG-UI only for React? (Primarily, but SDKs exist for other frameworks. The protocol itself is UI-agnostic, just SSE events.)
  • Why not just use WebSockets for everything? (AG-UI uses SSE because it is simpler, firewall-friendly, and sufficient for one-way streaming.)
  • Should I build my own protocol? (No. Unless you have a very specific reason, adopt MCP/A2A/AG-UI. The network effects are the point.)
  • What about OpenAI's Assistants API? (It's a proprietary alternative. Works well inside OpenAI's ecosystem. Does not interop with MCP/A2A natively, but bridges exist.)

Production pitfalls

MCP

  • Leaking sensitive tool access to untrusted clients. Use scoped tokens.
  • No rate limiting on MCP server -> abuse. Enforce per-client limits.
  • Oversized tool output overflows the context window. Truncate or paginate.
  • Stdio MCP servers crash and the agent hangs. Add timeouts.

A2A

  • Discovery spam. Every agent discovers every other agent. Use namespaces.
  • No framework enforces A2A semantics strictly; bugs propagate.
  • Cross-agent loops (A calls B calls A). Add loop detection.
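One way to add loop detection is a hop trace carried in task metadata. This is an application-level convention, not part of the A2A spec itself:

```python
MAX_HOPS = 5  # cap delegation depth as a backstop

def delegate(task: dict, to_agent: str) -> dict:
    """Refuse to delegate if the target already appears in the task's trace."""
    trace = task.setdefault("trace", [])
    if to_agent in trace:
        raise RuntimeError(f"loop detected: {' -> '.join(trace + [to_agent])}")
    if len(trace) >= MAX_HOPS:
        raise RuntimeError("max delegation depth exceeded")
    trace.append(to_agent)
    return task  # the task would be sent over A2A here

task = delegate({"goal": "summarize"}, "agent-a")
task = delegate(task, "agent-b")
try:
    delegate(task, "agent-a")  # A -> B -> A: rejected
    looped = False
except RuntimeError:
    looped = True
```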

AG-UI

  • SSE reconnection logic matters. Clients that drop connection mid-stream need to resume.
  • Too many STATE_DELTAs overwhelm the UI. Throttle.
  • Large payloads in TEXT_MESSAGE_CONTENT fragment token-by-token. Batch.
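A count-based batcher is one simple fix for the last two pitfalls (time-based flushing works too; this helper is illustrative):

```python
def batch_tokens(tokens, batch_size=8):
    """Join token deltas so the UI gets one event per batch, not per token."""
    buf = []
    for tok in tokens:
        buf.append(tok)
        if len(buf) >= batch_size:
            yield "".join(buf)
            buf = []
    if buf:  # flush any trailing partial batch
        yield "".join(buf)

# 29 single-character "tokens" collapse into 3 TEXT_MESSAGE_CONTENT payloads.
events = list(batch_tokens(list("hello world, this is a stream"), batch_size=10))
```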

Alternatives / Comparisons

| Axis | Protocol | Alternatives |
|---|---|---|
| Tool access | MCP | OpenAI function calling direct, Zapier, LangChain tools |
| Agent-to-agent | A2A | OpenAI Swarm handoffs, custom APIs |
| Agent-to-UI | AG-UI | Custom WebSockets, LangChain streaming, Vercel AI SDK |

MCP is clearly the winner for tool access. A2A is leading but not yet dominant. AG-UI is the most practical for React-based UIs.

Mental parallels (non-AI)

  • HTTP / REST / WebSocket stack: MCP = RPC (like gRPC). A2A = messaging (like Kafka topics between services). AG-UI = SSE for live frontends.
  • USB-C for agents: before USB-C, every device had a different connector. MCP is USB-C for agent-tool connections.
  • Email protocols: SMTP (send) + IMAP (read) + MIME (format). MCP + A2A + AG-UI analogously covers three axes of agent communication.
  • City infrastructure: MCP = plumbing (agents access resources). A2A = mail/phone (agents talk to each other). AG-UI = windows/doors (agents meet the outside world).

Mini-lab

labs/agent-protocols/ (to create):

  1. MCP hands-on: build a simple MCP server (e.g., a custom weather tool) and connect Claude Code to it.
  2. A2A hands-on: take two small agents in different frameworks (LangGraph + CrewAI), expose them via A2A, have them delegate a task to each other.
  3. AG-UI hands-on: build a minimal React UI consuming AG-UI events from a Python backend. Show streaming thoughts and tool progress.
  4. Compose all three: a CrewAI agent running behind AG-UI, using A2A to delegate research to a LangGraph agent, both using MCP for tools.

Stack: uv, mcp Python SDK, langgraph, crewai, copilotkit (React), Next.js.

Further reading

Canonical

Related in this KB

Tools and frameworks

agents · protocols · mcp · a2a · ag-ui · interop · copilotkit