Model Context Protocol (MCP)
What if every AI model could access every tool through the same interface?
Model Context Protocol (MCP) gives AI agents structured access to external tools and data. It originated at Anthropic and is now a Linux Foundation project. Where A2A connects agent to agent, MCP connects agent to tool.
The Problem MCP Solves
Before MCP, every AI integration was bespoke: each model needed custom code for each tool, so N models and M tools meant N x M integrations. MCP reduces this to N + M — each model implements the client once, each tool implements the server once.
| Without MCP | With MCP |
|---|---|
| Custom integration per model per tool | One protocol, universal access |
| Tool vendors lock into one model | Tool works with any MCP client |
| Model can only use hardcoded tools | Model discovers tools at runtime |
| N x M integrations | N + M implementations |
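The integration arithmetic above can be sketched directly. This is illustrative only; the function names are made up for the example:

```python
# Illustrative only: compare bespoke integration cost (N x M) with
# MCP's shared-protocol cost (N + M).
def bespoke(models: int, tools: int) -> int:
    """Every model needs custom glue code for every tool."""
    return models * tools

def with_mcp(models: int, tools: int) -> int:
    """Each model implements the client once; each tool the server once."""
    return models + tools

for n, m in [(3, 5), (10, 50), (100, 1000)]:
    print(f"{n} models x {m} tools: bespoke={bespoke(n, m)}, mcp={with_mcp(n, m)}")
```

The gap widens multiplicatively: at 10 models and 50 tools, 500 custom integrations collapse to 60 implementations.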
Architecture
MCP defines four primitives. Each is a different kind of capability exchanged between client and server:
| Primitive | Direction | What It Does | Example |
|---|---|---|---|
| Resources | Server → Client | Expose data the model can read | Files, database records, API responses |
| Tools | Client → Server | Functions the model can call | Search, compute, deploy, query |
| Prompts | Server → Client | Reusable prompt templates | "Summarize this PR", "Debug this error" |
| Sampling | Server → Client | Request model completions | Server asks the model to reason about data |
Resources are read. Tools are called. Prompts are templates. Sampling is the reverse direction — the server asks the model to think.
Transport
Three transport options, each for a different deployment context:
| Transport | When to Use | How It Works |
|---|---|---|
| stdio | Local tools, CLI integrations | Server runs as a subprocess. Input/output via stdin/stdout. Simplest. |
| SSE | Remote servers, web-based tools | Server-Sent Events over HTTP. Client sends requests, server streams responses. |
| Streamable HTTP | Production APIs, scalable deployments | Stateless HTTP with optional streaming. Replaces SSE for new implementations. |
stdio is for local. SSE is for remote. Streamable HTTP is the future default.
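The stdio transport is simple enough to sketch end to end. A minimal sketch, assuming a hypothetical server binary and newline-delimited JSON-RPC framing over the pipes:

```python
# Sketch of the stdio transport: the client spawns the MCP server as a
# subprocess and exchanges one JSON-RPC message per line over its
# stdin/stdout. "my-mcp-server" is a hypothetical command.
import json
import subprocess

def start_stdio_server(command: list[str]) -> subprocess.Popen:
    """Spawn an MCP server; its stdin/stdout carry the protocol."""
    return subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )

def send(proc: subprocess.Popen, message: dict) -> None:
    """Write one JSON-RPC message, newline-terminated, to the server."""
    proc.stdin.write(json.dumps(message) + "\n")
    proc.stdin.flush()

# Usage (with a real server installed):
#   proc = start_stdio_server(["my-mcp-server"])
#   send(proc, {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {...}})
```

Because the transport is just pipes, the same client code works for any locally installed server; remote deployments are where SSE and Streamable HTTP come in.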
Server Lifecycle
```
CLIENT                          SERVER
  │                               │
  │── initialize ──────────────►  │  Client sends capabilities
  │◄── initialize response ─────  │  Server responds with its capabilities
  │── initialized ─────────────►  │  Handshake complete
  │                               │
  │── tools/list ──────────────►  │  Client discovers available tools
  │◄── tool list ───────────────  │  Server returns tool schemas
  │                               │
  │── tools/call ──────────────►  │  Client invokes a tool
  │◄── tool result ─────────────  │  Server returns structured result
  │                               │
```
The client discovers what the server offers, then calls what it needs. No hardcoded tool lists. The server declares its capabilities at connection time.
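The lifecycle above can be written out as JSON-RPC 2.0 messages. A sketch with illustrative values: the protocol version string, client name, and tool name (`query_database`) are assumptions, not spec constants:

```python
# The MCP lifecycle as JSON-RPC 2.0 messages. Values such as the
# protocolVersion and clientInfo are illustrative placeholders.
import json

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumed version string
        "capabilities": {},               # what this client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# After the handshake, discovery and invocation are ordinary requests:
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "query_database", "arguments": {"sql": "SELECT 1"}},
}

print(json.dumps(initialize, indent=2))
```

Note that the tool name in `tools/call` comes from the `tools/list` response, not from client code — that is what "no hardcoded tool lists" means in practice.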
Where MCP Sits
| | A2A | MCP |
|---|---|---|
| Connects | Agent to agent | Agent to tool |
| Pattern | Peer-to-peer coordination | Client-server tool access |
| Use case | "Research this, then delegate analysis" | "Query this database, read this file" |
| Governance | Linux Foundation | Linux Foundation |
| Discovery | Agent Cards (/.well-known/agent.json) | Tool schemas at connection time |
An agent uses MCP to access data, then A2A to delegate work to other agents, then UCP/AP2 to transact. Same agent, three protocols, one workflow.
Adoption
MCP has moved from proposal to industry standard:
- Clients: Claude, VS Code (Copilot), Cursor, Windsurf, JetBrains IDEs, Sourcegraph
- Servers: GitHub, Postgres, Slack, Google Drive, Brave Search, file system, and hundreds of community servers
- Governance: Linux Foundation (2025), same home as A2A
- Spec: Open (latest specification at spec.modelcontextprotocol.io)
Context
- A2A Protocol — Agent-to-agent coordination (MCP's peer protocol)
- Agent Protocols — The full protocol stack
- Protocols — Algorithms decide the route; protocols enable the handshake
- Smart Contracts — On-chain tools MCP can access
Links
- MCP Specification — The protocol spec
- MCP GitHub — Reference implementations and SDKs
- MCP Server Registry — Community server directory
- A2A vs MCP — Google's comparison guide
Questions
- If every model can access every tool through the same protocol, does the model matter — or does the tool ecosystem become the moat?
- When MCP servers expose both read (resources) and write (tools), what authorization layer prevents an agent from writing when it should only read?
- Does sampling (server asking the model to think) create a feedback loop where the tool shapes the reasoning — and is that a feature or a risk?
- At what point does MCP server discovery need its own protocol, the way A2A has Agent Cards?