Model Context Protocol (MCP)

What if every AI model could access every tool through the same interface?

Model Context Protocol (MCP) gives AI agents structured access to external tools and data. Originated by Anthropic, now a Linux Foundation project. Where A2A connects agent to agent, MCP connects agent to tool.

The Problem MCP Solves

Before MCP, every AI integration was bespoke. Each model needed custom code to access each tool. N models × M tools = N × M integrations. MCP reduces this to N + M — each model implements the client once, each tool implements the server once.

| Without MCP | With MCP |
| --- | --- |
| Custom integration per model per tool | One protocol, universal access |
| Tool vendor locks into one model | Tool works with any MCP client |
| Model can only use hardcoded tools | Model discovers tools at runtime |
| N × M integrations | N + M implementations |
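
The arithmetic behind that last row is worth making concrete. The model and tool counts below are illustrative, not from any survey:

```python
# Illustrative counts -- any N models and M tools show the same pattern.
n_models, m_tools = 5, 20

# Without MCP: one bespoke integration per (model, tool) pair.
without_mcp = n_models * m_tools   # 100 integrations to build and maintain

# With MCP: each model implements the client once,
# each tool implements the server once.
with_mcp = n_models + m_tools      # 25 implementations

print(without_mcp, with_mcp)  # 100 25
```

The gap widens as either side grows: adding a 21st tool costs one server implementation instead of five new integrations.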

Architecture

MCP defines four primitives. Each covers a different kind of interaction between client and server:

| Primitive | Direction | What It Does | Example |
| --- | --- | --- | --- |
| Resources | Server → Client | Expose data the model can read | Files, database records, API responses |
| Tools | Client → Server | Functions the model can call | Search, compute, deploy, query |
| Prompts | Server → Client | Reusable prompt templates | "Summarize this PR", "Debug this error" |
| Sampling | Server → Client | Request model completions | Server asks the model to reason about data |

Resources are read. Tools are called. Prompts are templates. Sampling is the reverse direction — the server asks the model to think.
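
A sketch of what one of these primitives looks like on the wire: a single tool entry as a server might return it from `tools/list`. The field names (`name`, `description`, `inputSchema`) follow the MCP tool schema; the `query_db` tool itself is invented for illustration:

```python
import json

# Hypothetical tool entry, as a server might return it from tools/list.
# inputSchema is plain JSON Schema describing the call arguments.
tool = {
    "name": "query_db",  # invented example tool
    "description": "Run a read-only SQL query against the records database",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "SELECT statement to run"},
        },
        "required": ["sql"],
    },
}

# A client can validate arguments against inputSchema before calling the tool.
print(json.dumps(tool, indent=2))
```

Because the schema travels with the tool, the model needs no hardcoded knowledge of `query_db` — it reads the schema at runtime and constructs a valid call.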

Transport

Three transport options, each for a different deployment context:

| Transport | When to Use | How It Works |
| --- | --- | --- |
| stdio | Local tools, CLI integrations | Server runs as a subprocess. Input/output via stdin/stdout. Simplest. |
| SSE | Remote servers, web-based tools | Server-Sent Events over HTTP. Client sends requests, server streams responses. |
| Streamable HTTP | Production APIs, scalable deployments | Stateless HTTP with optional streaming. Replaces SSE for new implementations. |

stdio is for local. SSE is for remote. Streamable HTTP is the future default.
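
For stdio, the framing is simple enough to sketch by hand: one JSON-RPC message per line over stdin/stdout, with no embedded newlines. A minimal encode/decode pair, not tied to any SDK:

```python
import json

def encode(msg: dict) -> bytes:
    """Frame one JSON-RPC message for stdio: one line, UTF-8, newline-terminated."""
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode(line: bytes) -> dict:
    """Parse one framed line back into a JSON-RPC message."""
    return json.loads(line.decode("utf-8"))

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
wire = encode(request)          # what the client writes to the server's stdin
assert decode(wire) == request  # what the server reads from its stdin
```

The same message shapes travel over SSE and Streamable HTTP; only the framing and connection model change.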

Server Lifecycle

CLIENT                          SERVER
│                              │
│── initialize ──────────────► │ Client sends capabilities
│◄── initialize response ───── │ Server responds with its capabilities
│── initialized ─────────────► │ Handshake complete
│                              │
│── tools/list ──────────────► │ Client discovers available tools
│◄── tool list ─────────────── │ Server returns tool schemas
│                              │
│── tools/call ──────────────► │ Client invokes a tool
│◄── tool result ───────────── │ Server returns structured result
│                              │

The client discovers what the server offers, then calls what it needs. No hardcoded tool lists. The server declares its capabilities at connection time.
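
The handshake above can be sketched as plain JSON-RPC messages. Field names follow the MCP initialize exchange; the protocol version string and the client/server names are placeholders:

```python
# Client opens the session by declaring what it supports.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # placeholder spec revision
        "capabilities": {},               # what the client supports
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Server replies, declaring its own capabilities (tools, resources, prompts...).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

# Client acknowledges with a notification (no id, so no reply expected).
# After this, normal requests like tools/list may flow.
initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}

assert initialize["id"] == response["id"]  # responses are matched to requests by id
```

Everything after the handshake — discovery, tool calls, results — is ordinary request/response traffic scoped by the capabilities each side declared here.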

Where MCP Sits

|  | A2A | MCP |
| --- | --- | --- |
| Connects | Agent to agent | Agent to tool |
| Pattern | Peer-to-peer coordination | Client-server tool access |
| Use case | "Research this, then delegate analysis" | "Query this database, read this file" |
| Governance | Linux Foundation | Linux Foundation |
| Discovery | Agent Cards (/.well-known/agent.json) | Tool schemas at connection time |

An agent uses MCP to access data, then A2A to delegate work to other agents, then UCP/AP2 to transact. Same agent, three protocols, one workflow.

Adoption

MCP has moved from proposal to industry standard:

  • Clients: Claude, VS Code (Copilot), Cursor, Windsurf, JetBrains IDEs, Sourcegraph
  • Servers: GitHub, Postgres, Slack, Google Drive, Brave Search, file system, and hundreds of community servers
  • Governance: Linux Foundation (2025), same home as A2A
  • Spec: Open (latest specification at spec.modelcontextprotocol.io)

Questions

If every model can access every tool through the same protocol, does the model matter — or does the tool ecosystem become the moat?

  • When MCP servers expose both read (resources) and write (tools), what authorization layer prevents an agent from writing when it should only read?
  • Does sampling (server asking the model to think) create a feedback loop where the tool shapes the reasoning — and is that a feature or a risk?
  • At what point does MCP server discovery need its own protocol, the way A2A has Agent Cards?