What is MCP?
MCP (Model Context Protocol) is an open standard that defines how AI applications connect to external data sources and tools. Think of it as USB-C for AI: instead of every application building its own custom integration with every tool, MCP provides a single protocol that any compliant client and server can speak.
Before MCP, every AI-powered IDE, chat interface, or workflow tool had to build bespoke connectors for every external resource it wanted to access — filesystem, databases, APIs, code interpreters. This was duplicated effort and a maintenance nightmare. MCP solves this with a shared protocol layer.
Governed by the Linux Foundation (not proprietary to Anthropic), MCP is a community standard with official SDKs in 10 languages: TypeScript, Python, Java, Kotlin, C#, Go, PHP, Ruby, Rust, and Swift.
Architecture
MCP follows a client-server model with three roles:
- MCP Host: the application that wants to use AI + external tools (e.g., an IDE, a chat app)
- MCP Client: the component inside the host that speaks the MCP protocol
- MCP Server: an independent process exposing specific capabilities (filesystem, database, GitHub API, etc.)
┌─────────────────────────┐
│ Host (e.g. IDE)         │
│  ┌───────────────────┐  │
│  │ LLM               │  │
│  │ MCP Client        │◄─┼──► MCP Server A (filesystem)
│  └───────────────────┘  │◄──► MCP Server B (GitHub)
└─────────────────────────┘◄──► MCP Server C (database)
The LLM never directly calls external tools. It asks the MCP Client, which routes the request to the right server. This separation keeps the model isolated from transport details.
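On the wire, that routing happens over JSON-RPC 2.0, the message format MCP is built on. A minimal sketch of what a tool invocation looks like as a message (the method name follows the MCP spec; the tool name and arguments are hypothetical):

```python
import json

# JSON-RPC 2.0 envelope for an MCP tool invocation. "tools/call" is the
# spec's method name; "read_file" and its arguments are example values.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "src/main.py"},
    },
}

# The client serializes this and sends it over stdio or HTTP to the server.
wire = json.dumps(request)
decoded = json.loads(wire)
```

The model never sees this envelope; it emits a structured tool call, and the client wraps it into the protocol message.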
The three primitives
Every MCP server exposes up to three types of capabilities:
Resources — read-only data the model can access
- Files, database records, API responses
- The model can browse and read, but not modify
- Example:
file://project/src/main.py
Tools — functions the model can call to take action
- Write to a file, run a shell command, send an API request
- These have side effects — they change the world
- Example:
run_tests(), create_github_issue()
Prompts — reusable, parameterized prompt templates
- Predefined workflows the user can invoke
- Example: a "code review" prompt template that automatically loads the diff + runs relevant tests
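To make the split concrete, here is a toy in-memory model of the three primitives in plain Python. This is not the real SDK, just an illustration of the contract: resources are read-only lookups, tools are callables that may have side effects, prompts are parameterized templates (all names and values are illustrative):

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToyServer:
    """Illustrative stand-in for an MCP server, not the real SDK."""
    resources: dict[str, str] = field(default_factory=dict)             # uri -> data
    tools: dict[str, Callable[..., Any]] = field(default_factory=dict)  # name -> fn
    prompts: dict[str, str] = field(default_factory=dict)               # name -> template

    def read_resource(self, uri: str) -> str:
        # Resources: read-only, no side effects.
        return self.resources[uri]

    def call_tool(self, name: str, **kwargs: Any) -> Any:
        # Tools: may change external state.
        return self.tools[name](**kwargs)

    def render_prompt(self, name: str, **params: str) -> str:
        # Prompts: reusable templates filled with user-supplied parameters.
        return self.prompts[name].format(**params)

server = ToyServer()
server.resources["file://project/src/main.py"] = "print('hi')"
server.tools["run_tests"] = lambda: "2 passed"
server.prompts["code_review"] = "Review the latest diff in {repo}."
```

The real protocol adds discovery, schemas, and transport on top, but the division of responsibility is exactly this.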
Why it matters
For developers building AI features
Before MCP, adding "AI that can read your codebase" to your product meant writing a custom file reader, chunker, search index, and wiring it all to your model. With MCP, you expose a filesystem MCP server and any compliant AI client immediately has access — no custom integration.
For the agent ecosystem
The arXiv study "How are AI agents used? Evidence from 177,000 MCP tools" (2026) analyzed 177,436 real-world MCP tools created between November 2024 and February 2026. Key findings:
- 67% of all tools are for software development (90% of downloads)
- Action tools grew from 27% to 65% of the ecosystem over 16 months — agents are increasingly modifying external state, not just reading it
- Most action tools handle medium-stakes tasks (file editing), but high-stakes tasks (financial transactions) are emerging
This tells you where the ecosystem is right now: primarily dev tooling, but moving fast toward general-purpose agents with real-world consequences.
For AI governance
The same study notes that monitoring at the tool layer is more tractable for regulation than monitoring at the model output layer. If you want to understand what AI agents are actually doing in the world, MCP's explicit primitives (resources/tools/prompts) give you clear audit points.
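Because every action passes through an explicit tools/call boundary, a host can log at exactly that point. A hedged sketch of such a tool-layer audit wrapper (the `audited` helper and the tool names are hypothetical, not part of any SDK):

```python
import time
from typing import Any, Callable

def audited(name: str, fn: Callable[..., Any], log: list) -> Callable[..., Any]:
    """Wrap a tool so every invocation is recorded before it runs."""
    def wrapper(**kwargs: Any) -> Any:
        log.append({"ts": time.time(), "tool": name, "args": kwargs})
        return fn(**kwargs)
    return wrapper

audit_log: list[dict] = []
# Hypothetical high-stakes tool wrapped at the tools/call boundary.
delete_file = audited("delete_file", lambda path: f"deleted {path}", audit_log)
result = delete_file(path="tmp.txt")
```

No model internals are needed for this kind of monitoring; the audit trail falls out of the protocol's structure.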
In practice
Setting up a minimal MCP server with the official Python SDK, sketched here using its FastMCP helper (check the SDK docs for the exact current API):

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("my-server")

    @mcp.tool()
    def read_file(path: str) -> str:
        """Read a file from the filesystem."""
        with open(path) as f:
            return f.read()

    @mcp.tool()
    def write_file(path: str, content: str) -> str:
        """Write content to a file."""
        with open(path, "w") as f:
            f.write(content)
        return f"Written to {path}"

    if __name__ == "__main__":
        mcp.run()
The MCP client (your AI host) discovers these tools automatically via the protocol handshake. The LLM receives their descriptions and can call them by name.
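Discovery works because each server returns machine-readable tool descriptors: a name, a natural-language description, and a JSON Schema for the arguments. A sketch of roughly what the client sees for a file-reading tool (the field names follow the MCP spec; the values are illustrative):

```python
import json

# Approximate shape of one entry in a tools/list response. The exact
# fields follow the MCP spec; the concrete values here are examples.
descriptor = {
    "name": "read_file",
    "description": "Read a file from the filesystem.",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

# The LLM is shown the name and description, and the schema tells the
# client how to validate the arguments the model produces.
serialized = json.dumps(descriptor)
```

This is why no custom integration code is needed per tool: the descriptor is the integration.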
The MCP ecosystem
The community registry at modelcontextprotocol.io lists hundreds of production MCP servers. Common categories:
- Developer tools: filesystem, Git, GitHub, terminal, code execution
- Data sources: databases (SQLite, PostgreSQL), APIs (Slack, Notion, Google Drive)
- Web: browser automation, web scraping, search
- Observability: logs, metrics, traces
The MCP Inspector (maintained in the same GitHub organization as the protocol) lets you interactively test and debug any MCP server before wiring it to a real AI client.