Model Context Protocol (MCP) is a standardized approach for structuring context information when working with Large Language Models (LLMs). It was introduced by Anthropic, but it is an open protocol that anyone can implement.
Yeah, great. But what does that even mean?
The problem that MCP is trying to solve is one of getting access to local/specialist information. When using Claude or ChatGPT etc., the model has access to its built-in knowledge from training, but this only contains public information and has a cut-off date from when the model was trained. To get around this, some tools have integrated web search that can add extra knowledge and/or the ability to import documents. But what if you want to talk to a database? Or files on disk? Or a knowledge base? Or …
There’s an infinite variety of random data sources that people might want to access, and it’s not possible for any provider to implement them all. Even with a large budget, some may be behind firewalls, on local disk or just legacy and obscure.
MCP gives anyone a way to create a custom tool that a model can call to get more information or data. That’s it (*). To get an idea of what’s possible, it’s worth skimming the large list of examples in the official GitHub repo.
Conceptually, it’s simple. And, as we’ll see, it’s also pretty simple to implement.
The Simplest Possible Python Implementation
So how do we write one of these things? As a toy example, we’ll implement a server that allows a model to get a directory listing from the local machine.
We’ll use the official Python library, and the complete implementation is ~15 lines:
import subprocess

from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Code Directory Lister")


@mcp.tool()
def ls(path: str) -> str:
    """List files in the specified directory"""
    result = subprocess.run(
        ["ls", path],
        capture_output=True,
        text=True,
        check=True
    )
    return result.stdout.strip()
“But Arthur”, you say, “the library must be doing lots of work with that decorator”. And yes, it is. But as we’ll see, the basics of what it is doing aren’t complex. The library is doing the hard work of setting up a callable endpoint, but what passes back and forth to that endpoint is pretty simple.
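For a rough mental model of what that decorator is up to (this is a simplified sketch, not the real FastMCP internals), it is essentially registering the function along with its docstring and type hints, so the server can later describe the tool and dispatch calls to it:

import inspect

# Toy registry: the real library builds a proper JSON schema and hooks the
# registered functions up to the protocol, but the core idea is the same.
TOOLS = {}

def tool():
    def register(fn):
        TOOLS[fn.__name__] = {
            "fn": fn,
            "description": inspect.getdoc(fn) or "",
            "parameters": {
                name: param.annotation.__name__
                for name, param in inspect.signature(fn).parameters.items()
            },
        }
        return fn
    return register

@tool()
def ls(path: str) -> str:
    """List files in the specified directory"""
    ...

print(TOOLS["ls"]["description"])   # -> List files in the specified directory
print(TOOLS["ls"]["parameters"])    # -> {'path': 'str'}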
Next, we need to integrate this with the Claude desktop app. That’s also easy:
$ mcp install src/mcp_server.py
And again, it’s not doing much at all. It writes json config to ~/Library/Application Support/Claude/claude_desktop_config.json, which tells the Claude app how to run the MCP server. And that really is it this time:
{
"mcpServers": {
"Code Directory Lister": {
"command": "uv",
"args": [
"run",
"--with",
"mcp[cli]",
"mcp",
"run",
"/Users/arthur/code/MCP/simple_mcp_example/src/mcp_server.py"
]
}
}
}
So if this has worked, you should be able to install the code with uv, get it integrated with the Desktop app, and then ask Claude questions like “What are the files in /Users?”
So what does all this actually do?
To more fully understand what’s going on, let’s fire up an inspector and look at the actual traffic:
$ mcp dev src/mcp_server.py
Now we can connect to http://localhost:5173/#tools and trace through to see how the protocol works.
First the client asks the server to list the tools available:
{
"method": "tools/list",
"params": {}
}
and the server responds, listing what it can do:
{
"tools": [
{
"name": "ls",
"description": "List files in the specified directory",
"inputSchema": {
"type": "object",
"properties": { "path": { "title": "Path", "type": "string" } },
"required": ["path"],
"title": "lsArguments"
}
}
]
}
The text description is interpreted by the LLM, and the tool will be called whenever the model decides it might be able to help answer a question.
The call is, no shock, just more json:
{
"method": "tools/call",
"params": {
"name": "ls",
"arguments": {
"path": "/Users"
},
"_meta": {
"progressToken": 0
}
}
}
And hopefully we get the result:
{
"content": [
{
"type": "text",
"text": "Shared\narthur"
}
],
"isError": false
}
All of which is a long and overly detailed way of saying that MCP is a pretty simple protocol based on passing json messages back and forth in a defined format.
(*) This is a bit of a white lie. It can also do more. But we’ll ignore that for now.