In this post we’ll walk through setting up a simple Model Context Protocol (MCP) server. MCP is, for now, the de facto standard for communication between LLMs and developer tools. You can read our deeper developer primer on MCP for more details, but this post doesn’t assume that knowledge.
I’m going to assume you have Claude Code installed, though I’m only using it as an LLM that happens to live in the terminal, which makes it easy to experiment with. You can still follow along regardless.
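To preview where we’re headed: an MCP server speaks JSON-RPC 2.0, typically over stdio, answering a small set of methods such as `initialize`, `tools/list`, and `tools/call`. The sketch below hand-rolls that wire format in plain Python purely for illustration; in a real project you’d use the official MCP SDK instead, and the `say_hello` tool name and `demo-server` identifier are made up for this example.

```python
import json
import sys


def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request to a handler and build the response."""
    method = request.get("method")
    if method == "initialize":
        # Advertise what the server supports and identify itself.
        result = {
            "protocolVersion": "2024-11-05",
            "capabilities": {"tools": {}},
            "serverInfo": {"name": "demo-server", "version": "0.1.0"},
        }
    elif method == "tools/list":
        # Describe the tools the LLM is allowed to call.
        result = {
            "tools": [
                {
                    "name": "say_hello",  # illustrative tool, not part of MCP
                    "description": "Greets the caller by name.",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"name": {"type": "string"}},
                        "required": ["name"],
                    },
                }
            ]
        }
    elif method == "tools/call":
        # Run the requested tool with the arguments the client supplied.
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": f"Hello, {args['name']}!"}]}
    else:
        return {
            "jsonrpc": "2.0",
            "id": request.get("id"),
            "error": {"code": -32601, "message": f"Unknown method: {method}"},
        }
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}


def main() -> None:
    # Read newline-delimited JSON-RPC requests on stdin, reply on stdout.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

Calling `main()` turns this into a long-running stdio server a client can spawn as a subprocess; the point here is just that the protocol surface is small enough to see in one screen.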
Why MCP?
The basic idea is to keep the connection between the AI…