🧱 Key Components of MCP: Clients, Tools, Servers & Resources
To understand the power of the Model Context Protocol (MCP), you need to get familiar with its foundational components. MCP is not just a protocol—it’s an architectural framework built around four core concepts:
Clients, Tools, Servers, and Resources.
Together, these components provide the structure that makes modular, agentic AI possible. Let’s explore each one in depth.
🧑‍💻 Clients
A client is typically an LLM-powered agent, chatbot, or orchestration layer that interacts with the MCP server.
Responsibilities:
- Initiate sessions
- Fetch and update context
- Call tools through the server
Examples:
- A chatbot UI built with React that sends requests to the MCP server
- A LangGraph node that acts as a tool-aware, memory-driven LLM client
Why Clients Matter:
Clients are the active agents. They drive the session by interacting with tools and managing context.
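As a rough illustration, a client's job reduces to building well-formed requests for the server. The sketch below shows the request side only; the class name, method names (`initiate_session`, `call_tool`), and payload shapes are illustrative assumptions, not part of any MCP specification.

```python
import itertools
import json

class MCPClient:
    """Hypothetical sketch of an MCP client's request-building side."""

    def __init__(self, server_url: str):
        self.server_url = server_url
        self._ids = itertools.count(1)  # JSON-RPC request IDs

    def _jsonrpc(self, method: str, params: dict) -> dict:
        """Build a JSON-RPC 2.0 request envelope."""
        return {
            "jsonrpc": "2.0",
            "id": next(self._ids),
            "method": method,
            "params": params,
        }

    def initiate_session(self) -> dict:
        return self._jsonrpc("session/initiate", {})

    def call_tool(self, tool_id: str, arguments: dict) -> dict:
        return self._jsonrpc("tool/call", {"tool_id": tool_id, "arguments": arguments})

client = MCPClient("http://localhost:8000")
request = client.call_tool("search_web", {"query": "MCP architecture"})
print(json.dumps(request))
```

In a real client, the envelope would be POSTed to the server and the response fed back into the model's context.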
🧰 Tools
Tools are functions or services exposed to the client via the MCP server.
Responsibilities:
- Provide callable APIs (e.g. `run_python`, `search_web`, `summarize_text`)
- Return structured responses with metadata
Requirements:
- Must expose methods following JSON-RPC 2.0 conventions
- Optionally include schema for inputs and outputs
Example Tool Entry:
```json
{
  "tool_id": "run_python",
  "description": "Execute Python code and return the result.",
  "methods": ["execute", "get_logs"]
}
```
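On the server side, such entries can back a small registry that dispatches JSON-RPC calls to Python callables. This is a toy sketch: the `register_tool`/`dispatch` names and the `<tool_id>.<method>` naming convention are assumptions for illustration, not MCP-defined.

```python
# Hypothetical tool registry: each entry mirrors the example above,
# with method names mapped to Python callables.
TOOLS: dict = {}

def register_tool(tool_id: str, description: str, methods: dict) -> None:
    """Register a tool whose methods are {name: callable}."""
    TOOLS[tool_id] = {"tool_id": tool_id, "description": description, "methods": methods}

def dispatch(request: dict) -> dict:
    """Handle a JSON-RPC 2.0 request whose method is '<tool_id>.<method>'."""
    tool_id, _, method = request["method"].partition(".")
    entry = TOOLS.get(tool_id)
    fn = entry["methods"].get(method) if entry else None
    if fn is None:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": fn(**request.get("params", {}))}

# Register a toy run_python tool matching the entry above (eval is for
# demonstration only; a real tool would sandbox execution).
register_tool("run_python", "Execute Python code and return the result.",
              {"execute": lambda code: {"output": eval(code)}})

resp = dispatch({"jsonrpc": "2.0", "id": 1,
                 "method": "run_python.execute", "params": {"code": "2 + 2"}})
print(resp)  # result contains {'output': 4}
```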
Why Tools Matter:
They extend the capabilities of the model—enabling computation, search, and more without hardcoding those skills into the LLM.
🧠 Resources
Resources are contextual state objects used to store and share memory.
Types of Resources:
- Logs (chronological interaction history)
- Memory (long-term notes, embeddings)
- Scratchpads (temporary computation space)
Stored As:
- JSON objects associated with session IDs
- Addressable via API endpoints (e.g., `/resource/{session_id}/memory`)
Why Resources Matter:
They are the long-term memory and shared blackboard for agentic systems. Agents can read, write, and summarize them across sessions.
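A minimal in-memory sketch of such a store, assuming resources are JSON-like objects keyed by session ID and resource kind (logs, memory, scratchpad). The class and method names are illustrative; a production store would persist to a database and sit behind the `/resource/...` endpoints above.

```python
from collections import defaultdict

class ResourceStore:
    """Toy resource store: {session_id: {kind: value}}."""

    def __init__(self):
        self._store = defaultdict(dict)

    def read(self, session_id: str, kind: str):
        return self._store[session_id].get(kind)

    def write(self, session_id: str, kind: str, value) -> None:
        self._store[session_id][kind] = value

    def append_log(self, session_id: str, entry: dict) -> None:
        # Logs are a chronological list, per the types listed above.
        self._store[session_id].setdefault("logs", []).append(entry)

store = ResourceStore()
store.write("sess-1", "memory", {"notes": ["user prefers Python"]})
store.append_log("sess-1", {"role": "user", "content": "hello"})
print(store.read("sess-1", "memory"))  # {'notes': ['user prefers Python']}
```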
🧭 The MCP Server
The MCP server is the backbone that connects clients, tools, and resources.
Responsibilities:
- Manage authentication and routing
- Store and retrieve context
- Proxy tool calls and return results
Common Implementations:
- FastAPI servers with a `/context`, `/tool`, and `/resource` route structure
- Docker containers deployed in cloud or edge settings
Why Servers Matter:
The server is the middleware brain—handling the plumbing of modular AI interactions.
🔄 How They Work Together
Example Workflow:
1. Client initiates a session and sends a query
2. Server retrieves session context from resources
3. Client selects a tool (e.g. `search_web`) to use
4. Server invokes the tool via JSON-RPC
5. Tool returns result → server updates resource
6. Client receives result and continues reasoning
This is a context loop—the foundation of memory-driven, multimodal, and multi-tool AI workflows.
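The loop above can be simulated end to end in a few lines, with every component stubbed in-process. All names here (`RESOURCES`, `run_turn`, the stub tool) are illustrative assumptions.

```python
# Toy simulation of one pass through the context loop.
RESOURCES = {"sess-1": {"logs": []}}
TOOLS = {"search_web": lambda query: {"results": [f"stub result for {query!r}"]}}

def run_turn(session_id: str, query: str, tool_id: str) -> dict:
    context = RESOURCES[session_id]                               # server retrieves context
    context["logs"].append({"role": "user", "content": query})    # client's query is logged
    result = TOOLS[tool_id](query)                                # server invokes the tool
    context["logs"].append({"tool": tool_id, "result": result})   # server updates resource
    return result                                                 # client continues reasoning

out = run_turn("sess-1", "What is MCP?", "search_web")
print(out["results"][0])
```

Each turn both produces a result for the client and leaves a trace in the resource, which is what lets later turns (or other agents) pick up where this one left off.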
📊 Visual Summary
[Client] ⇄ [Server] ⇄ [Tool]
⇅
[Resources]
- Client: The brain
- Tool: The muscle
- Server: The spine
- Resource: The memory
✅ Final Thoughts
MCP’s architecture separates concerns elegantly:
- Clients reason
- Tools act
- Servers route
- Resources remember
This modularity enables the plug-and-play AI future—where developers can swap models, scale infrastructure, and evolve workflows without rewriting the whole stack.
👉 Up next: “Why MCP Matters: Enhanced Contextual Understanding for LLMs”