The Random Walk Blog

2025-06-28

Beyond simple scripts: Building your first agentic MCP with Python

The Model Context Protocol (MCP) is an open, vendor-neutral standard for connecting AI models to external data and tools. In effect, MCP acts like a web API built for LLMs. Developers can define Resources (data endpoints) and Tools (callable functions) that the AI can access during a conversation. For example, an MCP server might expose a database as a resource or a function to query that database as a tool. This makes the AI far more capable and context-aware than a standalone script. As Anthropic puts it, MCP “standardizes how applications provide context to LLMs” – think of it like giving your AI a USB-C port to plug into any data source or tool. Axios notes that MCP “lets users with modest technical skills give…bots the keys to their other digital tools” and that it’s already supported by “OpenAI, Google and Microsoft”, underlining its broad industry backing. In practice, MCP follows a typical client–server design: a host (e.g. a chatbot UI or IDE) runs an LLM and an MCP client, which connects to one or more MCP servers. Each server exposes specific data or actions.

MCP uses a client–server architecture with three main components: a host (running the AI model), one or more clients, and the MCP servers. In practice, the host might be a chat interface (like Claude or ChatGPT) that launches an MCP client to connect with MCP servers. The figure illustrates how an MCP client (middle) connects the host on the left to multiple MCP servers on the right, each server linking to external data sources or tools at the bottom. This design lets the AI interact with diverse data and tools through a standardized interface, much like how a USB-C port allows different devices to connect via a common plug.

In MCP terminology:

  • Resources– read-only data endpoints that load information into the model’s context (analogous to HTTP GET).

  • Tools – executable functions the model can call to perform actions or computations (analogous to HTTP POST).

  • Prompts – reusable conversation templates or instructions guiding the model’s behavior.

By organizing your data and functionality this way, you avoid writing brittle, one-off scripts for each integration. Instead, the LLM can plan and choose which resources or tools to use. In other words, you’re building an agentic AI system. Instead of a fixed script, the AI acts as an agent that autonomously calls these MCP tools to answer the user’s query. For example, one guide describes “Agentic MCP” as giving the AI “a set of tools to fetch information or perform tasks during a conversation”. In practice this means the model can decide at runtime: “I need information from the library database, so I’ll call the find_books_by_author tool with the user’s input.” This is more powerful than a simple script because the agent adapts its actions based on the situation and can leverage many tools in sequence.

Setting up Python and the MCP SDK

To follow along, you’ll need Python 3.8+ and the MCP Python package. Install the official SDK (which includes command-line tools) via pip. For example:

pip install "mcp[cli]"

This provides the mcp command and the Python libraries. You can verify the install by running mcp --help. Now you’re ready to code an MCP server in Python. We’ll use the FastMCP class from the SDK to expose some data and tools.

Example use-case: A Library Search Agent

Let’s build a simple example: imagine a “Library” that has a small books database. We want an AI agent that can answer questions about the collection by querying this database via MCP. Our MCP server will expose:

  • A resource for the database schema or a list of books.

  • Tools for finding books by author or by title.

Below is a complete Python script, library_mcp_server.py. It initializes a SQLite database with a few books and then defines MCP endpoints using decorators. The code should be saved as a file and run with Python.

from mcp.server.fastmcp import FastMCP
import sqlite3
import os

# 1. Initialize a SQLite database if it doesn't exist.
if not os.path.exists("library.db"):
    conn = sqlite3.connect("library.db")
    conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")
    conn.executemany("INSERT INTO books (title, author) VALUES (?, ?)", [
        ("1984", "George Orwell"),
        ("To Kill a Mockingbird", "Harper Lee"),
        ("Brave New World", "Aldous Huxley"),
    ])
    conn.commit()
    conn.close()

# 2. Create an MCP server instance.
mcp = FastMCP("LibraryServer")

# 3. Define a resource endpoint for the database schema (as an example resource).
@mcp.resource("schema://main")
def get_schema() -> str:
    """Return the SQL schema of the database."""
    conn = sqlite3.connect("library.db")
    rows = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
    conn.close()
    return "\n".join(sql for (sql,) in rows if sql)

# 4. Define a resource to list all books.
@mcp.resource("books://all")
def get_all_books() -> str:
    """Return all books in the library."""
    conn = sqlite3.connect("library.db")
    rows = conn.execute("SELECT title, author FROM books").fetchall()
    conn.close()
    return "\n".join(f"{title} by {author}" for title, author in rows)

# 5. Define a tool to find books by author substring.
@mcp.tool()
def find_books_by_author(author: str) -> str:
    """Return book titles by matching author."""
    conn = sqlite3.connect("library.db")
    rows = conn.execute("SELECT title FROM books WHERE author LIKE ?", (f"%{author}%",)).fetchall()
    conn.close()
    if not rows:
        return f"No books found by author containing '{author}'."
    return "\n".join(title for (title,) in rows)

# 6. Define a tool to find books by title substring.
@mcp.tool()
def find_books_by_title(title: str) -> str:
    """Return authors of books matching title."""
    conn = sqlite3.connect("library.db")
    rows = conn.execute("SELECT author FROM books WHERE title LIKE ?", (f"%{title}%",)).fetchall()
    conn.close()
    if not rows:
        return f"No books found with title containing '{title}'."
    return "\n".join(author for (author,) in rows)

# 7. Run the MCP server.
if __name__ == "__main__":
    mcp.run()

Here’s what this script does:

- Step 1: If library.db doesn’t exist, create it and insert some sample books.

- Step 2: Create the FastMCP server named “LibraryServer”.

- Step 3: Register a resource at the URI schema://main that returns the database schema as text. An LLM client can read this resource to learn what tables exist.

- Step 4: Register another resource books://all that returns a list of all books (titles and authors). This simulates giving the AI some context data upfront.

- Step 5: Register a tool find_books_by_author(author) which runs a SQL query for books whose author name contains the input string.

- Step 6: Register a tool find_books_by_title(title) which finds books by title substring.

- Step 7: Finally, mcp.run() starts the server so it can accept connections.
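Both tools lean on SQL LIKE with % wildcards for substring matching. A standalone sketch (plain sqlite3 with an in-memory database, no MCP involved) shows the query pattern:

```python
import sqlite3

# In-memory database mirroring the library schema from the server script.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")
conn.executemany("INSERT INTO books (title, author) VALUES (?, ?)", [
    ("1984", "George Orwell"),
    ("To Kill a Mockingbird", "Harper Lee"),
    ("Brave New World", "Aldous Huxley"),
])

def titles_by_author(author: str) -> list[str]:
    """Return titles whose author name contains the given substring."""
    rows = conn.execute(
        "SELECT title FROM books WHERE author LIKE ?", (f"%{author}%",)
    ).fetchall()
    return [title for (title,) in rows]

print(titles_by_author("Orwell"))   # ['1984']
print(titles_by_author("Unknown"))  # []
```

The % wildcards on both sides make the match a contains-check, so a user can say “Orwell” instead of “George Orwell”.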

Each @mcp.resource or @mcp.tool function becomes an MCP endpoint. These behave like specialized API endpoints for the AI to call. Resources are _read-only_ (they return data to be loaded into the model’s context), and tools perform actions or queries. (This matches the official docs: resources like GET, tools like POST).

Running and testing the MCP server

Save the code above as library_mcp_server.py, then launch it with the mcp dev command provided by the SDK. For example:

mcp dev library_mcp_server.py

This launches the server and opens the MCP Inspector UI, where you can manually call resources and tools to test them (e.g. read the resource books://all, or call the tool find_books_by_author with the argument ‘Orwell’). The SDK docs show this pattern as well. Alternatively, you could install this server into an MCP-capable client (like Claude for Desktop) or write a Python client.

While a full agentic client is beyond this intro, you could quickly test the endpoints with the Python SDK. For example, using a ClientSession with the Streamable HTTP client (or even stdio_client), your code could call await session.call_tool("find_books_by_author", {"author": "Orwell"}) to invoke the tool we defined. In an actual chat scenario, an LLM agent might do this internally: _“The user asked for Orwell, so call find_books_by_author('Orwell') and include the result.”_
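That decide-then-dispatch loop can be mimicked in a few lines of plain Python. This is only a toy sketch, not the MCP client API: a keyword check stands in for the LLM’s decision, a dict dispatch stands in for call_tool, and the tool bodies are stand-ins rather than real queries.

```python
# Toy stand-in for the agent loop: in a real system the LLM chooses the tool;
# here a keyword check fakes that decision and a dict dispatch fakes call_tool.

def find_books_by_author(author: str) -> str:
    return f"[titles by authors matching '{author}']"  # stand-in for the real MCP tool

def find_books_by_title(title: str) -> str:
    return f"[authors of titles matching '{title}']"   # stand-in for the real MCP tool

TOOLS = {
    "find_books_by_author": find_books_by_author,
    "find_books_by_title": find_books_by_title,
}

def plan(query: str) -> tuple[str, dict]:
    """Fake planning step: route 'by <author>' questions to the author tool."""
    last_word = query.rstrip("?.").split()[-1]
    if "by" in query.lower().split():
        return "find_books_by_author", {"author": last_word}
    return "find_books_by_title", {"title": last_word}

tool_name, args = plan("Any books by Orwell?")
print(TOOLS[tool_name](**args))  # [titles by authors matching 'Orwell']
```

In the real thing, the LLM produces the (tool name, arguments) pair and the MCP client performs the dispatch over the protocol; only the shape of the loop is the same.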

Why go beyond scripts?

A key advantage of MCP is that the same server can be used by any compatible AI without rewriting code. Today you might hard-code a Python script to query a database, but with MCP you build an agnostic _“adapter”_ that any model can use. For instance, The Register notes that the MCP server repo already “counts dozens of official integrations… along with more than 200 community and demo servers”. Whether you later plug in your server to Claude, ChatGPT (via plugins), or another LLM, the integration remains standard.

Moreover, writing an _agentic_ solution means your model can orchestrate multiple calls. In our example, a single conversation can naturally use both our tools and resources: the AI could first “read” the schema://main or books://all resource to get context, then decide whether to call find_books_by_author or find_books_by_title based on the query. This kind of dynamic, two-way interaction is what MCP is built for, whereas a simple script would require you to manually manage inputs and outputs. As one guide explains, with MCP the AI effectively has “a set of tools to fetch information or perform tasks during a conversation”.

Next steps and resources

This blog has outlined the basics of MCP and shown a simple Python example. From here, you could extend the example by adding more tools or connecting to real systems (e.g. an MCP server for Slack, Google Drive, or your own REST API – many exist open-source). Check the Model Context Protocol documentation and the Python SDK repo for guides and examples. The official docs describe MCP as exposing data/functions to LLMs in a “secure, standardized way – like a web API for LLM interactions”. By building your own MCP servers, you are essentially creating adapters that let any LLM plug into your data.

In summary, moving beyond scripts means treating your AI system as an agent that can autonomously choose from a toolbox of resources and actions. The code above is a starting point – a tiny “library agent” that can answer questions using an MCP server. We encourage beginners to experiment: try writing an MCP tool that calls a public API, or integrate the Python client session to see how a model might use it. With MCP’s open standard and growing ecosystem, you’re building on a foundation that’s shared by all major AI platforms.
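For the “tool that calls a public API” exercise, the core of such a tool is just a JSON fetch. A minimal sketch using only the standard library follows; the data: URL is a placeholder so the snippet runs without network access, and inside the server you would wrap a function like this with @mcp.tool() and pass a real endpoint.

```python
import json
import urllib.request

def fetch_json(url: str) -> dict:
    """Fetch a URL and parse the response body as JSON.
    In an MCP server, a wrapper around this would be decorated with @mcp.tool()."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Inline data: URL stands in for a real public API endpoint.
print(fetch_json('data:application/json,{"status":"ok"}'))  # {'status': 'ok'}
```

For production use you would add a timeout, error handling, and a docstring describing the tool’s arguments so the model knows how to call it.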

Related Blogs

Langflow: The Next-Gen Visual Framework for Multi Agent AI & RAG Applications

In the ever - evolving landscape of AI development, Langflow emerges as a game changer. It is an open source, Python powered framework designed to simplify the creation of multi agent and retrieval augmented generation (RAG) applications.


I Built an AI Agent From Scratch—Here’s What I Learned

I’ve worked with LangChain. I’ve played with LlamaIndex. They’re great—until they aren’t.


How Can Enterprises Benefit from Generative AI in Data Visualization

It’s New Year’s Eve, and John, a data analyst, is finishing up a fun party with his friends. Feeling tired and eager to relax, he looks forward to unwinding. But as he checks his phone, a message from his manager pops up: “Is the dashboard ready for tomorrow’s sales meeting?” John’s heart sinks. The meeting is in less than 12 hours, and he’s barely started on the dashboard. Without thinking, he quickly types back, “Yes,” hoping he can pull it together somehow. The problem? He’s exhausted, and the thought of combing through a massive 1000-row CSV file to create graphs in Excel or Tableau feels overwhelming. Just when he starts to panic, he remembers his secret weapon: Fortune Cookie, the AI-assistant that can turn data into insightful data visualizations in no time. Relieved, John knows he doesn’t have to break a sweat. Fortune Cookie has him covered, and the dashboard will be ready in no time.


Streamlining File Management with MindFolder’s Intelligent Edge

Brain rot, the 2024 Word of the Year, perfectly encapsulates the overwhelming state of mental fatigue caused by endless information overload—a challenge faced by individuals and businesses alike in today’s fast-paced digital world. At its core, this term highlights the need for streamlined systems that simplify the way we interact with data and files.


Refining and Creating Data Visualizations with LIDA and AI Fortune Cookie

Data visualization and storytelling are critical for making sense of today’s data-rich world. Whether you’re an analyst, a researcher, or a business leader, translating raw data into actionable insights often hinges on effective tools. Two innovative platforms that elevate this process are Microsoft’s LIDA and our RAG-enhanced data visualization platform using gen AI, AI Fortune Cookie. While LIDA specializes in refining and enhancing infographics, Fortune Cookie transforms disparate datasets into cohesive dashboards with the power of natural language prompts. Together, they form a powerful combination for visual storytelling and data-driven decision-making.



Your Random Walk Towards AI Begins Now