Forget juggling specific MCP clients like Windsurf, Cursor, or Claude Desktop. A groundbreaking Python library has arrived, poised to revolutionize how you interact with your MCP servers. Imagine the power of directly integrating MCP communication into your code, leveraging the intelligence of any Large Language Model (LLM) you prefer. This isn’t just an incremental improvement; it’s a paradigm shift.
This innovative library introduces an agent-based approach, unlocking a suite of exciting features while simplifying the setup process. Whether you’re a seasoned coder or just starting your journey, this opens up a world of possibilities for building intelligent and modular applications.
Installation Made Easy
Getting started is surprisingly straightforward, especially if you have some basic coding familiarity. Being Python-based, the initial step involves ensuring Python is installed on your system.
Next, you’ll create and activate a virtual environment – a best practice for isolating project dependencies. Don’t worry if these steps sound unfamiliar; you can even ask ChatGPT for the exact commands for your operating system (Windows or macOS), which we’ve also conveniently included below:
Windows:

```bash
python -m venv venv
.\venv\Scripts\activate
```

macOS:

```bash
python3 -m venv venv
source venv/bin/activate
```
With your virtual environment active, installing the library itself is a breeze using `pip`. If you’re using Python 3 (which you likely are – you can check with `python --version`), make sure to use `pip3` instead of `pip`:

```bash
pip3 install mcp-client
```
Depending on the LLM provider you intend to use, you might need to install additional libraries:
- OpenAI: `pip3 install langchain-openai`
- Anthropic: `pip3 install langchain-anthropic`
- Other Providers (Groq, Llama, etc.): Refer to the library’s documentation for specific installation instructions. A short sketch of swapping providers follows below.
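Switching providers typically only changes the chat-model import and constructor; the rest of the agent setup stays the same. Here’s a minimal sketch using Anthropic via `langchain-anthropic` (this assumes an `ANTHROPIC_API_KEY` in your `.env`; the model name is illustrative):

```python
# Minimal provider-swap sketch: assumes `pip3 install langchain-anthropic`
# and ANTHROPIC_API_KEY set in your environment; the model name is illustrative.
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
# ...the rest of the agent setup shown later in this article is unchanged.
```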
Vibe Coding: Integrating LLMs with Your MCP Servers
Now for the exciting part – connecting your LLM to your MCP server. Let’s walk through a basic example using OpenAI and an Airbnb MCP server.
- Open Your Project: Navigate to your project directory in your terminal.
- Launch Cursor (Optional but Recommended): For a smoother development experience, you can open the directory directly in Cursor using the command `cursor .`
- Create a `.env` File: In your project root, create a new file named `.env`. This file will store your API keys securely. Add the following line, replacing `YOUR_OPENAI_API_KEY` with your actual OpenAI API key: `OPENAI_API_KEY=YOUR_OPENAI_API_KEY`
- Create Your Python Script (e.g., `main.py`): Create a new Python file (e.g., `main.py`) and paste the following code:

```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.prompts import PromptTemplate
from mcp_client.client import MCPClient
from mcp_client.config import MCPConfig

load_dotenv()  # Load the API key from .env

# Load the MCP server configuration and create a client
mcp_config = MCPConfig.from_yaml("airbnb_mcp.yaml")
mcp_client = MCPClient(mcp_config)

# Define the LLM
llm = ChatOpenAI(model_name="gpt-3.5-turbo-1106")

# Define the prompt that guides the agent
prompt_template = PromptTemplate.from_template(
    "Find Airbnb listings with {preferences}."
)

# Create the agent, exposing the MCP client as a tool
agent = initialize_agent(
    tools=[mcp_client],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    max_iterations=5,
    verbose=True,
)

# Render the prompt and run the agent
result = agent.run(prompt_template.format(preferences="a pool and good ratings"))
print(result)
```
- Create the MCP Configuration File (`airbnb_mcp.yaml`): Create a YAML file named `airbnb_mcp.yaml` with your Airbnb MCP server details. A basic example might look like this:

```yaml
name: airbnb
url: "YOUR_AIRBNB_MCP_SERVER_URL"
# Add any necessary authentication details here
```
Note: Replace `"YOUR_AIRBNB_MCP_SERVER_URL"` with the actual URL of your Airbnb MCP server.
- Breaking Down the Code:
- Imports: We import the necessary modules from the MCP client library, Langchain (for LLM integration), and `dotenv` for managing environment variables.
- `load_dotenv()`: Loads the API key from your `.env` file.
- `MCPConfig.from_yaml()` and `MCPClient()`: These lines load the configuration for your Airbnb MCP server from the `airbnb_mcp.yaml` file and create an MCP client instance.
- `ChatOpenAI()`: Initializes the OpenAI LLM you want to use.
- `PromptTemplate`: Defines the prompt that guides the LLM’s interaction with the MCP server.
- `initialize_agent()`: This crucial step creates an agent that connects the LLM with the MCP client (acting as a “tool”). We specify the agent type and the maximum number of steps it can take.
- `agent.run()`: Executes the agent with the rendered prompt containing your preferences.
- `print(result)`: Displays the output from the MCP server, processed by the LLM.
- Run the Script: Execute your Python script from the terminal:

```bash
python3 main.py
```
You’ll see the agent in action, interacting with your Airbnb MCP server based on your preferences (in this case, listings with a pool and good ratings) and providing you with the relevant information.
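Once the agent is wired up, trying different searches is just a matter of re-rendering the prompt. Here’s a small illustrative sketch reusing the `agent` and `prompt_template` objects from `main.py` (the preference strings are arbitrary examples):

```python
# Illustrative reuse of the agent built in main.py with different queries
for prefs in ["a hot tub and mountain views", "pet-friendly with fast Wi-Fi"]:
    print(agent.run(prompt_template.format(preferences=prefs)))
```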
Endless Possibilities
This basic example just scratches the surface of what’s possible. By combining the power of LLMs with direct MCP server communication, you can build incredibly sophisticated applications, such as:
- Autonomous Agents: Extend the concept to create fully autonomous agents that can interact with various MCP services (such as a WhatsApp MCP server) to perform complex tasks.
- Personalized Experiences: Tailor MCP interactions based on user preferences and natural language input.
- Intelligent Automation: Automate workflows involving multiple MCP servers, leveraging the LLM’s reasoning capabilities (see the sketch after this list).
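As a rough illustration of that last idea, here’s a hedged sketch of one agent driving two MCP servers at once, reusing the `llm` and imports from `main.py`. The `whatsapp_mcp.yaml` file is an assumed config, analogous to `airbnb_mcp.yaml`, not something shipped with the library:

```python
# Hypothetical sketch: one agent, two MCP servers as tools.
# `whatsapp_mcp.yaml` is an assumed config file, analogous to airbnb_mcp.yaml.
airbnb_client = MCPClient(MCPConfig.from_yaml("airbnb_mcp.yaml"))
whatsapp_client = MCPClient(MCPConfig.from_yaml("whatsapp_mcp.yaml"))

automation_agent = initialize_agent(
    tools=[airbnb_client, whatsapp_client],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

automation_agent.run(
    "Find a well-rated Airbnb with a pool and message the top result to my travel group."
)
```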
Level Up Your Workflow with Context
To ensure your LLM (especially when using tools like Cursor for code generation) understands the specifics of this new MCP client library, you can provide it with the necessary context.
Option 1: Adding Documentation Context to Cursor:
- Navigate to the “Features” section in Cursor.
- Go to “Docs” and add a new document.
- In the “Link” field, paste the link to the `README.md` file of the MCP client library’s GitHub repository. This file typically contains comprehensive documentation.
- Cursor will index this documentation and use it as context when you ask it questions or request code generation related to the library. You can then reference it in the Cursor chat with `@docs` followed by the name you gave the document (e.g., `@docs MCP use docs`).
Option 2: Converting the Repository to an LLM-Ingestible Format:
- In the GitHub repository URL, replace `hub` with `ingest` (turning `github.com/...` into `gitingest.com/...`).
- This opens a page that converts the entire repository into readable text, which you can then feed into various LLMs for understanding and Q&A.
Explore Further
The possibilities with this new MCP library are truly exciting. The GitHub repository contains more example use cases, such as integrating with Playwright and the Blender MCP server. The framework also boasts features like:
- HTTP Connection Support: Connect to MCP servers running on `localhost`.
- Multi-Server Support: Define and interact with multiple MCP servers within a single configuration file.
- Dynamic Server Selection: Let the agent intelligently choose the appropriate MCP server for the task by setting `use_service_manager` to `True` (see the sketch after this list).
- Tool Control: Precisely manage the tools (MCP server interactions) the agent has access to.
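To make the last two features concrete, here’s a hedged sketch of how a multi-server setup with dynamic selection might be wired. The `multi_server.yaml` file name and placing `use_service_manager` on the client constructor are assumptions for illustration; check the repository’s README for the exact API:

```python
# Hypothetical sketch: multi-server config with dynamic server selection.
# `multi_server.yaml` is an assumed file defining several servers; whether
# `use_service_manager` belongs on the constructor is an assumption, so
# consult the library's documentation for the exact placement.
from mcp_client.client import MCPClient
from mcp_client.config import MCPConfig

config = MCPConfig.from_yaml("multi_server.yaml")
client = MCPClient(config, use_service_manager=True)  # agent picks the right server per task
```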
This library marks a significant step forward in simplifying and enhancing how we interact with MCP servers. By bridging the gap between LLM intelligence and direct MCP communication, it empowers developers to create innovative and powerful applications. Don’t just take our word for it – dive into the repository, explore the examples, and start building your own amazing creations!
If you found this introduction helpful, consider supporting the development through the donation link provided. And don’t forget to subscribe for more exciting tech insights!