WebSocket server for running and managing Model Context Protocol (MCP) servers, with process control and bidirectional message forwarding
MCP Server Runner is a versatile WebSocket server implementation designed to facilitate interactions between WebSocket clients and Model Context Protocol (MCP) servers. It acts as an intermediary, enabling seamless communication and leveraging the MCP protocol suite to support a wide range of AI applications such as Claude Desktop, Continue, Cursor, and others.
The core capabilities of MCP Server Runner are centered around efficient WebSocket server management, bidirectional communication between clients and servers, process handling, graceful shutdown protocols, and comprehensive error logging. This application is built to ensure robust operation in both development and production environments, offering a solid foundation for developers building AI applications.
Environment variables play a crucial role in configuring MCP Server Runner:
PROGRAM= # Path to the MCP server executable (required if no config file)
ARGS= # Comma-separated list of arguments for the MCP server
HOST=0.0.0.0 # Host address to bind to (default: 0.0.0.0)
PORT=8080 # Port to listen on (default: 8080)
CONFIG_FILE= # Path to JSON configuration file
These variables allow for flexible and fine-grained control over the behavior of the WebSocket server and MCP servers it manages.
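As a rough sketch of how these variables fit together (in Python for illustration only; the runner itself is written in Rust, and this is not its actual code), resolution with defaults might look like this:

```python
import os

def load_settings(environ=os.environ):
    """Resolve runner settings from environment variables (illustrative sketch)."""
    program = environ.get("PROGRAM")  # required unless a config file is used
    # ARGS is a comma-separated list, e.g. "-y,@modelcontextprotocol/server-github"
    args = environ["ARGS"].split(",") if environ.get("ARGS") else []
    host = environ.get("HOST", "0.0.0.0")  # default host
    port = int(environ.get("PORT", "8080"))  # default port
    return {"program": program, "args": args, "host": host, "port": port}

settings = load_settings({"PROGRAM": "npx", "ARGS": "-y,@modelcontextprotocol/server-github"})
```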
Alternatively, you can provide a JSON configuration file that defines multiple server configurations and selects one as the default:
{
"mcpServers": {
"[server-name]": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-[name]"],
"env": {
"API_KEY": "your-api-key"
}
}
},
"defaultServer": "[server-name]",
"host": "0.0.0.0",
"port": 8080
}
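To illustrate how such a configuration resolves to a concrete command line, here is a small Python sketch (the server name and API key are placeholders, and the parsing logic is illustrative, not the runner's actual implementation):

```python
import json

config_text = """
{
  "mcpServers": {
    "server-github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"API_KEY": "your-api-key"}
    }
  },
  "defaultServer": "server-github",
  "host": "0.0.0.0",
  "port": 8080
}
"""

config = json.loads(config_text)
# Look up the entry named by "defaultServer" and build its command line
default = config["mcpServers"][config["defaultServer"]]
command_line = [default["command"], *default["args"]]
```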
When both configuration sources are present, they are resolved in a specific order of precedence:
1. CONFIG_FILE (JSON configuration file)
2. Environment variables (PROGRAM, ARGS, etc.)
MCP Server Runner adheres to the Model Context Protocol architecture, which is designed to enable scalable and flexible connections between WebSocket clients and AI application servers.
graph TD
A[AI Application] -->|MCP Client| B[MCP Protocol]
B --> C[MCP Server]
C --> D[Data Source/Tool]
style A fill:#e1f5fe
style C fill:#f3e5f5
style D fill:#e8f5e8
graph TD
A[WebSocket Client] -->|JSON messages| B[MCP Server Runner]
B -->|stdin/stdout| C[MCP Server Process]
C -->|Responses| B
B -->|JSON messages| A
style A fill:#e1f5fe
style B fill:#f3e5f5
style C fill:#e8f5e8
To get started with MCP Server Runner, follow these steps:
Install Rust 1.70 or Higher: Ensure that a sufficiently recent Rust toolchain is installed on your system.
Clone the Repository:
git clone https://github.com/ModelContextProtocol/mcp-server-runner.git
cd mcp-server-runner
Build and Run the Application with Environment Variables:
export PROGRAM=npx
export ARGS=-y,@modelcontextprotocol/server-github
export PORT=8080
cargo run
Using a Configuration File: pass the config file path as a command-line argument, or set the CONFIG_FILE environment variable.
# Either specify the config file as an argument
cargo run config.json
# Or use the CONFIG_FILE environment variable
CONFIG_FILE=config.json cargo run
Connect to the WebSocket Server:
const ws = new WebSocket("ws://localhost:8080");
Imagine a scenario where an AI application needs real-time data analysis from multiple sources. By integrating MCP Server Runner, the application can connect to various data providers and perform analyses on-the-fly through the established protocols.
import json
import websocket  # from the websocket-client package

url = "ws://localhost:8080"
ws = websocket.WebSocket()
ws.connect(url)

# Send a request for real-time data analysis
# (the payload fields below are illustrative, not a fixed schema)
request_data = {"operation": "analysis", "data_source_ids": [1, 2]}
ws.send(json.dumps(request_data))

# Receive and print the results
response = ws.recv()
print(response)
ws.close()
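Note that MCP messages on the wire are JSON-RPC 2.0, so a more protocol-accurate first message is an initialize request. A sketch of constructing one (the field values, client name, and protocol version string are illustrative, not an authoritative handshake):

```python
import json

# MCP framing follows JSON-RPC 2.0; an initialize request might look
# roughly like this (values below are placeholders for illustration)
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
payload = json.dumps(initialize_request)
```

The resulting `payload` string is what would be sent over the WebSocket connection with `ws.send(payload)`.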
For model training and evaluation tasks, MCP Server Runner acts as a central hub, enabling seamless interactions between different tools. It can manage multiple training processes across various datasets, ensuring efficient data flow and reduced dependency on manual interventions.
import os
import subprocess
from threading import Thread

# Launch an MCP server instance for model training
def train_model(config):
    # Merge with the current environment so PATH (and npx) stays available
    env = {**os.environ, "CONFIG": config}
    process = subprocess.Popen(
        ["npx", "-y", "@modelcontextprotocol/server-training"], env=env
    )
    try:
        # Wait for the training process to finish, then read its output
        process.wait()
        with open("training_output.txt", "r") as file:
            print(file.read())
    finally:
        process.terminate()

# Start a thread for the training process
Thread(target=train_model, args=("config.json",)).start()
MCP Server Runner is designed to work seamlessly with various clients. The following table outlines compatibility and features:
| MCP Client | Resources | Tools | Prompts | Status |
|---|---|---|---|---|
| Claude Desktop | ✅ | ✅ | ✅ | Full Support |
| Continue | ✅ | ✅ | ✅ | Full Support |
| Cursor | ❌ | ✅ | ❌ | Tools Only |
MCP Server Runner has been tested on multiple platforms and configurations, ensuring stability across various environments.
To enhance security, you can implement authentication mechanisms using environment variables or custom configurations. For example:
AUTH_USER=youruser
AUTH_PASSWORD=yourpassword
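Assuming the credentials are checked via an HTTP Basic Authorization header during the WebSocket handshake (an assumption, since the validation mechanism depends on your configuration), a client could construct the header like this:

```python
import base64

# Build an HTTP Basic Authorization header from the configured
# credentials (illustrative; how the server validates it is up to
# your custom configuration)
user, password = "youruser", "yourpassword"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
auth_header = f"Authorization: Basic {token}"
```

With the websocket-client package, the header can then be supplied during the handshake, e.g. `ws.connect(url, header=[auth_header])`.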
MCP Server Runner includes comprehensive logging features that ensure detailed logs for debugging purposes. Any errors encountered during the runtime are logged and managed gracefully.
How do I integrate a new data source or tool? Configure the server to communicate with your chosen dataset or tool using the specified protocol.
Which clients are supported? Claude Desktop, Continue, and Cursor are fully supported; other clients may have limited features.
Can it handle real-time data streams? Yes, MCP Server Runner supports real-time data streams and can integrate such functionality into your application.
How many clients can connect at once? Currently, only one client connection is supported at a time due to limitations in the current implementation.
Is SSL/TLS supported? There is no built-in SSL/TLS support; use a reverse proxy such as Nginx or Traefik to secure the connection.
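As a sketch, an Nginx reverse-proxy block that terminates TLS and forwards WebSocket traffic to the runner might look like the following (the domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Headers required for the WebSocket upgrade to pass through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Clients would then connect with `wss://example.com` instead of a plain `ws://` URL.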
To contribute, create a feature branch, commit your changes, and push the branch for review:
git checkout -b feature/amazing-feature
git commit -m 'Add amazing feature'
git push origin feature/amazing-feature
Explore the official Model Context Protocol documentation and specification to learn more about MCP and the broader ecosystem of servers and clients.
With its comprehensive features, MCP Server Runner is an indispensable tool for developers looking to integrate Model Context Protocol into their AI applications. By providing a robust and flexible bridge between WebSocket clients and MCP servers, it enables seamless data exchange and process management.
This technical documentation aims to provide developers with the necessary information to effectively implement and utilize MCP Server Runner in their projects.