Secure remote shell command execution server with command whitelisting and security features
The MCP Shell Server is a secure, flexible command execution server designed to work with Model Context Protocol (MCP) clients such as Claude Desktop, Continue, and Cursor. It acts as an intermediary between user requests and the underlying shell, ensuring that only whitelisted commands can be executed while providing comprehensive feedback on each command's outcome.
The core strength of this server lies in its ability to integrate seamlessly with various AI applications via the Model Context Protocol. It ensures secure command execution by allowing only specific shell commands to run, thereby maintaining a robust security posture. Additionally, it supports standard input for commands and returns detailed outputs, making it ideal for scenarios where complex data processing or analysis is required.
One of its prominent features is shell operator validation. Commands containing operators such as ;, &&, ||, and | are scrutinized against the allowed command list to prevent potential misuse. This feature ensures that even commands with intricate logic structures adhere to the defined security guidelines, offering a layer of safety against unauthorized access or malicious actions.
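The operator check described above can be sketched in a few lines. The function below is a hypothetical re-implementation for illustration only (the function name and the exact splitting rules are assumptions, not the server's actual code): a command line is split into tokens, each operator starts a new sub-command, and the first token of every sub-command must appear in the allowlist.

```python
import shlex

# Hypothetical re-implementation of the operator check described above,
# for illustration only; the server's actual validator may differ.
OPERATORS = {";", "&&", "||", "|"}

def validate(command_line, allowed):
    """Return True only if every sub-command's executable is whitelisted."""
    expect_command = True               # next token should name an executable
    for token in shlex.split(command_line):
        if token in OPERATORS:
            expect_command = True       # an operator starts a new sub-command
        elif expect_command:
            if token not in allowed:
                return False
            expect_command = False      # remaining tokens are arguments
    return True

print(validate("ls -la | grep txt", {"ls", "grep"}))  # piped, both allowed
print(validate("ls ; rm -rf /", {"ls", "grep"}))      # rm is not whitelisted
```

The key design point is that the check is applied per sub-command, so a whitelisted command cannot be used to smuggle in a forbidden one behind an operator.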
Moreover, the server includes timeout control capabilities, allowing users to set limits on how long each shell command can run before being halted. This prevents any single poorly designed or infinite loop command from bringing down the entire system, ensuring a stable and reliable environment for AI applications that rely on heavy computational tasks.
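A minimal sketch of that timeout behaviour, using Python's standard `subprocess` module (assumed semantics for the return shape; the server's internal implementation may differ):

```python
import subprocess

# Sketch of the timeout behaviour described above (assumed semantics;
# the server's internal implementation may differ).
def run_with_timeout(argv, timeout):
    try:
        result = subprocess.run(argv, capture_output=True, text=True,
                                timeout=timeout)
        return {"status": result.returncode, "stdout": result.stdout}
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child process before raising
        return {"status": -1, "error": f"command timed out after {timeout}s"}

print(run_with_timeout(["echo", "hello"], timeout=5))
print(run_with_timeout(["sleep", "10"], timeout=1))   # halted after 1 second
```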
The architecture of the MCP Shell Server is built around the Model Context Protocol, which serves as a universal adapter layer between the client (AI application) and backend systems. The server itself handles command execution based on the whitelist defined by its environment configuration. It ensures that all commands are executed in a safe and controlled manner, aligning with the fundamental principles of data security and integrity.
The protocol flow is straightforward: an MCP client sends a request to the server, the server checks the command against the allowed list, executes it, and returns the results. The interaction can be visualized with the following Mermaid diagram:
```mermaid
graph TD
    A[AI Application] -->|MCP Client| B[MCP Protocol]
    B --> C[MCP Server]
    C --> D[Data Source/Tool]
    style A fill:#e1f5fe
    style C fill:#f3e5f5
    style D fill:#e8f5e8
```
This diagram illustrates the flow of data from an AI application through an MCP client, which then communicates with the MCP Shell Server. The server processes commands and interacts with the specified tool or data source before returning the result to the client.
To get started with deploying the MCP Shell Server, you can follow these steps:
Installation: You can install the MCP Shell Server via pip:

```shell
pip install mcp-shell-server
```
Configuration: Set up the necessary environment variables to define allowed commands and adjust other settings as needed.
For Claude Desktop, edit ~/Library/Application\ Support/Claude/claude_desktop_config.json with the following configuration:
```json
{
  "mcpServers": {
    "shell": {
      "command": "uv",
      "args": [
        "--directory",
        ".",
        "run",
        "mcp-shell-server"
      ],
      "env": {
        "ALLOW_COMMANDS": "ls,cat,pwd,grep,wc,touch,find"
      }
    }
  }
}
```
Starting the Server: You can start the server from the command line, for example:

```shell
ALLOW_COMMANDS="ls,cat,echo" uvx mcp-shell-server
```

or define a shell alias for consistency and ease of use.
In a typical machine learning (ML) workflow, preparing data is often the first and most crucial step. Imagine you are developing an ML model using Python and need to preprocess a dataset stored on a remote server. Leveraging the MCP Shell Server, you can securely execute shell commands directly from your AI application without exposing sensitive code or credentials.
To achieve this, connect the MCP client of your choice (e.g., Claude Desktop) to the MCP Shell Server running in another machine within your network. Use shell commands such as ls and cat to inspect files, or more complex commands for data preprocessing like find and basic text manipulation with grep. This setup allows you to seamlessly integrate data retrieval and transformation into your ML pipeline.
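Concretely, a preprocessing step such as counting the rows of a remote dataset can be expressed as a single tool request. The payload below is illustrative: the file path is a placeholder, and the field names should be checked against the server's actual tool schema before relying on them.

```python
import json

# Illustrative execute-tool payload: count the rows of a dataset file
# with wc -l. The path is a placeholder; check the field names against
# the server's actual tool schema before relying on them.
request = {
    "command": ["wc", "-l", "/data/train.csv"],
    "directory": "/data",
    "timeout": 30,
}
print(json.dumps(request, indent=2))
```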
For large-scale data processing, file management plays a significant role. Consider an application that needs to perform batch processing on extensive datasets distributed across multiple servers. By utilizing the MCP Shell Server, you can orchestrate shell commands like ls, cat, and find from within your AI application.
An example scenario might involve running:
```json
{
  "command": ["find", "/path/to/data", "-type", "f"],
  "directory": "/path/to/data",
  "timeout": 60
}
```
This request lists every file under /path/to/data and terminates the search if it runs past the 60-second timeout, so a runaway traversal cannot stall the pipeline that coordinates your AI application with remote storage.
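Client-side, such a batch job might chain a find request with one cat request per returned path. The sketch below assumes a hypothetical `call_tool` callable provided by your MCP client library; it is not part of mcp-shell-server, and a fake transport is used so the example runs standalone.

```python
# Client-side orchestration sketch. `call_tool` is a hypothetical stand-in
# for whatever your MCP client library uses to invoke the server's tool;
# it is not part of mcp-shell-server.
def batch_process(call_tool, data_dir, timeout=60):
    listing = call_tool({
        "command": ["find", data_dir, "-type", "f"],
        "directory": data_dir,
        "timeout": timeout,
    })
    results = {}
    for path in listing["stdout"].splitlines():
        response = call_tool({
            "command": ["cat", path],
            "directory": data_dir,
            "timeout": timeout,
        })
        results[path] = response["stdout"]
    return results

# Fake transport so the sketch runs standalone:
def fake_call(req):
    if req["command"][0] == "find":
        return {"stdout": "/data/a.txt\n/data/b.txt\n"}
    return {"stdout": f"contents of {req['command'][1]}"}

print(batch_process(fake_call, "/data"))
```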
The MCP Shell Server is designed to provide robust support for various MCP clients, ensuring seamless integration across platforms and applications. The compatibility matrix below outlines the current status of different tools using the MCP Shell Server:
| MCP Client | Resources | Tools | Prompts |
|---|---|---|---|
| Claude Desktop | ✅ | ✅ | ✅ |
| Continue | ✅ | ✅ | ✅ |
| Cursor | ❌ | ✅ | ❌ |
As the matrix shows, Claude Desktop and Continue can use the server's full feature set, while Cursor currently supports only tool execution; resource and prompt functionality cannot be leveraged from that client.
To ensure reliability and efficiency in various environments, the MCP Shell Server undergoes rigorous testing to validate performance across different versions and setups. The table below provides a snapshot of compatibility and performance metrics:
| Version (MCP) | Supported Platforms | Maximum Allowed Commands per Request | Minimum Execution Time |
|---|---|---|---|
| 1.1.0 | macOS, Linux, Windows | 50 | 0.01 seconds |
This matrix highlights the broad support for diverse platforms while constraining the number of commands that can be executed in a single request to maintain system stability and speed.
For advanced setups, users have the flexibility to customize various aspects of the MCP Shell Server through environment variables. Here’s how you can configure it:
```json
{
  "mcpServers": {
    "[server-name]": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-[name]"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}
```
This allows you to tailor the server to specific needs, including setting up custom API keys for enhanced security.
**How does the shell operator validation work?**
Shell operators such as ;, &&, ||, and | split a command line into sub-commands, and each sub-command is checked against the allowed commands list. Only command lines in which every sub-command passes this check are executed.

**Can I use this server with any AI application, or only specific ones?**
The MCP Shell Server is compatible with Claude Desktop, Continue, and Cursor. Other clients may work for basic shell operations, but advanced features such as prompt handling are currently unsupported.

**What happens if a command exceeds its timeout limit?**
If a command runs longer than the configured timeout, it is terminated to prevent system instability caused by long-running or runaway processes.

**How can I integrate custom commands into my setup?**
Extend the allowed commands list by modifying the ALLOW_COMMANDS environment variable in your MCP client configuration file.

**What is the recommended protocol version for optimal performance?**
Use the latest version of the Model Context Protocol (MCP); it offers the best compatibility and the most recent performance and security improvements.
For developers who wish to contribute to this project, follow these steps:
Clone the Repository:
```shell
git clone https://github.com/yourusername/mcp-shell-server.git
cd mcp-shell-server
```
Install Dependencies:
```shell
pip install -e ".[test]"
```
Run Tests:
```shell
pytest
```
Contributions are highly encouraged, and you can submit issues or pull requests for improvements.
The MCP ecosystem comprises various tools, clients, and services that interoperate through the Model Context Protocol. For more information, visit the official documentation at modelcontextprotocol.io.
By understanding these resources and integrating them into your project, you can enhance the capabilities of your AI applications significantly.
This comprehensive guide outlines the capabilities, configuration options, and integration scenarios for MCP Shell Server, positioning it as a critical component in building robust, secure, and scalable AI application environments.