MCP Web UI simplifies managing LLM interactions with multi-provider support and real-time streaming
MCP Web UI is a web-based user interface designed as a Host within the Model Context Protocol (MCP) architecture, providing a powerful and user-friendly interface for interacting with Large Language Models (LLMs). It streamlines interactions by offering a unified interface for multiple LLM providers, real-time chat experiences, flexible configuration options, and robust context handling using the MCP protocol. The server is built with Go and supports various LLM providers such as Anthropic, OpenAI, Ollama, and OpenRouter.
MCP Web UI leverages the Model Context Protocol to facilitate seamless integration between AI applications and data sources or tools, acting as a versatile adapter. Its core features include multi-provider support, real-time streaming chat, flexible configuration, and standardized context exchange through MCP.
MCP Web UI is built using Go, making it highly efficient and capable of handling real-time data streams efficiently. It offers robust logging options and persistent chat history storage through BoltDB, enhancing user experience and maintaining conversation continuity.
The architecture of MCP Web UI is designed to adhere strictly to the Model Context Protocol (MCP), ensuring compatibility with various clients such as Claude Desktop, Continue, Cursor, and others. The protocol flow can be visualized using a Mermaid diagram:
graph TD
A[AI Application] -->|MCP Client| B[MCP Protocol]
B --> C[MCP Server]
C --> D[Data Source/Tool]
style A fill:#e1f5fe
style C fill:#f3e5f5
style D fill:#e8f5e8
| MCP Client | Resources | Tools | Prompts | Status |
|---|---|---|---|---|
| Claude Desktop | ✅ | ✅ | ✅ | Full Support |
| Continue | ✅ | ✅ | ✅ | Full Support |
| Cursor | ❌ | ✅ | ❌ | Tools Only |
This compatibility matrix indicates that Claude Desktop and Continue fully support resources, tools, and prompts, while Cursor currently supports tools only.
MCP Web UI is easy to install and can be run locally or deployed using Docker. Here are the steps to get it up and running:
Clone the Repository:
git clone https://github.com/MegaGrindStone/mcp-web-ui.git
cd mcp-web-ui
Configure Environment Variables:
mkdir -p $HOME/.config/mcpwebui
cp config.example.yaml $HOME/.config/mcpwebui/config.yaml
export ANTHROPIC_API_KEY=your_anthropic_key
export OPENAI_API_KEY=your_openai_key
export OPENROUTER_API_KEY=your_openrouter_key
Run the Application:
go mod download
go run ./cmd/server/main.go
Alternatively, build and run with Docker:
docker build -t mcp-web-ui .
docker run -p 8080:8080 \
-v $HOME/.config/mcpwebui/config.yaml:/app/config.yaml \
-e ANTHROPIC_API_KEY \
-e OPENAI_API_KEY \
-e OPENROUTER_API_KEY \
mcp-web-ui
A large software development company can use MCP Web UI to integrate various Large Language Models for code assistance and project management. Developers can switch between services like Anthropic and OpenAI, maintaining context throughout their workflow.
A content creation agency can leverage MCP Web UI to manage various AI tools for generating articles, social media posts, and marketing materials. The unified interface allows switching between providers like Anthropic and Ollama based on the task at hand.
MCP Web UI integrates seamlessly with various MCP clients, including Claude Desktop, Continue, and Cursor. This integration ensures that the server can interact with different AI applications, providing a standard interface for data exchange.
Here is an example configuration snippet that demonstrates how an MCP server can be registered for MCP Web UI to connect to:
{
"mcpServers": {
"[server-name]": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-[name]"],
"env": {
"API_KEY": "your-api-key"
}
}
}
}
This configuration registers an MCP server under mcpServers, specifying the command that launches it, its arguments, and the environment variables (such as API keys) it needs.
The performance of MCP Web UI is optimized for real-time data streaming, ensuring smooth user interactions. As the compatibility matrix above shows, Claude Desktop and Continue are fully supported, while Cursor integration is currently limited to tools.
MCP Web UI offers advanced configuration options through its internal services and settings. It supports various LLM providers and allows for dynamic management of models and configurations. The server also includes robust logging and persistent chat history storage using BoltDB.
logging:
  level: info
  file: ./logs/webui.log
chatHistory:
  enable: true
  storePath: /path/to/chat/history
This example enables file-based logging at the info level and persists chat history to disk, preserving conversations across restarts; the exact keys may differ, so consult the bundled config.example.yaml.
How does MCP Web UI integrate with different LLM providers?
What are the requirements for running MCP Web UI locally?
Can I switch between LLM providers seamlessly during user sessions?
How does MCP Web UI ensure data security and privacy?
What is the process for contributing to MCP Web UI?
MCP Web UI is part of the broader Model Context Protocol ecosystem, which includes various clients like Claude Desktop, Continue, and Cursor. By leveraging this protocol, developers can build more interconnected AI applications that seamlessly exchange resources and tools.
This comprehensive documentation positions MCP Web UI as a valuable tool for developers looking to integrate multiple LLM providers and resources into their AI applications through the Model Context Protocol.