AI-Powered Workflow Orchestrator manages dynamic, adaptive workflows with LLM decision-making and Markdown definitions
The Workflow Orchestrator MCP Server implements an intelligent workflow management solution, leveraging a Large Language Model (LLM) to dynamically execute and adapt workflows based on real-time context. This server operates as part of the broader MCP (Model Context Protocol) infrastructure, extending its capabilities to AI applications such as Claude Desktop, Continue, Cursor, and more.
This MCP server specializes in handling complex, adaptive workflows by breaking them down into discrete steps defined in human-readable Markdown files.
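The definition schema itself is not shown here; as a purely illustrative sketch (the structure and field names below are assumptions, not the server's actual format), a step-based Markdown definition might look like:

```markdown
<!-- Hypothetical sketch only; consult the server's docs for the real schema -->
# Workflow: ci-cd

## Step: Check Code
Inputs: branch

## Step: Build Project
Inputs: version
```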
The server exposes MCP tools for managing workflows:

- `list_workflows`
- `start_workflow`
- `get_workflow_status`
- `advance_workflow`
- `resume_workflow`

These tools allow flexible control over workflow execution via an API layer; a client-side sketch follows the list.
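As a minimal sketch of driving these tools with the official MCP Python SDK (the tool argument names here are assumptions, not the server's documented schema):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the orchestrator over stdio, as described in the setup section.
    # The required environment variables must already be set in this shell.
    server = StdioServerParameters(
        command="uv", args=["run", "python", "-m", "orchestrator_mcp_server"]
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("available:", [tool.name for tool in tools.tools])
            # "workflow_name" is an assumed argument name, for illustration only.
            result = await session.call_tool("start_workflow", {"workflow_name": "ci-cd"})
            print(result)


asyncio.run(main())
```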
The architecture of the Workflow Orchestrator MCP Server is modular, with clear separation between components.
The protocol flow can be represented as follows:
```mermaid
graph TD
    A[AI Application] -->|MCP Client| B[MCP Protocol]
    B --> C[MCP Server]
    C --> D[Data Source/Tool]
    style A fill:#e1f5fe
    style C fill:#f3e5f5
    style D fill:#e8f5e8
```
The Workflow Orchestrator supports a variety of MCP clients, facilitating smooth integration and deployment:
| MCP Client | Resources | Tools | Prompts | Status |
|---|---|---|---|---|
| Claude Desktop | ✅ | ✅ | ✅ | Full Support |
| Continue | ✅ | ✅ | ✅ | Full Support |
| Cursor | ❌ | ✅ | ❌ | Tools Only |
To get started, follow these steps:
1. **Prerequisites:** Install `uv`, and make sure the locations referenced by `WORKFLOW_DEFINITIONS_DIR` and `WORKFLOW_DB_PATH` exist and are readable/writable.
2. **Install dependencies:**

   ```bash
   uv sync
   ```

3. **Run the server:**

   ```bash
   uv run python -m orchestrator_mcp_server
   ```
In a CI/CD pipeline, the Workflow Orchestrator can dynamically manage steps such as code checking, build validation, and deployment. Leveraging LLM services for intelligent decision-making ensures that problematic builds are identified early, allowing for immediate corrective action. The sketch below illustrates the idea; the `step` and `decision` helpers are illustrative stand-ins, not the server's actual API:
```python
# Illustrative helpers only; not the orchestrator's actual API.
def step(name, inputs=None):
    return {"kind": "step", "name": name, "inputs": inputs or []}

def decision(name, condition, prompt=None):
    return {"kind": "decision", "name": name, "condition": condition, "prompt": prompt}

def ci_cd_workflow():
    return [
        step("Check Code", inputs=["branch"]),          # LLM decides if further checks are needed
        step("Build Project", inputs=["version"]),      # builds the project for the given version
        decision("Deploy to Test Server", condition=lambda report: "errors" not in report),  # skipped if the build reported errors
        step("Run Performance Tests", inputs=["url"]),  # LLM selects tests based on performance data
        decision("Promote to Production",
                 condition=lambda outcomes: all(o == "success" for o in outcomes),
                 prompt="Promote to production?"),      # human-in-the-loop confirmation
        step("Deploy to Production", inputs=["tag"]),   # deploy after LLM and human approval
    ]
```
The Workflow Orchestrator is compatible with various MCP clients, making it versatile for different applications. For instance, integrating with Claude Desktop lets developers connect their projects to the server over the standardized protocol. A generic server entry in a client configuration looks like this:
```json
{
  "mcpServers": {
    "[server-name]": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-[name]"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}
```
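For this server specifically, a plausible entry might look like the following (the server name and paths are assumptions; the command mirrors the run step above):

```json
{
  "mcpServers": {
    "workflow-orchestrator": {
      "command": "uv",
      "args": ["run", "python", "-m", "orchestrator_mcp_server"],
      "env": {
        "WORKFLOW_DEFINITIONS_DIR": "/path/to/workflows",
        "WORKFLOW_DB_PATH": "/path/to/workflows.db"
      }
    }
  }
}
```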
The server's performance and compatibility are optimized for various edge cases, ensuring robust operation across different MCP clients. The following matrix provides a high-level overview of integration status:

| MCP Client | Resources | Tools | Prompts |
|---|---|---|---|
| Claude Desktop | ✅ | ✅ | ✅ |
| Continue | ✅ | ✅ | ✅ |
Configurable through environment variables and APIs, the server can be tailored to different security requirements. Key configuration variables (a loading sketch follows the list):

- `WORKFLOW_DEFINITIONS_DIR`: Directory where workflow definitions are stored.
- `WORKFLOW_DB_PATH`: Path to the SQLite database.
- `LOG_LEVEL`: Logging level (default: `info`).
- `AI_SERVICE_ENDPOINT`: URL of the LLM service API.
- `AI_SERVICE_API_KEY`: API key for the LLM service.
- `AI_REQUEST_TIMEOUT_MS`: Timeout for AI requests in milliseconds (default: `30000`).
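As a minimal sketch of how these variables might be read at startup (the names and defaults follow the list above; the surrounding structure is an assumption, not the server's internals):

```python
import os

# Variable names and defaults match the documentation above;
# the config dict itself is illustrative only.
config = {
    "definitions_dir": os.environ["WORKFLOW_DEFINITIONS_DIR"],
    "db_path": os.environ["WORKFLOW_DB_PATH"],
    "log_level": os.environ.get("LOG_LEVEL", "info"),
    "ai_endpoint": os.environ["AI_SERVICE_ENDPOINT"],
    "ai_api_key": os.environ["AI_SERVICE_API_KEY"],
    "ai_timeout_ms": int(os.environ.get("AI_REQUEST_TIMEOUT_MS", "30000")),
}
```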
**How does the server choose the next workflow step?** The server uses an LLM to dynamically determine the next step based on context variables and real-time feedback.
**Can interrupted workflows be resumed?** Yes; state is persisted in the SQLite database, so interrupted workflows can be resumed.
**Which MCP clients are supported?** Claude Desktop, Continue, and Cursor, among others, as indicated in the compatibility matrix.
**How do clients interact with the server?** The API layer (`FastMCP Server`) handles requests from MCP clients, ensuring seamless interaction with workflows and tools.
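As a minimal sketch of exposing a tool through FastMCP from the MCP Python SDK (the function body and return value are placeholders, not this server's implementation):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("workflow-orchestrator")

@mcp.tool()
def list_workflows() -> list[str]:
    """Return the names of available workflow definitions."""
    return ["ci-cd"]  # placeholder; the real server reads WORKFLOW_DEFINITIONS_DIR

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```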
**What setup considerations matter most?** Setting environment variables correctly and writing workflow definitions in a consistent manner.
Contributions to enhance and support the Workflow Orchestrator are welcome. Guidelines for development, testing, and contribution can be found in `docs/CONTRIBUTING.md`.
The broader MCP ecosystem includes resources such as documentation, community support forums, and additional tools that can expand the functionalities of this server.
Together, these capabilities make the Workflow Orchestrator MCP Server a practical foundation for developers building AI applications with MCP integration.