WeGene MCP server uses LLMs to analyze genetic reports and streamline user report access
The Wegene-Assistant MCP server is designed to integrate genetic data analysis into AI applications using the Model Context Protocol (MCP). It connects to WeGene, a leading genomics service provider, enabling AI models to access and interpret users' genetic testing reports. By providing tools for OAuth authorization, report retrieval, and metadata extraction, the server ensures secure and efficient data handling.
The core features of the Wegene-Assistant MCP server include a robust set of tools designed to interact with WeGene's Open API platform. These tools are powered by MCP, ensuring seamless communication between the AI application and the genetic testing reports stored on WeGene servers. The key resources exposed by this server—custom URI schemes for accessing reports and JSON data representations—are essential for building integrations that require dynamic access to user-specific data.
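As a concrete example, a connected MCP client can discover those resources through the protocol's standard listing calls. The following is a minimal sketch using the official MCP Python SDK, assuming an already-initialized `ClientSession` (established as in the tool-calling example further below); the exact URI scheme printed is whatever the server defines.

```python
from mcp import ClientSession

async def list_wegene_resources(session: ClientSession) -> None:
    """Print every resource the Wegene-Assistant server exposes."""
    result = await session.list_resources()
    for resource in result.resources:
        # Each resource carries a server-defined custom URI plus a name.
        print(resource.uri, "-", resource.name)
    # Any listed report can then be read back by its URI:
    if result.resources:
        contents = await session.read_resource(result.resources[0].uri)
        print(contents)
```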
The server exposes four tools:

- `wegene-oauth`: Initiates an OAuth flow, allowing the user to authorize the MCP server to access their WeGene genetic testing reports. The user must complete authorization within 120 seconds for subsequent report retrieval by the AI application to succeed.
- `wegene-get-profiles`: Retrieves the list of user profiles from WeGene along with their associated IDs. This data is crucial for the AI model to identify and process the relevant reports.
- `wegene-get-report-info`: Provides metadata about the available genetic reports, including names, descriptions, and endpoints. These details help the AI application prepare for specific types of genetic analysis requests.
- `wegene-get-report`: Fetches detailed results from a specified report within a user's profile. This tool accepts `report_endpoint`, `report_id`, and `profile_id` parameters to access the exact data required by the AI model.
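Together these tools form a simple pipeline: authorize, enumerate profiles, discover reports, fetch results. Below is a minimal end-to-end sketch using the official MCP Python SDK; the server path mirrors the Claude Desktop configuration shown later, and the argument values passed to `wegene-get-report` are placeholders, since real values come from the profile and report-info calls.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server over stdio, mirroring the Claude Desktop config below.
    params = StdioServerParameters(
        command="uv",
        args=["--directory", "/path/to/wegene-assistant", "run", "wegene-assistant"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Start the OAuth flow; the user has 120 seconds to authorize.
            await session.call_tool("wegene-oauth")

            # Enumerate profiles so the model can pick the relevant one.
            profiles = await session.call_tool("wegene-get-profiles")
            print(profiles.content)

            # Fetch a specific report; these argument values are placeholders.
            report = await session.call_tool(
                "wegene-get-report",
                arguments={
                    "report_endpoint": "demo/endpoint",
                    "report_id": "demo-report-id",
                    "profile_id": "demo-profile-id",
                },
            )
            print(report.content)

asyncio.run(main())
```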
The Wegene-Assistant MCP server leverages the Model Context Protocol (MCP) to establish a standardized framework for communication between the AI application and external tools and data sources. MCP keeps communication among these layers consistent, so any compliant AI application can integrate seamlessly with the Wegene-Assistant server and the tools and data sources behind it. The server is structured as follows:
```mermaid
graph TD
    A[AI Application] -->|MCP Client| B[MCP Server]
    B --> C[WeGene API]
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
```
```mermaid
graph LR
    subgraph "AI Application"
        A[[MCP Client]] --> B[Data Source/Tool]
        B --> C[MCP Server]
        C -->|JSON Responses| A
    end
    subgraph "WeGene API"
        D[Custom URI Schemes] --> E["Report Data & Metadata"]
        F[OAuth Authn] --> G[Profile Access Control]
        H[API Endpoints] --> I[Genetic Report Results]
        D -- Authentication --> G
        G -- Authorization --> F & H & I
    end
```
To get started with the Wegene-Assistant MCP server, install it via Smithery:

```bash
npx -y @smithery/cli install @xraywu/mcp-wegene-assistant --client claude
```

Alternatively, run the following under the project's root folder:

```bash
uv sync --dev --all-extras
```

Either way, ensure you have a valid WeGene Open API key/secret before proceeding with configuration.
The Wegene-Assistant MCP server can be integrated into various AI workflows, enhancing usability and data accessibility. Here are two practical use cases:
The first scenario is an AI application that reads a user's genetic report and provides insights on demand. The integration requires registering the server in the MCP client's configuration:
```json
{
  "mcpServers": {
    "wegene-assistant": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/wegene-assistant",
        "run",
        "wegene-assistant"
      ]
    }
  }
}
```
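For Claude Desktop, this snippet belongs in the claude_desktop_config.json file (on macOS, under ~/Library/Application Support/Claude/); other MCP clients read an equivalent configuration file. Replace /path/to/wegene-assistant with the actual checkout location.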
Another use case is generating custom summaries from genetic test results. In this workflow, the AI application first retrieves profiles and report metadata using `wegene-get-profiles` and `wegene-get-report-info`, then analyzes specific reports to compile a summary, as sketched below.
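The sketch below reuses an established `ClientSession` from the MCP Python SDK. The JSON field names ("name", "endpoint", "id") are assumptions about the shape of the `wegene-get-report-info` output and should be verified against actual responses.

```python
import json

from mcp import ClientSession

async def collect_report_text(session: ClientSession, profile_id: str) -> str:
    """Gather raw text from every available report so an LLM can summarize it."""
    # Discover which reports exist (names, descriptions, endpoints).
    info = await session.call_tool("wegene-get-report-info")
    # Tool results arrive as MCP text content; the field names below
    # ("name", "endpoint", "id") are assumptions, not a documented schema.
    reports = json.loads(info.content[0].text)
    chunks = []
    for report in reports:
        result = await session.call_tool(
            "wegene-get-report",
            arguments={
                "report_endpoint": report["endpoint"],
                "report_id": report["id"],
                "profile_id": profile_id,
            },
        )
        chunks.append(f"## {report['name']}\n{result.content[0].text}")
    # The concatenated text becomes the context for a summarization prompt.
    return "\n\n".join(chunks)
```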
The Wegene-Assistant MCP server supports multiple MCP clients that can leverage its functionalities:
| MCP Client | Resources | Tools | Prompts |
|---|---|---|---|
| Claude Desktop | ✅ | ✅ | ✅ |
| Continue | ✅ | ✅ | ✅ |
| Cursor | ❌ | ✅ | ❌ |
For advanced users, configuring the Wegene-Assistant server involves setting environment variables and running the server directly:

```bash
cp .env.example .env
```

Edit `.env` to include your WeGene API credentials, then run the setup and start commands:

```bash
uv sync --dev --all-extras
uv run wegene-assistant
```
Security measures should be in place for handling sensitive data such as OAuth tokens and API credentials; in particular, keep them in environment variables rather than in source control.
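As one illustration of that advice, credentials can be loaded from the environment at startup instead of being hardcoded. The variable names below are hypothetical; check the project's .env.example for the actual keys.

```python
import os

from dotenv import load_dotenv  # python-dotenv; one common way to read .env files

load_dotenv()  # pull in .env so secrets never live in source control

# Hypothetical variable names -- see the project's .env.example for the real keys.
WEGENE_API_KEY = os.environ["WEGENE_API_KEY"]
WEGENE_API_SECRET = os.environ["WEGENE_API_SECRET"]
```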
Common questions about the server:

**How do I install the server?** Follow the installation instructions provided in the README, either using Smithery or running `uv sync` locally.

**What tools does the server provide?** Tools include `wegene-oauth` for OAuth initiation, `wegene-get-profiles` to retrieve user profiles, `wegene-get-report-info` for report metadata, and `wegene-get-report` for report results.

**Does it integrate with Claude Desktop?** Yes, the server supports seamless integration via the MCP protocol, as used by Claude Desktop and other compatible clients.

**How is sensitive data protected?** Security is handled via OAuth and an API key/secret stored in environment variables, ensuring only authorized access to user data.

**Where can I find more help?** Check the project's GitHub repository for additional resources, FAQs, and community support forums.
Contributions are welcome! To contribute, fork this repository and send a pull request. For detailed guidelines on development and bug reporting, refer to the Contributing Guide section in the README.
Explore more about the MCP ecosystem at Model Context Protocol. For additional resources and community support, visit relevant forums and documentation sites.
By implementing the Wegene-Assistant MCP server, developers can significantly enhance their AI workflows by integrating genetic data analysis capabilities into various applications, ensuring reliable and secure data handling.