You install an MCP server for GitHub. Great! But wait—it comes with 50+ tools. You only need 3.
Now your LLM's system prompt is bloated with 47 unnecessary tool descriptions. Result?
- 💸 Higher API costs (tokens aren't free)
- 🤯 More hallucinations (too many options confuse the model)
- 🐌 Slower responses (processing overhead)
- 😤 Worse accuracy (signal lost in noise)
Every. Single. Request.
Fusion MCP Hub lets you build custom MCP servers with only the tools you actually need.
Browse 300+ tools across 16+ servers. Click the ones you want. Download a lean, production-ready package. Done.
Need Jira issue tracking + GitHub PRs? Take 2 tools from Atlassian, 2 from GitHub. 4 tools instead of 60+. Your LLM will thank you.
Want AI recommendations? Describe your workflow in plain English—we'll suggest the perfect tool combo using semantic search and multi-factor scoring.
🚀 Try it live: fusion-mcp-hub.vercel.app
Fusion MCP Hub is a full ecosystem for managing MCP servers:
**Fusion MCP Hub (Web):** The main platform for discovering, composing, and testing MCP servers.
- Browse 300+ tools across 16+ servers
- Create custom server compositions
- Test with AI agents in real-time
- Validate configurations before deployment
**fusion-mcp-cli:** Command-line tool for installing and managing custom servers locally.
```bash
# Install globally
npm install -g fusion-mcp-cli

# Log in to your workspace
npx fusion-mcp-cli auth login -u https://fusion-mcp-hub.vercel.app

# Pull a custom server from the hub
npx fusion-mcp-cli server pull my-custom-server

# Auto-configure Claude Desktop / Bob IDE / Copilot Chat
npx fusion-mcp-cli config set --client claude

# Install community servers from the hub
npx fusion-mcp-cli server install github-server
```

Key Features:
- 🔄 Pull/push custom servers to/from workspaces
- ⚙️ Auto-configure Claude Desktop, Bob IDE, Copilot Chat
- 📦 Install pre-built servers from the gallery
- 🔧 Manage project-level and global configurations
**fusion-mcp-proxy:** WebSocket proxy for testing local stdio MCP servers from the web playground.
```bash
# Install globally
npm install -g fusion-mcp-proxy

# Start proxy (auto-detects your mcp.json from Claude/Bob/Copilot)
fmcp-proxy start

# Or specify a custom config
fmcp-proxy start --config ./my-mcp-config.json
```

Key Features:
- 🌉 Bridge browser clients to local stdio servers
- 🤖 Run AI agents with direct MCP server access (eliminates timing issues)
- 🔍 Auto-detect mcp.json from Claude, Bob, or Copilot
- 📡 Real-time streaming of agent execution
Use Cases:
- Test local stdio servers in the web playground
- Run ReAct agents that need synchronous tool execution
- Debug MCP servers during development
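For context, MCP messages are JSON-RPC 2.0, so a browser client talking to the proxy is essentially sending JSON-RPC over a WebSocket. Here is a minimal TypeScript sketch; the port and framing are assumptions, not the proxy's documented protocol:

```typescript
// Minimal sketch of a browser client talking to the proxy.
// Assumptions: the proxy listens on ws://localhost:8080 and relays raw
// JSON-RPC 2.0 MCP messages; the real port and framing may differ.
const ws = new WebSocket("ws://localhost:8080");

ws.onopen = () => {
  // "tools/list" is a standard MCP method; a real client would first
  // complete the MCP "initialize" handshake (omitted here for brevity).
  ws.send(JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }));
};

ws.onmessage = (event) => {
  const message = JSON.parse(event.data);
  if (message.id === 1) {
    console.log("Tools on the local server:", message.result?.tools);
  }
};
```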
Solve the Token Bloat Problem: Create lean, purpose-built MCP servers with only the tools you need.
The Problem: Full MCP servers expose 20-50+ tools, flooding LLM context with unnecessary information. This increases costs, causes hallucinations, and reduces accuracy.
The Solution: Compose custom servers by selecting specific tools from multiple servers:
- Manual Selection: Browse 300+ tools across 16+ servers and cherry-pick exactly what you need
- AI-Powered Recommendations: Describe your workflow in plain English, get intelligent tool suggestions
- Visual Workflow Builder: See server dependencies and tool relationships organized by source
- One-Click Download: Export as a complete, production-ready NPM package
- CLI Integration: Install and configure custom servers directly via `fusion-mcp-cli`
Example: Need Jira + GitHub? Don't load 40+ tools from both servers. Compose a custom server with just:
- `jira_get_issue`, `jira_search_issues` (2 tools from the Atlassian server)
- `github_create_pull_request`, `github_list_commits` (2 tools from the GitHub server)
Result: 4 focused tools instead of 40+ unnecessary ones = lower costs, better accuracy.
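Once downloaded, the composed server registers with an MCP client like any other server. As an illustration (the package name and path are placeholders, not actual hub output), a Claude Desktop `claude_desktop_config.json` entry might look like:

```json
{
  "mcpServers": {
    "my-custom-server": {
      "command": "node",
      "args": ["/path/to/my-custom-server/dist/index.js"]
    }
  }
}
```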
Accelerate custom server creation with intelligent recommendations:
- Natural Language Intent: Describe your workflow in plain English
- Hybrid Search: Vector embeddings + keyword matching for intelligent tool discovery
- Multi-Factor Scoring: Intent alignment, pattern matching, and semantic similarity
- Confidence Scores: Transparent reasoning behind each recommendation
Test MCP servers and AI agents in real-time:
- Multi-Model Support: OpenAI (GPT-4, GPT-3.5), Anthropic (Claude 3.5 Sonnet), IBM Watsonx
- Three Execution Modes:
  - Browser Mode: Direct browser-to-server connections (HTTP/WebSocket)
  - Proxy Mode: Connect to local stdio servers via WebSocket proxy
  - Proxy Agent Mode: Run LangGraph ReAct agents on the proxy with synchronous tool execution
- Real-Time Streaming: See agent reasoning, tool calls, and results as they happen
- Server Management: Add, configure, and test multiple servers simultaneously
Validate MCP configurations before deployment:
- Multiple Input Methods: Upload file, drag-and-drop, or paste JSON
- Real-Time Validation: Instant feedback on configuration issues
- Smart Detection: Identify missing or misconfigured servers
- Auto-Fix Suggestions: Ready-to-use config snippets (see the example below)
- Multi-Client Support: Claude Desktop, Bob IDE, and Copilot Chat configurations
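For instance, an entry that omits the `command` field would be flagged as misconfigured, and the validator can propose a complete snippet. A hypothetical fix (the package name is illustrative):

```json
{
  "mcpServers": {
    "github-server": {
      "command": "npx",
      "args": ["-y", "github-server"]
    }
  }
}
```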
Explore the complete MCP ecosystem:
- 16+ Pre-Built Servers: Specialized servers for different use cases
- 300+ Tools: Comprehensive tool library across all categories
- Category Filtering: Browse by Development, Business, Data, AI, and more
- Detailed Documentation: Complete guides for each server and tool
Organize your work by project or team:
- Multiple Workspaces: Separate environments for different projects
- Custom Server Storage: Store your composed servers in workspaces
- Team Collaboration: Share configurations and servers with team members
- CLI Integration: Pull/push servers via command-line tools
Web Application:
- Next.js 15.5.6 with App Router
- TypeScript 5
- Tailwind CSS v4 with shadcn/ui
- MongoDB for server metadata
- LanceDB + Xenova Transformers for semantic search
- LangChain + LangGraph for agent orchestration
Prerequisites:

- Node.js: 18.x or higher
- MongoDB: 4.x or higher (local or Atlas)
- API Keys (optional, for AI features):
  - OpenAI API key
  - Anthropic API key
  - IBM Watsonx credentials
1. Clone the repository:

   ```bash
   git clone https://github.com/KirtiJha/fusion-mcp-hub.git
   cd fusion-mcp-hub
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Set up environment variables:

   ```bash
   cp .env.production.example .env.local
   ```

   Configure in `.env.local`:

   ```bash
   # MongoDB
   MONGODB_URI=mongodb://localhost:27017/mcp-hub
   MONGODB_DB_NAME=mcp-hub

   # NextAuth (generate with: openssl rand -base64 32)
   NEXTAUTH_SECRET=your-secret-key-here
   NEXTAUTH_URL=http://localhost:3000

   # AI Provider API Keys (optional)
   OPENAI_API_KEY=sk-...
   ANTHROPIC_API_KEY=sk-ant-...
   WATSONX_API_KEY=your-watsonx-key
   WATSONX_PROJECT_ID=your-project-id
   ```

4. Start the development server:

   ```bash
   npm run dev
   ```

5. Open your browser at http://localhost:3000
Manual Composition Workflow:

1. Browse the Server Gallery:
   - Explore 16+ MCP servers across different categories
   - Filter by Development, Business, Data, AI, and more
   - View detailed tool documentation for each server

2. Select Tools:
   - Click individual tools from multiple servers
   - Build your custom tool collection
   - See a real-time preview of selected tools organized by source server

3. Download & Deploy:
   - Export as a production-ready NPM package
   - Install locally: `npm install ./my-custom-server.tgz`
   - Configure with: `npx fusion-mcp-cli server install my-custom-server`
   - Use in Claude Desktop, Bob IDE, or any MCP client
Smart Composition Workflow (AI-Powered):

1. Describe Your Intent:

   > "I need to analyze Jira issues, fetch related Confluence pages, and generate a summary report"

2. AI Analysis:
   - Breaks down the intent into tasks (fetch, analyze, generate)
   - Identifies required actions (query, read, write)
   - Extracts keywords and entities

3. Hybrid Search (see the RRF sketch after this list):
   - Vector search using Xenova Transformers embeddings
   - Full-text search (BM25) for keyword matching
   - Reciprocal Rank Fusion (RRF) for result reranking

4. Multi-Factor Scoring:
   - Intent alignment: semantic similarity to the task description
   - Pattern matching: common workflow patterns
   - Tool compatibility: task-specific capabilities
   - Confidence scoring: aggregate score with transparent reasoning

5. Visual Workflow:
   - See recommended tools organized by server
   - Review confidence scores and reasoning
   - Customize the tool selection
   - Download a production-ready configuration
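To make the reranking step concrete, here is an illustrative Reciprocal Rank Fusion in TypeScript. This is a generic sketch of the standard RRF formula, not the repo's `vectorSearchService.ts`; the constant k = 60 is the conventional default:

```typescript
// Reciprocal Rank Fusion: each result list contributes 1 / (k + rank)
// to a tool's fused score, so tools ranked well in either list rise.
function reciprocalRankFusion(
  rankedLists: string[][], // e.g. [vectorResults, bm25Results], best first
  k = 60                   // damping constant; 60 is the conventional default
): string[] {
  const scores = new Map<string, number>();
  for (const list of rankedLists) {
    list.forEach((toolId, index) => {
      const contribution = 1 / (k + index + 1); // rank is 1-based
      scores.set(toolId, (scores.get(toolId) ?? 0) + contribution);
    });
  }
  // Sort by fused score, highest first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([toolId]) => toolId);
}

// Example: fuse vector-search and keyword (BM25) rankings.
const fused = reciprocalRankFusion([
  ["jira_search_issues", "jira_get_issue", "confluence_get_page"],
  ["confluence_get_page", "jira_search_issues"],
]);
console.log(fused); // tools ranked high in both lists come first
```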
**Browser Mode**

- Connect to deployed HTTP/WebSocket MCP servers
- Best for: Testing remote/production servers
- Limitations: Can't access local stdio servers

**Proxy Mode**

- Connect to the proxy, which spawns local stdio servers
- Best for: Testing local development servers
- Limitations: Async tool execution may cause agent issues

**Proxy Agent Mode**

- The entire agent runs on the proxy with direct server access
- Synchronous tool execution eliminates timing issues
- Best for: Reliable agent testing with local servers
- Features: Streams agent reasoning and tool results in real time
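For a sense of what Proxy Agent Mode runs, here is a generic LangChain.js sketch of a ReAct agent backed by a local stdio MCP server. It uses the public `@langchain/mcp-adapters` and `@langchain/langgraph` packages for illustration; it is not the proxy's internal wiring, and the server command is a placeholder:

```typescript
import { MultiServerMCPClient } from "@langchain/mcp-adapters";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

async function main() {
  // Spawn a local stdio MCP server and expose its tools to LangChain.
  // The command/args are placeholders for your composed server.
  const client = new MultiServerMCPClient({
    mcpServers: {
      "my-custom-server": {
        transport: "stdio",
        command: "node",
        args: ["/path/to/my-custom-server/dist/index.js"],
      },
    },
  });

  const tools = await client.getTools();

  // ReAct agent: the model reasons, picks a tool, observes, repeats.
  const agent = createReactAgent({
    llm: new ChatOpenAI({ model: "gpt-4o" }),
    tools,
  });

  const result = await agent.invoke({
    messages: [{ role: "user", content: "List open PRs and summarize them" }],
  });
  console.log(result.messages.at(-1)?.content);

  await client.close();
}

main();
```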
```
fusion-mcp-hub/
├── src/
│ ├── app/ # Next.js App Router pages
│ │ ├── api/ # API routes
│ │ │ ├── composition/ # Smart composition endpoints
│ │ │ ├── vector-search/ # Vector search & reindex
│ │ │ ├── playground/ # Playground agent execution
│ │ │ └── auth/ # Authentication
│ │ ├── composition/ # Composition page
│ │ ├── playground/ # Interactive playground
│ │ ├── servers/ # Server gallery
│ │ └── validate/ # Configuration validator
│ ├── components/ # React components
│ │ ├── composition/ # Composition UI
│ │ ├── playground/ # Playground UI
│ │ └── ui/ # shadcn/ui components
│ ├── lib/ # Utilities and services
│ ├── services/ # Business logic
│ │ ├── workflowAnalysisService.ts # Intent analysis
│ │ ├── vectorSearchService.ts # Vector search
│ │ └── toolCompatibilityService.ts # Pattern matching
│ └── types/ # TypeScript types
├── bridges/ # MCP WebSocket bridge scripts
├── scripts/
│ └── deployment/ # Deployment utilities
├── public/ # Static assets
└── docs/                    # Documentation
```
Frontend: Next.js 15.5.6, TypeScript 5, Tailwind CSS v4, shadcn/ui, Framer Motion
Backend: Next.js API Routes, MongoDB, LanceDB, Redis (optional)
AI/ML: LangChain, LangGraph, Xenova Transformers, OpenAI/Anthropic/Watsonx models
Real-Time: WebSocket, Server-Sent Events (SSE), Proxy server for stdio bridges
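As an illustration of the SSE side (the endpoint name here is hypothetical; the repo's actual route lives under `src/app/api/playground/`), a browser can consume a streamed agent run with the standard `EventSource` API:

```typescript
// Hypothetical endpoint: substitute the playground's real SSE route.
const stream = new EventSource("/api/playground/stream?session=demo");

stream.onmessage = (event) => {
  // Each SSE message carries one chunk of agent output
  // (a reasoning step, tool call, or tool result).
  const chunk = JSON.parse(event.data);
  console.log(chunk);
};

stream.onerror = () => {
  stream.close(); // the server ended the stream or the connection dropped
};
```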
Development commands:

```bash
# Install dependencies
npm install

# Start dev server with hot reload
npm run dev

# Run linter
npm run lint

# Build for production
npm run build

# Start production server
npm start
```

Vector search auto-indexes on first request in serverless environments:

```bash
# Trigger reindex
curl -X POST http://localhost:3000/api/vector-search/reindex

# Check index stats
curl http://localhost:3000/api/vector-search
```

Deploying to Vercel:

- Push to GitHub
- Import the repository in Vercel
- Configure environment variables from `.env.production.example`
- Deploy!

Serverless notes:

- MongoDB: Use MongoDB Atlas (required for serverless)
- Vector Search: LanceDB uses ephemeral `/tmp` storage and auto-reindexes on cold starts
- Function Size: Routes using LanceDB may exceed the 250 MB limit (see `next.config.ts` for optimizations, sketched below)
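For reference, the kind of optimization such a `next.config.ts` applies usually looks like the following. This is an illustrative Next.js 15 config, not a verbatim copy of the repo's file; the package names are the LanceDB/Transformers dependencies the stack implies:

```typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Keep native/heavy packages out of the serverless bundle; they are
  // loaded from node_modules at runtime instead of being traced in.
  serverExternalPackages: ["@lancedb/lancedb", "@xenova/transformers"],

  // Drop large files from function traces to stay under size limits.
  outputFileTracingExcludes: {
    "/api/vector-search/**": ["./node_modules/onnxruntime-node/**"],
  },
};

export default nextConfig;
```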
- Advanced Recommendation System - Deep dive into AI-powered recommendations
- Vercel Deployment Guide - Troubleshooting and optimization
We welcome contributions! Whether you're fixing bugs, adding features, or improving documentation, your help is appreciated.
Please read our Contributing Guide to get started.
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and commit: `git commit -m 'feat: add amazing feature'`
- Push to your fork: `git push origin feature/amazing-feature`
- Open a Pull Request
- Follow TypeScript and React best practices
- Write meaningful commit messages (Conventional Commits)
- Add tests for new features
- Update documentation as needed
This project is licensed under the MIT License - see the LICENSE file for details.
We are committed to providing a welcoming and inclusive experience for everyone. Please read our Code of Conduct.
- Built with Next.js, TypeScript, and Tailwind CSS
- UI components from shadcn/ui
- Vector search powered by LanceDB and Xenova Transformers
- Agent orchestration with LangChain and LangGraph
- Inspired by the amazing Model Context Protocol community
- 🐛 Bug Reports: Open an issue
- 💡 Feature Requests: Start a discussion
- 📧 Contact: Open an issue for any questions
Built with ❤️ by the MCP community