# MCP Server Mode

Using MCPGuard as an MCP server for AI agents.

## Configuration

Add MCPGuard to your IDE's MCP configuration:

File: `~/.claude/mcp.jsonc` (or `%APPDATA%\Claude Code\User\globalStorage\mcp.jsonc` on Windows)
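A minimal configuration sketch (the server name, command, and args below are assumptions about how MCPGuard is installed; adjust them to your setup):

```jsonc
{
  "mcpServers": {
    "mcpguard": {
      // Assumes MCPGuard is runnable via npx; substitute your actual
      // install path or binary if it is installed differently.
      "command": "npx",
      "args": ["-y", "mcp-guard"]
    }
  }
}
```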
## Available MCP Tools

### Transparent Proxy Tools

When MCPGuard discovers your other MCPs, their tools become available with namespaced names:
- Schemas are loaded on-demand when tools are called
- All tool calls route through secure isolation
- Results returned transparently to the AI
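For example, tools from a connected GitHub MCP surface under a `github::` prefix (the tool names below are illustrative):

```
github::search_repositories
github::create_issue
filesystem::read_file
```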
MCP Prompts Support
MCPGuard also supports MCP Prompts - pre-defined message templates that appear as slash commands in your IDE (e.g., /mcpguard/github:AssignCodingAgent).
What are prompts? Prompts are read-only templates that return messages to inject into the chat context. Unlike tools (which execute actions), prompts simply provide pre-formatted text, making them useful for common workflows like "assign a coding agent to this issue" or "create a fix workflow."
Why don't prompts need worker isolation? Prompts don't execute any code - they just return pre-defined messages. This makes them safe by design, so MCPGuard proxies them directly to the underlying MCP without the overhead of worker isolation. Only tools that execute actions require the security of worker isolates.
Example prompts:
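The first entry below appears earlier in this section; the second is a hypothetical placeholder illustrating the naming pattern:

```
/mcpguard/github:AssignCodingAgent
/mcpguard/github:CreateFixWorkflow   (hypothetical)
```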
The `mcpguard/github:` prefix shows that these are GitHub MCP prompts routed through MCPGuard's transparent proxy.
### MCPGuard Management Tools
| Tool | Description |
|---|---|
| `call_mcp` | Call MCP tools by running TypeScript code in a secure sandbox (auto-connects MCPs if needed) |
| `guard` | Guard MCP servers by routing them through MCPGuard's secure isolation |
| `search_mcp_tools` | Discover which MCPs are configured in your IDE |
| `connect` | Manually connect to an MCP server |
| `list_available_mcps` | List all currently connected MCP servers |
| `get_mcp_by_name` | Find a connected MCP server by name |
| `get_mcp_schema` | Get the TypeScript API definition for a connected MCP |
| `disconnect` | Disconnect from an MCP server |
| `import_configs` | Import MCP configurations from IDE config files |
| `get_metrics` | Get performance metrics |
## Transparent Proxy Mode
By default, MCPGuard operates in transparent proxy mode:
- Discovers all MCPs configured in your IDE (even disabled ones)
- Lazy-loads tool schemas only when tools are actually called
- Routes all tool calls through secure Worker isolation
- Auto-loads MCPs when their tools are first used
### No Config Changes Needed
Once MCPGuard is running, all your existing MCP tool calls automatically go through secure isolation. The AI doesn't need to know about the isolation layer.
### Example Flow

When the AI calls `github::search_repositories`:
1. MCPGuard intercepts the namespaced call
2. If the GitHub MCP isn't loaded yet, it is loaded automatically
3. The call executes in a secure Worker isolate
4. Results return to the AI transparently
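The routing steps above can be sketched as follows (all names here are illustrative; this is not MCPGuard's actual implementation):

```typescript
// Illustrative sketch of transparent-proxy dispatch: lazy-load an MCP the
// first time one of its namespaced tools is called, then route the call
// through an isolation boundary. All names are hypothetical.
type ToolHandler = (args: Record<string, unknown>) => unknown;

const loadedMcps = new Map<string, Map<string, ToolHandler>>();

// Stand-in for connecting to a real MCP and reading its tool schemas.
function loadMcp(name: string): Map<string, ToolHandler> {
  const tools = new Map<string, ToolHandler>();
  if (name === "github") {
    tools.set("search_repositories", (args) => `results for ${String(args.query)}`);
  }
  loadedMcps.set(name, tools);
  return tools;
}

// Stand-in for executing the handler inside a secure Worker isolate.
function runIsolated(handler: ToolHandler, args: Record<string, unknown>): unknown {
  return handler(args);
}

function callNamespacedTool(namespaced: string, args: Record<string, unknown>): unknown {
  const [mcpName, toolName] = namespaced.split("::"); // e.g. "github::search_repositories"
  const tools = loadedMcps.get(mcpName) ?? loadMcp(mcpName); // auto-load on first use
  const handler = tools.get(toolName);
  if (!handler) throw new Error(`Unknown tool: ${namespaced}`);
  return runIsolated(handler, args); // results return to the caller transparently
}
```

Calling `callNamespacedTool("github::search_repositories", { query: "mcp" })` triggers the lazy load on first use and returns the handler's result transparently.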
## Direct Code Execution

For complex operations, use the `call_mcp` tool:
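A sketch of the kind of TypeScript you might pass to `call_mcp` (the `github` object below is a local stub standing in for the real sandbox binding; use `get_mcp_schema` to see the actual API):

```typescript
// Stub client for illustration only; in the real sandbox the binding is
// whatever get_mcp_schema reports for the connected GitHub MCP.
const github = {
  searchRepositories: (_query: string) =>
    Array.from({ length: 500 }, (_, i) => ({ name: `repo-${i}`, stars: i })),
};

// Do the heavy processing inside the sandbox...
const repos = github.searchRepositories("mcp");
const topNames = repos.filter((r) => r.stars > 490).map((r) => r.name);

// ...and return only a compact summary to the AI's context window.
const summary = { total: repos.length, topStarred: topNames };
```

The 500-item intermediate result never reaches the model; only `summary` does.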
Benefits:
- Process large datasets in the sandbox
- Return only summarized results
- Reduce context window usage by up to 98%
## Development Mode
For local development:
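A typical setup, assuming MCPGuard lives in a standard Node.js repository (the URL placeholder and script names are assumptions; check the project's README for the actual commands):

```shell
# Placeholder URL; clone the actual MCPGuard repository.
git clone https://github.com/<org>/mcp-guard.git
cd mcp-guard
npm install
npm run dev   # assumed script name for starting the local MCP server
```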
Then configure your AI agent to use the local server:
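For example (the path below is a placeholder; point it at your local checkout's entry script):

```jsonc
{
  "mcpServers": {
    "mcpguard": {
      // Placeholder path to your local build output.
      "command": "node",
      "args": ["/path/to/mcp-guard/dist/index.js"]
    }
  }
}
```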
## Best Practices

### Disable Other MCPs
For maximum efficiency and security, disable your other MCP servers in the IDE and let MCPGuard proxy them:
- MCPGuard still discovers disabled MCPs
- Tool calls route through secure isolation
- No duplicate tool loading in context window
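Many MCP clients support a per-server disable flag; the exact field name varies by IDE, so the `disabled` key below is illustrative:

```jsonc
{
  "mcpServers": {
    "mcpguard": { "command": "npx", "args": ["-y", "mcp-guard"] },
    "github": {
      "command": "github-mcp",
      // Field name varies by client; shown here illustratively.
      "disabled": true
    }
  }
}
```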
### Use Code Mode for Complex Tasks
Instead of multiple tool calls:
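For instance, a multi-step task might otherwise require several round trips, each dumping its full result into context (tool names are illustrative):

```
github::search_repositories  → full result list into context
github::get_repository       → full repo object into context
github::list_issues          → full issue list into context
```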
Use code mode:
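In code mode, all of those steps run in one sandboxed execution and only a small result returns. Here the `github` client is a stub; the real binding comes from the sandbox:

```typescript
// Stub client for illustration; real bindings are provided by the sandbox.
const github = {
  searchRepositories: (_q: string) => [{ name: "mcp-guard" }],
  listIssues: (_repo: string) => [
    { title: "Fix worker leak", open: true },
    { title: "Docs typo", open: false },
  ],
};

// All three steps happen inside the sandbox, in a single call_mcp run.
const repo = github.searchRepositories("mcp guard")[0];
const openIssues = github.listIssues(repo.name).filter((i) => i.open);

// Only this small object is sent back to the AI, not the intermediate payloads.
const result = { repo: repo.name, openIssueTitles: openIssues.map((i) => i.title) };
```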
### Monitor with Metrics

Check performance with the `get_metrics` tool:
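An illustrative output shape (field names are assumptions; the actual payload may differ):

```json
{
  "connectedMcps": 3,
  "toolCalls": 128,
  "avgCallLatencyMs": 42,
  "activeWorkerIsolates": 2
}
```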