Generate
Registry
Test
Agents 0
Agent Chat
Data (SQL)
Knowledge (RAG)
MCP Console
Settings

1 Describe your MCP tool

Pick a runtime, describe the tool in plain English, and the LLM will write the code. Each runtime has its own sandbox.

Example specifications

2 Generated Python code 0 lines

Code must define a TOOL dict and a run(args) function. Static analysis blocks unsafe calls.

Registered tools

Registration is metadata-only: the TOOL manifest is parsed via AST (zero code execution) and the source is stored. Python runs only when a tool is called — that's JIT. The LLM sees only the enabled tools.
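To make the expected shape concrete, here is a hypothetical minimal tool: a TOOL manifest dict plus a run(args) entry point. The field names shown (name, description, inputSchema) follow common MCP conventions; the studio's exact schema may differ.

```python
# Hypothetical minimal tool: a TOOL manifest the registry can parse
# via AST, plus a run(args) function executed only when the tool is
# actually called. Field names follow MCP conventions but are
# illustrative, not the studio's exact schema.
TOOL = {
    "name": "add_numbers",
    "description": "Add two numbers and return the sum.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "a": {"type": "number"},
            "b": {"type": "number"},
        },
        "required": ["a", "b"],
    },
}

def run(args):
    # args is the parsed JSON arguments object from a tools/call request
    return {"sum": args["a"] + args["b"]}
```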

📦 Docker cheatsheet — build & run the exported server

The export includes a production-ready Dockerfile + .dockerignore. After you download and unzip:

cd agent-mcp-server-YYYY-MM-DD
docker build -t agent-mcp-export .
docker run --rm -i \
  -e MCP_ALLOWED_HOSTS='api.github.com,*.githubusercontent.com' \
  agent-mcp-export
Claude Desktop config (mcpServers entry):
{
  "mcpServers": {
    "agent-mcp-studio-export": {
      "command": "docker",
      "args": ["run","--rm","-i","-e","MCP_ALLOWED_HOSTS=api.github.com","agent-mcp-export"]
    }
  }
}
docker tag agent-mcp-export your-registry/agent-mcp-export:v1
docker push your-registry/agent-mcp-export:v1
# Deploy to Fly.io, Railway, Render, Cloud Run, ECS, etc.

The image defaults to stdio. For HTTP deployment, see the README.md in the export.

Call a tool

Result

(no call yet)

Agent playground

OpenAI

Real OpenAI function-calling loop. The model may call multiple tools across multiple turns before answering.


🦆 DuckDB data sources

Upload CSV or Parquet files. Each becomes a queryable DuckDB table. SQL tools can query these tables at runtime.

Ad-hoc query

Try queries directly; the SQL tool generator will see these table names.

(nothing yet)
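As an illustration of the kind of query a generated SQL tool would run against an uploaded table, here is a self-contained sketch. It uses the stdlib sqlite3 module as a stand-in so it runs anywhere; in the studio the same SQL executes in DuckDB-WASM, and the table and column names are made up.

```python
import sqlite3

# Sketch of a grouped aggregate a SQL tool might run over an
# uploaded table. sqlite3 stands in for DuckDB so the example is
# self-contained; the "sales" table is illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("west", 250.0), ("east", 50.0)],
)

rows = con.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY region"
).fetchall()
# rows == [("east", 150.0), ("west", 250.0)]
```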

🧠 Local embeddings + RAG

Loads Xenova/all-MiniLM-L6-v2 (~23 MB, ONNX) via Transformers.js. Embeds documents locally in-browser. Auto-registers a semantic_search MCP tool the LLM can call.

Test search

Local cosine similarity. No network call.

(no search yet)
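The core of such a search is plain cosine similarity over stored vectors. A minimal sketch, using tiny 2-D toy vectors in place of MiniLM's 384-dimensional embeddings:

```python
import math

# Minimal cosine-similarity ranking: embed the query, score every
# stored document vector, return the best matches. Toy 2-D vectors
# stand in for real MiniLM embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec, doc_vecs, top_k=2):
    scored = [(cosine(query_vec, v), doc_id) for doc_id, v in doc_vecs.items()]
    return sorted(scored, reverse=True)[:top_k]

docs = {"a": [1.0, 0.0], "b": [0.7, 0.7], "c": [0.0, 1.0]}
top = search([1.0, 0.1], docs)
# "a" ranks first: it points almost exactly along the query vector
```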

🧠 Agentic layer · pick a strategy

Pick how your experts collaborate. The graph below adapts to the topology. Personas auto-register as MCP tools (ask_<name>) so Claude Desktop can call them through the bridge.

🎯 Supervisor
A router LLM picks the single best expert and delegates.
user → router → 1 expert
🧩 Mixture of Experts
Fan out to multiple experts in parallel, then synthesize their answers.
user → all experts → synth
⛓️ Sequential Pipeline
Each expert hands its output to the next in a fixed order.
user → A → B → C
📋 Plan & Execute
Planner LLM breaks the task into steps, dispatches each to the right expert.
user → planner → workers
🐝 Swarm
Each expert can hand off to any other; the conversation flows organically.
expert ↔ expert (mesh)
⚖️ Debate
Two contestants argue opposing positions; a judge rules. Great for decisions.
A vs B → judge
🪞 Reflection
Actor drafts, critic reviews, actor revises — looped until critic approves.
actor ⇌ critic
🏛️ Hierarchical
A top-level manager splits the work and delegates to specialist sub-experts as needed.
manager → workers
🔄 Round-Robin
Each persona contributes in turn over N rounds; moderator summarizes.
A → B → C → … → mod
🗺️ Map-Reduce
Splitter decomposes the task; workers run in parallel; aggregator merges.
split → parallel → aggregate
Supervisor: a router agent reads the user's message and delegates to one expert.
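The Supervisor topology reduces to a small control flow: route once, then let a single expert answer. A minimal sketch, where the router is a trivial keyword matcher standing in for an LLM routing call, and the expert names are made up:

```python
# Sketch of the Supervisor strategy: user -> router -> 1 expert.
# The router here is a keyword matcher standing in for an LLM call;
# expert names and replies are illustrative.
EXPERTS = {
    "sql_analyst": lambda q: f"[sql_analyst] querying tables for: {q}",
    "researcher": lambda q: f"[researcher] searching documents for: {q}",
}

def route(question):
    # A real router LLM would return one expert name; keywords fake it.
    return "sql_analyst" if "table" in question.lower() else "researcher"

def supervisor(question):
    expert = route(question)          # user -> router
    return EXPERTS[expert](question)  # router -> exactly one expert

answer = supervisor("Which table has the most rows?")
```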

Tools (drag onto a persona)


Inspector

Select a persona to edit it, or click + New Persona.

MCP JSON-RPC 2.0 console

This simulates the MCP protocol. Every tool call through the registry is logged here in JSON-RPC format, the same shape a real MCP server would emit.
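For reference, a tools/call exchange in that JSON-RPC 2.0 shape looks like the sketch below. The tool name and arguments are illustrative; the envelope fields (jsonrpc, id, method, params, result.content) follow the MCP wire format.

```python
import json

# Shape of a tools/call exchange as the console logs it: a JSON-RPC
# 2.0 request and its paired result. Tool name and arguments are
# illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "add_numbers", "arguments": {"a": 2, "b": 3}},
}
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {"content": [{"type": "text", "text": "5"}]},
}
wire = json.dumps(request)  # what actually goes over the wire
```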

Chat backend

Pick where agent chat + tool generation run. OpenAI is the default. Local runs a small WASM LLM in your browser with no data leaving your machine — no key required.

OpenAI configuration

Your key is stored in this browser's localStorage only. It is never sent anywhere except api.openai.com.

Runtime

Warm: first call compiles the tool into Pyodide; subsequent calls reuse the compiled function.
Cold: every call exec's the source in a fresh namespace — fully deterministic, zero reuse.

About

Runtime: Pyodide (CPython compiled to WebAssembly).
Protocol: simulated MCP (Model Context Protocol) JSON-RPC 2.0 — initialize, tools/list, tools/call.
Registration: AST-parsed manifest, source stored lazily. No Python runs until a tool is invoked.
Sandbox: each tool exec's in an isolated namespace. Static analysis blocks eval, exec, __import__, open, network, subprocess, and DOM access.
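Metadata-only registration of this kind can be sketched with the stdlib ast module: parse the source, find the TOOL assignment, and evaluate only that dict literal. ast.literal_eval accepts literals alone, so no code in the source ever runs. The helper name and sample source are illustrative.

```python
import ast

# Sketch of zero-execution registration: parse the source with ast,
# locate the TOOL assignment, and literal_eval just that dict.
# literal_eval rejects anything but literals, so nothing executes.
SOURCE = '''
TOOL = {"name": "greet", "description": "Say hello"}
def run(args):
    return "hello " + args["who"]
'''

def extract_manifest(source):
    tree = ast.parse(source)
    for node in tree.body:
        if (isinstance(node, ast.Assign)
                and any(getattr(t, "id", None) == "TOOL" for t in node.targets)):
            return ast.literal_eval(node.value)  # literals only
    raise ValueError("no TOOL manifest found")

manifest = extract_manifest(SOURCE)
```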

Reset everything

🌐 Tool network access

Python tools can call external APIs via the injected http_request() helper — but only to hosts on this allowlist. Leave empty to block all network calls from tools. Wildcards supported: *.openai.com, *.github.io. One host per line.
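One plausible way to implement such an allowlist with wildcard support is stdlib fnmatch over the URL's hostname; this is a sketch of the gating logic, not the studio's actual helper.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Sketch of allowlist gating for outbound requests: extract the
# host and match it against each pattern. Wildcards like
# *.githubusercontent.com work via shell-style matching; an empty
# allowlist blocks everything.
ALLOWED = ["api.github.com", "*.githubusercontent.com"]

def host_allowed(url, allowed=ALLOWED):
    host = urlparse(url).hostname or ""
    return any(fnmatch(host, pattern) for pattern in allowed)
```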

CORS note

Browser fetch is subject to the target server's CORS policy. APIs that return Access-Control-Allow-Origin work directly; those that don't will fail with a network error — that's a server decision, not a bug here. Most public APIs (GitHub, OpenAI, weather.gov, MDN, JSONPlaceholder, etc.) allow CORS. For private APIs you control, add CORS headers server-side.

📦 Project pack (export & import)

Bundle your tools, personas, strategy, and required external services into a single shareable JSON file. API keys and secrets are never included — only the schema of what credentials are needed. The recipient is prompted to fill them in on import.

🔐 Connected services

Credentials your tools use via os.environ.get(...). Stored only in this browser's localStorage. Add manually here, or let the import wizard collect them when you install a project pack.

🌉 MCP Bridge (expose browser tools to Claude Desktop)

Run the bridge script on your machine. It speaks MCP over stdio to Claude Desktop and WebSocket to this browser tab. Your tools then appear inside Claude Desktop as if they were a real MCP server.

Setup instructions
1. Save bridge.js from this project folder (it's alongside index.html).
2. Install the WebSocket dependency:   npm install ws
3. In Claude Desktop settings, add an MCP server:
     Command: node
     Args:    /absolute/path/to/bridge.js
4. Open this page in your browser and click Connect.
5. Restart Claude Desktop — your browser tools are now listed.

Under the hood: Claude → stdio → bridge.js → WebSocket → this tab → Pyodide / DuckDB / RAG.
        
Powered by WebAssembly

Welcome to Agent MCP Studio, your AI playground.

Build MCP tools in your browser. Organize them into expert personas. Orchestrate them with 10 collaboration strategies. Export as a real Python MCP server. Pick how you'd like to power your agents:

☁️
Use OpenAI
requires API key
Best quality and tool-use reliability. Your key stays in this browser only.
⏭️
Skip for now
decide later
Explore the studio first. You can configure a backend any time from Settings.
First time here? Take the 60-second tour after you pick.