Build an MCP server from your FastAPI app in 10 minutes
Expose your internal APIs and agents as Model Context Protocol tools that Claude, Cursor, Zed, and other MCP clients can call — without writing a separate server.
MCP — the Model Context Protocol — is how modern AI tools (Claude Desktop, Cursor, Zed, Continue, Codeium, Windsurf, and basically every new IDE-integrated assistant) call external capabilities. If your team ships internal APIs, turning them into MCP tools is usually the fastest way to let your whole engineering org use them from inside their assistants.
The conventional advice is: “stand up a separate MCP server.” That’s wrong. If you already have a FastAPI app, you don’t need a second server — you need a second surface on the first one. FastAgentic gives you exactly that.
The goal
Expose a FastAPI endpoint as both a REST route and an MCP tool, with the same schema, zero duplication, in under ten minutes.
Step 1: install
```bash
pip install 'fastagentic[mcp]'
```
Step 2: define the endpoint once
```python
# app.py
from fastagentic import App, agent_endpoint
from pydantic import BaseModel

app = App(title="DevTools")

class SearchQuery(BaseModel):
    query: str
    limit: int = 10

class SearchResult(BaseModel):
    results: list[dict]

@agent_endpoint("/search-incidents", tags=["ops"])
async def search_incidents(q: SearchQuery) -> SearchResult:
    """Search internal incident reports by keyword."""
    # `db` stands in for your existing async data-access layer
    hits = await db.search("incidents", q.query, limit=q.limit)
    return SearchResult(results=hits)
```
That’s it. You now have:
- `POST /search-incidents` — a standard REST endpoint with OpenAPI docs at `/docs`.
- An MCP tool named `search_incidents` with the same input and output schema.
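Because both surfaces are generated from the same Pydantic models, the tool's input schema is just `SearchQuery`'s JSON Schema. You can sanity-check it locally, no server required:

```python
from pydantic import BaseModel

# Same model the endpoint uses; `limit` carries a default.
class SearchQuery(BaseModel):
    query: str
    limit: int = 10

q = SearchQuery(query="database timeout")
print(q.model_dump())

schema = SearchQuery.model_json_schema()
# 'required' lists only 'query', because 'limit' has a default
print(schema["required"])
```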
Step 3: run it
```bash
fastagentic run --reload
```
You’ll see two surfaces come up:
```
✓ REST server listening on http://localhost:8000
✓ MCP server listening on http://localhost:8000/mcp
✓ A2A skills advertised at http://localhost:8000/.well-known/agent.json
```
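To see what a client will discover, send an MCP `tools/list` request (a JSON-RPC 2.0 call, after the standard `initialize` handshake) to the `/mcp` endpoint. The response should look roughly like this (abridged; the exact schema details depend on how FastAgentic serializes the Pydantic models):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_incidents",
        "description": "Search internal incident reports by keyword.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": { "type": "string" },
            "limit": { "type": "integer", "default": 10 }
          },
          "required": ["query"]
        }
      }
    ]
  }
}
```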
Step 4: connect Claude Desktop
Add this to your Claude Desktop config:
```json
{
  "mcpServers": {
    "devtools": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```
Restart Claude Desktop. Type: “Search incidents for ‘database timeout’.” Claude will discover the `search_incidents` tool, call it, and render the result.
Step 5: authenticate
In production, you don’t want anonymous MCP access to internal APIs. FastAgentic supports OAuth2/OIDC on the MCP surface with the same config as the REST surface:
```python
app = App(
    title="DevTools",
    auth="oidc",
    oidc_issuer="https://id.neullabs.com",
)
```
MCP clients negotiate tokens via OAuth2 Dynamic Client Registration (RFC 7591). Claude Desktop supports this natively.
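Under RFC 7591, the client POSTs its metadata to the registration endpoint advertised in the issuer's OIDC discovery document and gets a `client_id` back. The request body looks roughly like this (field names come from the RFC; the redirect URI is whatever loopback port the client happens to pick):

```json
{
  "client_name": "Claude Desktop",
  "redirect_uris": ["http://127.0.0.1:33418/callback"],
  "grant_types": ["authorization_code"],
  "token_endpoint_auth_method": "none"
}
```

From there the client runs a normal authorization-code flow against your OIDC provider, so no per-user client setup is needed on your side.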
Step 6: resources and prompts
MCP has three primitive types: tools, resources, and prompts. FastAgentic has decorators for all three:
```python
from fastagentic import resource, prompt

@resource("incidents://{id}")
async def get_incident(id: str) -> str:
    """Full incident markdown."""
    return await db.fetch_markdown(id)

@prompt("post-mortem")
async def post_mortem_prompt(incident_id: str) -> str:
    """Template for writing a post-mortem from an incident."""
    return f"Write a post-mortem for incident {incident_id}..."
```
These are automatically advertised over MCP alongside your tools.
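Over the wire, a client reads a resource with a `resources/read` JSON-RPC call; the URI template above means any `incidents://{id}` URI routes to `get_incident`. A sketch (the incident id is made up):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "incidents://1042" }
}
```

The result comes back as a `contents` array whose entries carry the `uri`, a `mimeType`, and the `text` your handler returned.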
Step 7: deploy
Point your clients at a production URL instead of localhost. Any ASGI host works — Docker, Kubernetes, CapRover, Fly, etc. See the deployment guide for specifics.
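A minimal container image is often enough. This sketch assumes the `fastagentic run` CLI from step 3 accepts standard `--host`/`--port` flags (check your version's `--help`):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir 'fastagentic[mcp]'
EXPOSE 8000
# Bind to 0.0.0.0 so the container's port mapping works
CMD ["fastagentic", "run", "--host", "0.0.0.0", "--port", "8000"]
```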
Why this beats standalone MCP servers
- One codebase. Changes to the REST API automatically propagate to MCP.
- One auth story. Your existing OIDC provider works everywhere.
- One deployment. No second service, no second DNS record, no second security review.
- One audit log. Every call — REST or MCP — flows through the same logging pipeline.
- One cost tracker. MCP calls count against the same tenant budgets as REST calls.
If you’re already running FastAPI, you are ten minutes away from having an MCP server. Don’t over-engineer it.
Need FastAPI, LangGraph, or agent platform expertise?
Neul Labs — the team behind FastAgentic — takes on a limited number of consulting engagements each quarter. We help teams ship agents to production, fix broken LangGraph pipelines, and design governance for multi-tenant LLM platforms.