MCP Server for AI Agents

A code knowledge graph your AI can actually navigate.

CodeGraph indexes your codebase into a queryable graph — functions, classes, relationships, knowledge — and exposes it through MCP tools your AI agent can use to search, understand, and remember.

0.969
MRR (internal)
4
MCP persona tools
35+
languages
Claude Desktop · Graph Explorer panel

AI agents are flying blind in your codebase

Embeddings miss structure

Vector search returns lexically similar tokens, not the call graph or import chain that explains how code actually fits together.

Grep misses semantics

String match doesn't know what a symbol means or which call site is the relevant one. Your agent reads ten files to answer one question.

Context windows lose the thread

Large codebases blow past the limit. Sessions lose architectural understanding mid-task and re-read the same files every time.

Without CodeGraph
grep -r 'processPayment' ./src
? callers unknown
? what does this depend on
? no structural answer
With CodeGraph
search({ action: 'context', symbol: 'processPayment' })
callers: 3 (checkout.ts, orders.ts, subscriptions.ts)
callees: 7 (validateCard, ledger.write, ...)
linkedKnowledge: 2 entities, 1 SAID fact

Parse. Graph. Query.

Three pipelines, one graph. Your AI agent queries it through MCP tools.

Step 01

Parse

Tree-sitter extracts every function, class, type, and import. 5 tier-1 plugins (TypeScript, Python, Go, Rust, Markdown) plus generic coverage for ~30 more languages.

Terminal
$ pnpm --filter @codegraph/cli start extract ./src
Step 02

Graph

Nodes and edges land in FalkorDB (Docker) or FalkorDBLite (embedded, no Docker). Vector indexes for semantic search; structural edges for CALLS, IMPORTS, EXTENDS, IMPLEMENTS.

FalkorDB
Sample relationships:
  • (:Function) -[:CALLS]-> (:Function)
  • (:File) -[:IMPORTS]-> (:File)
  • (:Class) -[:EXTENDS]-> (:Class)
  • (:Function) -[:ABOUT]-> (:Entity)
  • (:Person) -[:SAID]-> (:Fact)
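These edges can also be queried directly with Cypher. A hedged sketch of the kind of query an agent might assemble against the schema above (the `name` property is an assumption about the node shape, and the helper below only builds the query string):

```typescript
// Build a Cypher query over the schema above: who calls `symbol`?
// Illustrative only; property names like `name` are assumed, not
// taken from a published CodeGraph schema reference.
function callersOf(symbol: string): string {
  return [
    `MATCH (caller:Function)-[:CALLS]->(f:Function {name: '${symbol}'})`,
    `RETURN caller.name`,
  ].join("\n");
}

console.log(callersOf("processPayment"));
```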
Step 03

Query

Four MCP persona tools — search, knowledge, codebase, query — give your AI agent vector search, knowledge recall, project management, and raw Cypher. Cross-encoder reranking and graph enrichment included.

MCP Tool Call
search({ action: "context", symbol: "processPayment" })


Everything an agent needs to navigate your codebase.

Six capabilities, one graph. Each one exposed through MCP, the AI SDK, or both.

Search pipeline

Vector embeddings → cross-encoder reranking → graph enrichment. Returns symbols with their callers, callees, complexity, and linked knowledge.

search({
  action: "find",
  query: "authentication"
})
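The pipeline order above can be sketched in miniature. The types and the scoring function here are illustrative stand-ins, not the real implementation (the actual second stage is a Jina or Voyage cross-encoder):

```typescript
// Toy sketch of the pipeline order: vector-search candidates come in,
// then a cross-encoder-style rescoring pass re-orders them.
type Hit = { symbol: string; vectorScore: number };

function rerank(hits: Hit[], crossScore: (h: Hit) => number): Hit[] {
  // Sort descending by the second-stage score, leaving the input untouched.
  return [...hits].sort((a, b) => crossScore(b) - crossScore(a));
}

const candidates: Hit[] = [
  { symbol: "login", vectorScore: 0.91 },
  { symbol: "authenticate", vectorScore: 0.88 },
];

// Pretend the cross-encoder prefers the exact-concept match.
const top = rerank(candidates, (h) => (h.symbol === "authenticate" ? 1 : 0));
console.log(top[0].symbol); // "authenticate"
```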

Bitemporal knowledge

Every fact carries valid_at and invalid_at. Query the graph as it existed on a past date, see full timelines, watch supersession happen.

knowledge({
  action: "recall",
  text: "AuthModule",
  at: "2026-03-01T00:00:00Z"
})
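The valid_at/invalid_at semantics behind that `at` parameter can be sketched as a point-in-time filter (the fact shape and helper name are illustrative, not the stored schema):

```typescript
// Bitemporal visibility sketch: a fact is visible at time `at` if it
// became valid on or before `at` and has not been invalidated by then.
// ISO-8601 UTC strings compare correctly as plain strings.
type Fact = { text: string; valid_at: string; invalid_at?: string };

function visibleAt(facts: Fact[], at: string): Fact[] {
  return facts.filter(
    (f) => f.valid_at <= at && (!f.invalid_at || at < f.invalid_at)
  );
}

const facts: Fact[] = [
  { text: "AuthModule uses JWT", valid_at: "2025-01-01T00:00:00Z", invalid_at: "2026-02-01T00:00:00Z" },
  { text: "AuthModule uses sessions", valid_at: "2026-02-01T00:00:00Z" },
];

// Querying as of March 2026 returns only the superseding fact.
console.log(visibleAt(facts, "2026-03-01T00:00:00Z").map((f) => f.text));
// [ "AuthModule uses sessions" ]
```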

Document ingestion

Drop a PDF, DOCX, HTML, CSV, or URL into knowledge.add(). It chunks, embeds, extracts entities, and links them into the same graph as the code.

knowledge({
  action: "add",
  input: "/path/to/spec.pdf"
})

Speaker entities

Ingest a multi-turn conversation; CodeGraph creates Person nodes with SAID edges to facts. Ask 'what has Alice said about retries?' and get an answer.

knowledge({
  action: "ingest_conversation",
  text: "Alice: let's use Redis...",
  source: "standup"
})

MCP App UI panel

The graph_explorer MCP tool ships as an App UI panel that renders the Graph Explorer canvas inside Claude Desktop or Cursor — interactive, in-conversation.

// Surfaced automatically when CodeGraph
// is configured as an MCP server

Drop-in middleware

Wrap any Vercel AI SDK model with withCodeGraph(); register a Mastra processor with createCodeGraphProcessor(). Your existing agent gets graph-aware context.

import { withCodeGraph }
  from "@codegraph/tools/vercel"

const model = withCodeGraph(
  openai("gpt-4o")
)
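What this kind of middleware does can be shown in miniature: intercept the prompt, fetch graph context for it, and prepend that context before the model sees it. All names below are illustrative stand-ins, not the real withCodeGraph API:

```typescript
// Minimal sketch of graph-aware middleware, assuming nothing beyond
// a model being a prompt-to-text function.
type Model = (prompt: string) => string;

function withGraphContext(
  model: Model,
  fetchContext: (prompt: string) => string
): Model {
  // Return a wrapped model that sees graph context before the prompt.
  return (prompt) => model(`${fetchContext(prompt)}\n\n${prompt}`);
}

// Toy model and toy context fetcher to show the wiring:
const echoModel: Model = (p) => p;
const wrapped = withGraphContext(
  echoModel,
  () => "[graph] processPayment callers: 3, callees: 7"
);

console.log(wrapped("Where is processPayment called?"));
```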

Drops into your existing setup.

MCP for hosts; middleware for AI SDKs; hook scripts for Claude Code.

claude_desktop_config.json
{
  "mcpServers": {
    "codegraph": {
      "command": "node",
      "args": ["~/codebase-graph/packages/mcp-server/dist/index.js"],
      "env": { "CODEGRAPH_DRIVER": "embedded" }
    }
  }
}

MCP server is not yet published to npm — paths above point at a local build of the repo. When @codegraph/mcp ships, these snippets will use npx @codegraph/mcp instead.
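For the Claude Code hook path, a settings fragment along these lines could re-index after each edit. The matcher and command are illustrative only; CodeGraph's actual hook script is not shown here, so this reuses the extract command from the Parse step:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "pnpm --filter @codegraph/cli start extract ./src"
          }
        ]
      }
    ]
  }
}
```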

Architecture

FalkorDB-backed graph, tree-sitter parsers, pluggable embeddings and reranker, MCP at the edge.

Your codebase
any language
Tree-sitter parser
5 tier-1 + ~30 generic
FalkorDB / Lite
vector + graph
MCP router
4 persona tools
Your AI host
Claude, Cursor, etc.

Stack

  • FalkorDB (Docker) or FalkorDBLite (embedded — no Docker, requires redis-server).
  • Embeddings: Voyage, OpenRouter, or local @huggingface/transformers (nomic-embed-text-v1.5).
  • Reranker: Jina or Voyage. Cross-encoder, MRR-aware.
  • Optional cloud APIs are pluggable; the embedded path is fully local.

Internal benchmark

0.969
MRR
94%
S@1
100%
S@5
447ms
Latency

2,310-node test set, v6 Chunk 1 baseline (2026-04-26). See CGBench v1 for methodology and results.
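For reference, MRR (mean reciprocal rank) averages 1/rank of the first correct result across queries; a perfect system scores 1.0. A minimal sketch of the computation (not the CGBench harness itself):

```typescript
// MRR: for each benchmark query, take 1 / (rank of first correct
// result), then average across all queries.
function mrr(firstCorrectRanks: number[]): number {
  const sum = firstCorrectRanks.reduce((acc, rank) => acc + 1 / rank, 0);
  return sum / firstCorrectRanks.length;
}

// Three queries with the correct hit at ranks 1, 1, and 2.
console.log(mrr([1, 1, 2])); // 0.8333...
```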

  • 5 tier-1 languages + ~30 via tree-sitter
  • MRR 0.969 on internal benchmark
  • Bitemporal knowledge graph
  • MCP App UI panel
  • Vercel AI SDK + Mastra middleware
  • MIT licensed