
Graphiti MCP Server

Overview

The Graphiti MCP server provides knowledge graph operations for entity extraction, relationship mapping, and semantic search. It is built on FalkorDB (a Redis-compatible graph database) and integrates with the Aegis memory systems for persistent learning.

Tool Prefix: mcp__graphiti__

Backend: FalkorDB on localhost:6379

Embeddings: Ollama nomic-embed-text (384 dimensions)

LLM: GLM-4.7 via Z.ai (for entity extraction)

Core Concepts

Episodes

Episodes are the primary input to Graphiti. Each episode contains:

- Name: Unique identifier
- Content: The actual text/data
- Source Description: Where it came from
- Timestamp: When it occurred

Episodes are automatically processed to extract:

- Entities: Named objects (people, places, concepts, errors)
- Relationships: Connections between entities
- Facts: Discrete statements about entities

Entities

Extracted from episode content. Types include Person, Organization, Location, Concept, Error, Tool, Technology, and Event.

Relationships

Connections between entities with semantic meaning:

- deployed_to (service → environment)
- caused_by (error → action)
- implemented_by (feature → person)
- integrates_with (service → service)
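Because FalkorDB speaks Cypher over the Redis protocol, these triples can also be inspected directly. A minimal sketch, assuming the graph lives under the key "graphiti" and entity nodes expose a `name` property (both are assumptions about this deployment, not documented behavior):

```python
def relationship_query(rel_type: str) -> str:
    """Build a Cypher pattern matching every (source)-[rel]->(target) pair."""
    return f"MATCH (s)-[:{rel_type}]->(t) RETURN s.name, t.name"

# Usage against a running FalkorDB instance (requires the redis package):
#   import redis
#   r = redis.Redis(host="localhost", port=6379)
#   rows = r.execute_command("GRAPH.QUERY", "graphiti",
#                            relationship_query("deployed_to"))
```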

Facts

Standalone statements stored as nodes:

- "Nginx deployed to production on 2026-01-25"
- "ModuleNotFoundError occurs during dashboard startup"
- "GLM-4.7 provides 90% of Aegis LLM operations"

Operations

add_episode

mcp__graphiti__add_episode(
    name: str,
    content: str,
    source_description: str,
    source_type: str = "general",
    timestamp: str = None
)

Add an episode to the knowledge graph. Automatically extracts entities, relationships, and facts.

Episode Types:

- decision - Strategic/architectural decisions
- implementation - Code changes, deployments
- error - Errors and resolutions
- learning - Lessons learned, insights
- milestone - Significant achievements
- research - Research findings
- general - Miscellaneous events

Example:

await mcp__graphiti__add_episode(
    name="nginx-deployment-2026-01-25",
    content="""
    Deployed nginx service to production environment.
    Used Docker Compose with custom configuration.
    Service is accessible at https://aegis.rbnk.uk.
    No errors during startup.
    """,
    source_description="deployment-log",
    source_type="implementation"
)

What Gets Extracted:

Entities:
  - nginx (Technology)
  - production (Environment)
  - Docker Compose (Tool)
  - aegis.rbnk.uk (Location)

Relationships:
  - nginx → deployed_to → production
  - nginx → uses → Docker Compose
  - nginx → accessible_at → aegis.rbnk.uk

Facts:
  - "nginx service deployed on 2026-01-25"
  - "Deployment completed without errors"

search_nodes

mcp__graphiti__search_nodes(
    query: str,
    limit: int = 10,
    entity_types: list = None
)

Semantic search for entities using embeddings.

Example:

# Search for deployment-related entities
results = await mcp__graphiti__search_nodes(
    query="nginx deployment production",
    limit=5,
    entity_types=["Technology", "Environment"]
)

Returns:

{
  "nodes": [
    {
      "name": "nginx",
      "entity_type": "Technology",
      "summary": "Web server deployed to production",
      "created_at": "2026-01-25T10:00:00Z",
      "similarity": 0.89
    },
    {
      "name": "production",
      "entity_type": "Environment",
      "summary": "Production deployment environment",
      "similarity": 0.82
    }
  ]
}

search_facts

mcp__graphiti__search_facts(
    query: str,
    limit: int = 10,
    after_date: str = None
)

Semantic search for facts, optionally restricted to those recorded after a given date.

Example:

# Find facts about errors
facts = await mcp__graphiti__search_facts(
    query="ModuleNotFoundError import error",
    limit=10,
    after_date="2026-01-20"
)

Returns:

{
  "facts": [
    {
      "fact": "ModuleNotFoundError occurred during dashboard startup",
      "episode_name": "dashboard-error-2026-01-25",
      "created_at": "2026-01-25T08:30:00Z",
      "similarity": 0.91
    },
    {
      "fact": "Import error resolved by fixing module path",
      "episode_name": "dashboard-fix-2026-01-25",
      "similarity": 0.85
    }
  ]
}

get_entity

mcp__graphiti__get_entity(entity_name: str)

Get detailed information about an entity including all relationships.

Example:

entity = await mcp__graphiti__get_entity("nginx")

Returns:

{
  "entity": {
    "name": "nginx",
    "entity_type": "Technology",
    "summary": "Web server used for reverse proxy",
    "created_at": "2026-01-20T10:00:00Z",
    "updated_at": "2026-01-25T10:00:00Z",
    "observations": 15,
    "relationships": [
      {
        "type": "deployed_to",
        "target": "production",
        "created_at": "2026-01-25T10:00:00Z"
      },
      {
        "type": "uses",
        "target": "Docker Compose",
        "created_at": "2026-01-20T12:00:00Z"
      }
    ]
  }
}

get_related_entities

mcp__graphiti__get_related_entities(
    entity_name: str,
    relationship_type: str = None,
    max_depth: int = 2
)

Get entities related to a given entity, optionally filtered by relationship type.

Example:

# Find all services deployed to production
related = await mcp__graphiti__get_related_entities(
    entity_name="production",
    relationship_type="deployed_to",
    max_depth=1
)

Integration with Aegis

Transcript Digester

The Transcript Digester automatically ingests Claude session history and journals into Graphiti:

from aegis.memory import TranscriptDigester

digester = TranscriptDigester()
await digester.initialize()

# Ingest all sources
results = await digester.ingest_all()

# Or selectively
await digester.ingest_claude_history()
await digester.ingest_journals(after_date="2026-01-01")

Sources:

- ~/.claude/history.jsonl - Claude Code session commands
- ~/memory/journal/*.md - Daily journal entries

Processing:

- Groups session commands by session ID
- Parses journal sections (## headings)
- Classifies episode types (decision, implementation, error, etc.)
- Extracts entities, relationships, and facts

Cron Job: Runs daily at 3:00 AM UTC
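The digester's classification step is not shown here; a keyword heuristic along these lines would cover the documented episode types (the keyword lists are illustrative, not the actual implementation):

```python
# Illustrative keyword tables -- checked in priority order.
EPISODE_TYPE_KEYWORDS = {
    "error": ["error", "exception", "traceback", "failed"],
    "decision": ["decided", "decision", "chose", "architecture"],
    "implementation": ["implemented", "deployed", "added", "refactored"],
    "learning": ["learned", "lesson", "insight"],
    "milestone": ["milestone", "shipped", "released"],
    "research": ["research", "investigated", "compared"],
}

def classify_episode(content: str) -> str:
    """Return the first episode type whose keywords appear in the content."""
    lowered = content.lower()
    for episode_type, keywords in EPISODE_TYPE_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return episode_type
    return "general"  # matches the documented default source_type
```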

Error Tracker

Error tracking integrates with Graphiti for pattern detection:

from aegis.memory.error_tracker import ErrorTracker

tracker = ErrorTracker()
await tracker.initialize()

# Record error (creates episode + entities)
await tracker.record_error(
    error_type="ModuleNotFoundError",
    context="Dashboard startup",
    error_message="No module named 'aegis.workflows.storage'",
    resolution="Fixed import path"
)

# Search for similar errors
similar = await tracker.find_similar_errors(
    query="ModuleNotFoundError import",
    limit=5
)

Morning Routine

Knowledge graph queries are wired into the morning routine:

# In /home/agent/.claude/commands/morning.md
# Query recent decisions
decisions = await mcp__graphiti__search_facts(
    query="architectural decision strategic",
    after_date="2026-01-18"
)

# Query recent errors
errors = await mcp__graphiti__search_nodes(
    query="error critical failure",
    entity_types=["Error"]
)

Query Patterns

Find Deployment History

# Get deployment-related episodes
deployments = await mcp__graphiti__search_facts(
    query="deployed service production",
    after_date="2026-01-01"
)

# Get services deployed to production
services = await mcp__graphiti__get_related_entities(
    entity_name="production",
    relationship_type="deployed_to"
)

Error Pattern Detection

# Find recurring errors
errors = await mcp__graphiti__search_nodes(
    query="ModuleNotFoundError ConnectionError",
    entity_types=["Error"]
)

# Get error resolution history
for node in errors["nodes"]:
    entity = await mcp__graphiti__get_entity(node["name"])
    # Look for "resolved_by" relationships

Knowledge Exploration

# Explore a concept's relationships
concept = await mcp__graphiti__get_entity("HTN planning")
related = await mcp__graphiti__get_related_entities(
    "HTN planning",
    max_depth=2
)

Time-Based Queries

# What happened in the last week?
recent = await mcp__graphiti__search_facts(
    query="",  # No query = get all
    after_date="2026-01-18"
)

Performance Considerations

Embedding Generation

Embeddings are generated using Ollama (local, no rate limits):

- Model: nomic-embed-text
- Dimensions: 384
- Speed: ~500ms per episode

Entity Extraction

Entity extraction uses GLM-4.7 (Z.ai):

- Rate limit: ~8 requests/minute
- Cost: minimal (~$0.0001 per episode)
- Fallback: heuristic extraction if the LLM is unavailable
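With a ~8 requests/minute ceiling, extraction calls need pacing. A minimal asyncio limiter sketch (the limit value comes from the figure above; wiring it into the extraction path is left to the caller):

```python
import asyncio
import time

class RateLimiter:
    """Space out calls so no more than `per_minute` start in any minute."""

    def __init__(self, per_minute: float):
        self.interval = 60.0 / per_minute  # seconds between call starts
        self._last = 0.0
        self._lock = asyncio.Lock()

    async def wait(self) -> None:
        async with self._lock:
            now = time.monotonic()
            delay = self.interval - (now - self._last)
            if delay > 0:
                await asyncio.sleep(delay)
            self._last = time.monotonic()

limiter = RateLimiter(per_minute=8)  # ~7.5s between extraction calls
```

Call `await limiter.wait()` before each extraction request.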

Graph Size

FalkorDB performance scales well:

- <10,000 nodes: near-instant queries
- <100,000 nodes: sub-second queries
- >100,000 nodes: consider indexing
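Past ~100,000 nodes, an exact-match index on the entity name property helps. A sketch, assuming the graph key is "graphiti" and entity nodes carry an `Entity` label with a `name` property (both assumptions about this deployment):

```python
# Index DDL for FalkorDB's Cypher dialect.
INDEX_QUERY = "CREATE INDEX FOR (n:Entity) ON (n.name)"

def create_name_index(client, graph: str = "graphiti"):
    """Issue the index DDL through an existing redis-py client."""
    return client.execute_command("GRAPH.QUERY", graph, INDEX_QUERY)

# Usage (requires the redis package and a running FalkorDB):
#   import redis
#   create_name_index(redis.Redis(host="localhost", port=6379))
```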

Query Optimization

# Optimize semantic search with filters
results = await mcp__graphiti__search_nodes(
    query="deployment",
    limit=5,  # Limit results
    entity_types=["Technology"]  # Filter by type
)

# Use after_date to reduce search space
facts = await mcp__graphiti__search_facts(
    query="error",
    after_date="2026-01-20",  # Only recent
    limit=10
)

Configuration

FalkorDB Connection

{
  "mcpServers": {
    "graphiti": {
      "command": "python",
      "args": ["-m", "graphiti_mcp.server"],
      "env": {
        "FALKORDB_HOST": "localhost",
        "FALKORDB_PORT": "6379",
        "FALKORDB_DB": "0"
      }
    }
  }
}

LLM Configuration

Entity extraction uses GLM-4.7 via Aegis LLM module:

# In aegis/memory/graphiti_client.py
from aegis.llm import query

# GLM-4.7 used for extraction
response = await query(
    prompt=f"Extract entities from: {episode_content}",
    model="glm"  # Routes to Z.ai
)

Embedding Model

Ollama nomic-embed-text (local):

# Ensure model is installed
ollama pull nomic-embed-text

# Test embedding
ollama run nomic-embed-text "Test embedding"

Data Management

Backup

FalkorDB data is stored in Redis RDB format:

# Backup graph (SAVE blocks the server; prefer BGSAVE on a busy instance)
redis-cli -p 6379 SAVE

# Export to file
cp /var/lib/redis/dump.rdb /backups/graphiti-$(date +%Y%m%d).rdb

Clear Graph

# WARNING: Destructive operation
from aegis.memory.graphiti_client import GraphitiClient

client = GraphitiClient()
await client.initialize()

# Delete all data
await client._client.execute_command("FLUSHDB")

Re-Index

If queries become slow, rebuild embeddings:

from aegis.memory import TranscriptDigester

digester = TranscriptDigester()
await digester.initialize()

# Re-ingest everything
await digester.ingest_all()

Troubleshooting

FalkorDB Not Running

# Check Redis/FalkorDB status
docker ps | grep falkordb

# Start FalkorDB
docker start falkordb

# Check logs
docker logs falkordb

Entity Extraction Fails

# Check GLM-4.7 availability
from aegis.llm import query

try:
    result = await query("Test", model="glm")
    print("GLM-4.7 available")
except Exception as e:
    print(f"GLM-4.7 unavailable: {e}")
    # Falls back to heuristic extraction

Embeddings Not Generated

# Check Ollama status
ollama list

# Pull model if missing
ollama pull nomic-embed-text

# Test embedding
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "Test"
}'

Search Returns No Results

# Check graph has data
from aegis.memory.graphiti_client import GraphitiClient

client = GraphitiClient()
await client.initialize()

# Count entities
entities = await client.search_nodes("", limit=1000)
print(f"Total entities: {len(entities)}")

Best Practices

1. Structured Episodes

Use clear, descriptive episode content:

# GOOD
await add_episode(
    name="api-auth-implementation",
    content="""
    Implemented OAuth authentication for API endpoints.
    - Added JWT token generation
    - Integrated with Google OAuth provider
    - Created middleware for token validation
    Tested with Postman, all endpoints working.
    """,
    source_type="implementation"
)

# BAD (too vague)
await add_episode(
    name="update",
    content="Made some changes to auth",
    source_type="general"
)

2. Consistent Naming

Use consistent entity names for better relationship mapping:

- "nginx", not "Nginx" or "NGINX"
- "production", not "prod" or "Production"
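A small normalizer applied before calling add_episode keeps names canonical. This is a hypothetical helper, not part of the Graphiti API, and the alias table is illustrative; extend it with your own variants:

```python
# Hypothetical helper: canonicalize entity names before ingestion.
ALIASES = {
    "prod": "production",
    "k8s": "kubernetes",
}

def normalize_entity_name(name: str) -> str:
    """Lowercase, trim, and resolve known aliases to one canonical form."""
    canonical = name.strip().lower()
    return ALIASES.get(canonical, canonical)
```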

3. Regular Ingestion

Run transcript digestion daily to keep graph current:

# Via cron
0 3 * * * /usr/bin/python /home/agent/scripts/ingest_transcripts.py

4. Query Specificity

Use specific queries for better results:

# Specific query
results = await search_nodes("nginx docker deployment")

# Too broad
results = await search_nodes("server")
