MCP Servers Documentation

Complete documentation for Model Context Protocol (MCP) servers used in Project Aegis.

Overview

Model Context Protocol (MCP) is Anthropic's open standard for connecting AI systems to external tools and data sources. Aegis uses MCP extensively for orchestration, integration, and automation, with 26+ servers providing 300+ tools.

Key Features:

  • Dynamic tool discovery with 99% token reduction
  • Task-aware tool loading via profiles
  • Consistent tool naming conventions
  • Integration with Aegis core systems

Read the full overview →

Core MCP Servers

Aegis MCP

Custom orchestration and automation server for Aegis.

50+ Tools Across 24 Modules:

  • Agent spawning and coordination
  • HTN planning and task decomposition
  • Workflow execution with checkpointing
  • Error tracking and pattern detection
  • Beads task management
  • Critic agent for output validation
  • Journal and memory integration

Key Tools: spawn_agent, decompose_task, run_workflow, error_record, critique_output

View documentation →

Docker MCP

Comprehensive container lifecycle management.

25+ Tools:

  • Container management (start, stop, restart, logs)
  • Image operations (pull, build, push, tag)
  • Network administration
  • Volume management
  • Docker Compose integration
  • System operations and cleanup

Key Tools: list_containers, run_container, compose_up, system_prune

View documentation →

PostgreSQL MCP

Database operations with pgvector support.

10+ Tools:

  • Query execution (SELECT, INSERT, UPDATE, DELETE)
  • Schema introspection
  • Transaction management
  • pgvector semantic search
  • Performance optimization

Key Tools: query, execute, vector_search, describe_table

Use Cases: Workflow checkpointing, agent state persistence, knowledge storage

View documentation →

Graphiti MCP

Knowledge graph operations for entity extraction and semantic search.

5+ Tools:

  • Episode ingestion with auto-extraction
  • Entity and relationship mapping
  • Semantic search (nodes and facts)
  • Relationship exploration

Key Tools: add_episode, search_nodes, search_facts, get_entity

Backend: FalkorDB + Ollama embeddings + GLM-4.7 extraction

View documentation →

StackWiz MCP

Automated Docker deployment to rbnk.uk with Traefik routing and DNS management.

10+ Tools:

  • Service stack deployment
  • Automatic Traefik routing
  • Cloudflare DNS integration
  • Health monitoring
  • Blue-green deployments

Key Tools: create_stack, update_stack, create_dns_record, health_check_all

Domain: rbnk.uk | Infrastructure: Dockerhost (10.10.10.10)

View documentation →

Additional MCP Servers

Communication

  • Discord - Guild messaging, webhooks, forum posts
  • Telegram - Bot messaging, channel management
  • Vonage - WhatsApp (WABA), SMS, Voice, RCS
  • Google Workspace - Gmail, Calendar, Drive

Development

  • GitHub - Repos, issues, PRs, code search
  • Filesystem - File operations (read, write, search)

Research & Automation

  • Playwright - Browser automation, screenshots
  • Ollama - Local LLM inference (vision, reasoning)
  • NotebookLM - Gemini research partner (37 tools)
  • Twitter - User info, tweets, search, trends
  • Anna's Archive - Book search and download

Financial

  • Starling Bank - Balance, transactions, spaces, card controls

AI/LLM

  • Z.ai Vision - UI analysis, OCR, diagram understanding
  • Model Intelligence - Model database (693 models), pricing, benchmarks

Dynamic Discovery

The Problem

Loading all 300+ MCP tools at session start consumes ~234,000 tokens, leaving less context for reasoning and increasing API costs.

The Solution

mcp-cli enables on-demand tool discovery:

# List all servers
mcp-cli

# Show tools for specific server
mcp-cli docker

# Search by pattern
mcp-cli grep "*deploy*"

Result: 99% token reduction (234,000 → ~2,000 tokens)
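As a sanity check on that figure, the reduction can be computed directly from the approximate token counts quoted above:

```python
# Approximate token footprints quoted above.
static_tokens = 234_000   # all 300+ tools loaded at session start
dynamic_tokens = 2_000    # on-demand discovery via mcp-cli

reduction = (static_tokens - dynamic_tokens) / static_tokens
print(f"Token reduction: {reduction:.1%}")  # → Token reduction: 99.1%
```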

Tool Profiles

Task-aware profiles for efficient discovery:

| Profile        | Servers                            | Token Budget |
|----------------|------------------------------------|--------------|
| infrastructure | docker, stackwiz, postgres         | 8,000        |
| code           | github, filesystem, memory         | 12,000       |
| communication  | discord, telegram, vonage, gmail   | 8,000        |
| research       | notebooklm, twitter, gdrive        | 10,000       |
| monitoring     | docker, postgres, starling, aegis  | 6,000        |
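The profile table above can be modeled as a simple lookup. The `PROFILES` dict and `resolve_profile` helper here are an illustrative sketch, not part of the shipped Aegis API:

```python
# Illustrative mapping of the profiles above (not the actual Aegis API).
PROFILES = {
    "infrastructure": {"servers": ["docker", "stackwiz", "postgres"], "budget": 8_000},
    "code": {"servers": ["github", "filesystem", "memory"], "budget": 12_000},
    "communication": {"servers": ["discord", "telegram", "vonage", "gmail"], "budget": 8_000},
    "research": {"servers": ["notebooklm", "twitter", "gdrive"], "budget": 10_000},
    "monitoring": {"servers": ["docker", "postgres", "starling", "aegis"], "budget": 6_000},
}

def resolve_profile(name: str) -> list[str]:
    """Return the servers to load for a task profile."""
    profile = PROFILES.get(name)
    if profile is None:
        raise KeyError(f"Unknown profile: {name}")
    return profile["servers"]

print(resolve_profile("monitoring"))  # → ['docker', 'postgres', 'starling', 'aegis']
```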

Python API

from aegis.mcp import discover_tools, execute_tool

# Discover tools (calls must run inside an async context)
result = await discover_tools(["docker", "stackwiz"])
print(f"Saved {result.token_savings} tokens")

# Execute a tool
containers = await execute_tool("docker", "list_containers", {"all": False})

Read more about dynamic discovery →

Tool Naming Conventions

All MCP tools follow the pattern mcp__{server}__{tool_name}

Examples:

  • mcp__docker__list_containers
  • mcp__aegis__spawn_agent
  • mcp__graphiti__add_episode

Benefits:

  • Clear server identification
  • Cache-friendly consistent ordering
  • Easy pattern matching
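Because the convention is purely lexical, splitting a qualified name back into its parts takes one line. This `parse_tool_name` helper is a sketch for illustration, not part of any shipped library:

```python
def parse_tool_name(qualified: str) -> tuple[str, str]:
    """Split an MCP tool name of the form mcp__{server}__{tool_name}."""
    # maxsplit=2 keeps any double underscores inside the tool name intact.
    prefix, server, tool = qualified.split("__", 2)
    if prefix != "mcp":
        raise ValueError(f"Not an MCP tool name: {qualified}")
    return server, tool

print(parse_tool_name("mcp__docker__list_containers"))  # → ('docker', 'list_containers')
```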

Read naming conventions guide →

Common Patterns

Agent-First Workflow

# Decompose complex goal
tree = await mcp__aegis__decompose_task(
    goal="Implement user authentication",
    description="OAuth with Google, GitHub, email"
)

# Execute with specialized agents
result = await mcp__aegis__execute_task_tree(tree.tree_id)

# Record outcome
await mcp__aegis__journal_entry(
    content=f"Completed auth: {result.summary}",
    section="implementations"
)

Deployment Pipeline

# Build locally
await mcp__docker__build_image(
    path="/home/agent/projects/aegis-core",
    tag="aegis/service:latest"
)

# Test locally
await mcp__docker__run_container(
    image="aegis/service:latest",
    name="test-service",
    ports={"8080/tcp": 8080}
)

# Deploy to production
await mcp__stackwiz__create_stack(
    name="aegis-service",
    image="aegis/service:latest",
    domain="service.rbnk.uk",
    health_check="/health"
)

Error Recovery

try:
    result = deploy_service()
except Exception as e:
    # Record error (Strike 1)
    await mcp__aegis__error_record(
        error_type=type(e).__name__,
        context="Service deployment",
        error_message=str(e),
        strike_count=1
    )

    # Search for similar errors
    similar = await mcp__aegis__error_search(str(e))

    # Apply known resolution
    if similar.related_resolutions:
        apply_resolution(similar.related_resolutions[0])

Knowledge Exploration

# Add episode
await mcp__graphiti__add_episode(
    name="nginx-deployment",
    content="Deployed nginx to production via Docker",
    source_type="implementation"
)

# Search for related entities
entities = await mcp__graphiti__search_nodes(
    query="nginx deployment",
    entity_types=["Technology", "Environment"]
)

# Explore relationships
related = await mcp__graphiti__get_related_entities(
    "nginx",
    relationship_type="deployed_to"
)

MCP Server Development

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server", json_response=True)

@mcp.tool()
def calculate(operation: str, a: int, b: int) -> int:
    """Add or multiply two integers; operation is "add" or "multiply"."""
    if operation == "add":
        return a + b
    if operation == "multiply":
        return a * b
    raise ValueError(f"Unsupported operation: {operation}")

if __name__ == "__main__":
    mcp.run(transport="stdio")

Best Practices

  1. Use parameterized inputs with JSON schema
  2. Include detailed docstrings (used as descriptions)
  3. Return structured JSON responses
  4. Handle errors gracefully
  5. Use async for I/O operations
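Practices 3 and 4 are easiest to see inside a tool body. This standalone sketch uses only the standard library, and the `divide` tool is hypothetical:

```python
import json

def divide(a: float, b: float) -> str:
    """Divide a by b, returning a structured JSON response (practice 3)
    and handling the error case gracefully (practice 4)."""
    try:
        return json.dumps({"ok": True, "result": a / b})
    except ZeroDivisionError as e:
        # Report the failure in the same structured shape instead of crashing.
        return json.dumps({"ok": False, "error": type(e).__name__})

print(divide(10, 4))  # → {"ok": true, "result": 2.5}
print(divide(1, 0))   # → {"ok": false, "error": "ZeroDivisionError"}
```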

Read server development guide →

Configuration

MCP servers are configured in ~/.claude.json:

{
  "mcpServers": {
    "aegis": {
      "command": "python",
      "args": ["-m", "aegis_mcp.server"],
      "env": {
        "AEGIS_HOME": "/home/agent"
      }
    },
    "docker": {
      "command": "docker-mcp"
    },
    "postgres": {
      "command": "postgres-mcp",
      "env": {
        "PGHOST": "localhost",
        "PGDATABASE": "aegis"
      }
    }
  }
}
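A quick way to check the file before restarting a session is to confirm that every entry at least names a command. This small validator is illustrative, not an official tool:

```python
import json

def list_mcp_servers(config_text: str) -> list[str]:
    """Return configured server names, checking each entry names a command."""
    servers = json.loads(config_text).get("mcpServers", {})
    for name, entry in servers.items():
        if "command" not in entry:
            raise ValueError(f"Server {name!r} is missing a 'command'")
    return sorted(servers)

# In practice the text would come from ~/.claude.json; a minimal inline
# sample keeps the example self-contained.
sample = '{"mcpServers": {"docker": {"command": "docker-mcp"}}}'
print(list_mcp_servers(sample))  # → ['docker']
```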

Performance Metrics

| Metric              | Static Loading | Dynamic Discovery |
|---------------------|----------------|-------------------|
| Token Usage         | ~234,000       | ~2,000            |
| Session Startup     | Slow           | Fast              |
| Context Compactions | Frequent       | Rare              |
| Cost per Session    | High           | Low               |


Last Updated: 2026-01-25 · Total Servers: 26 · Total Tools: 300+ · Token Savings: 99% with dynamic discovery