Start in 60 seconds.
Anchor is an OpenAI-compatible API proxy. Point your existing agent at Anchor's endpoint instead of OpenAI. No SDK changes. No framework migration. No re-architecture.
Get your API key
Join the waitlist to get early access. API keys are issued on approval — typically within 24 hours.
Join the waitlist →
Change one line
Replace your OpenAI base URL with Anchor's endpoint. That's the complete integration — no other code changes are required.
Your agent is production-ready
Persistent sessions, exact replay, observability, anomaly detection, and NVIDIA acceleration activate immediately — for every framework you use.
Framework Support
Because Anchor uses the standard OpenAI client interface, it works with any framework that accepts a configurable LLM client — with zero additional code.
The integration is the same everywhere: pass Anchor's base_url wherever your framework accepts an OpenAI client or endpoint, whether that is ChatOpenAI in LangChain, the LLM config of your agent framework, an OAIWrapper, or any direct openai.OpenAI() usage.
LangChain example
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
base_url="https://anchor.maximlabs.co/v1",
api_key="your-anchor-key"
)
# All LangChain chains and agents work unchanged.
Configuration
Anchor is configured entirely via environment variables — set them in a .env file or your shell. Copy .env.example to get started. All settings are optional except OPENAI_API_KEY.
# Core
OPENAI_API_KEY=sk-... # forwarded to LiteLLM for public model calls
DEFAULT_MODEL=gpt-4o-mini # model used when no routing rule matches
# NVIDIA NIM (optional — enables on-prem routing)
NIM_ENDPOINT=http://nim-host:8000
NIM_API_KEY=your-nim-key
# Session store
VALKEY_URL=redis://localhost:6379/0
# Postgres trace storage
DATABASE_URL=postgresql+asyncpg://anchor:anchor@localhost:5432/anchor
# Anomaly alerts (optional)
ALERT_WEBHOOK_URL=https://your-endpoint.com/anchor-alerts
ALERT_WEBHOOK_SECRET=your-hmac-secret
# Admin dashboard
ADMIN_SECRET_KEY=your-admin-secret
OPENAI_API_KEY: Forwarded to LiteLLM for public model calls (OpenAI, Anthropic, etc.).
DEFAULT_MODEL: Model used when no routing rule matches. Defaults to gpt-4o-mini.
NIM_ENDPOINT: NVIDIA NIM base URL. When set, PII-sensitive and complex steps are automatically routed to NIM.
VALKEY_URL: Valkey/Redis connection string for persistent session storage.
ALERT_WEBHOOK_URL: POST endpoint for anomaly alerts (cycle detection, cost spikes, tool loops). Signed with HMAC-SHA256.
ADMIN_SECRET_KEY: Protects /admin/* endpoints. Required to access the admin dashboard and API key management.
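Alert payloads signed with ALERT_WEBHOOK_SECRET can be checked with a constant-time HMAC comparison. A minimal sketch using only the standard library, assuming the signature arrives as a hex digest of the raw request body (the exact header name, payload shape, and function name here are assumptions, not documented Anchor behavior):

```python
import hashlib
import hmac

def verify_alert_signature(secret: str, raw_body: bytes, signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body and compare in constant time."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Example with a hypothetical alert payload:
body = b'{"type": "cost_spike", "session_id": "my-session"}'
sig = hmac.new(b"your-hmac-secret", body, hashlib.sha256).hexdigest()
print(verify_alert_signature("your-hmac-secret", body, sig))  # True
print(verify_alert_signature("wrong-secret", body, sig))      # False
```

Always verify against the raw request bytes before parsing the JSON, and use `hmac.compare_digest` rather than `==` to avoid timing side channels.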
Self-hosted Quickstart
Run the full Anchor stack on your own infrastructure — including NVIDIA DGX or RTX servers — with Docker Compose. Full data sovereignty. Nothing leaves your network.
# Clone the Anchor repository
git clone https://github.com/maximlabs/anchor.git
cd anchor
# Configure
cp .env.example .env
# Set OPENAI_API_KEY in .env
# Start the full stack
docker compose -f docker/docker-compose.yml up -d
# Anchor proxy is now live at http://localhost:8000/v1
# Grafana dashboards at http://localhost:3001 (admin / anchor)
# Point your agent at it:
client = OpenAI(
base_url="http://localhost:8000/v1",
api_key="your-openai-key",
default_headers={"x-anchor-session-id": "my-session"},
)
The Docker Compose stack includes: Anchor proxy (FastAPI + LiteLLM), Valkey session store, PostgreSQL + pgvector for trace storage, Prometheus for metrics, and Grafana dashboards. Minimum specs: 2 CPU, 4 GB RAM, 20 GB disk.
Docker Compose Stack
docker/docker-compose.yml gives you the full production stack in a single command. Every service is pre-configured and ready to go.
Stack services
Anchor proxy (port 8000): FastAPI proxy, the main entry point for all agent traffic
Valkey (port 6379): Redis-compatible session store (Valkey Streams)
PostgreSQL (port 5432): Trace storage + API key management (pgvector enabled)
Prometheus (port 9090): Metrics collection, scrapes /metrics every 15s
Grafana (port 3001): Pre-built dashboards (admin / anchor)
API Endpoints
Anchor extends the standard OpenAI API with session management and observability endpoints. The proxy endpoint is fully OpenAI-compatible — the session endpoints are optional extensions. Authenticate all requests with Authorization: Bearer <key>.
/v1/chat/completions: OpenAI-compatible proxy. Pass the x-anchor-session-id header to activate persistent sessions.
/sessions: List all sessions for the authenticated account, newest first.
/sessions/{id}: Session metadata: step count, token totals, cost, and status.
/sessions/{id}/steps: Full ordered step list for a session.
/sessions/{id}/cost: Token usage and cost breakdown per session.
/sessions/{id}/trace: Execution graph in OpenTelemetry GenAI format.
/sessions/{id}/replay: Re-run steps through the LLM. Optionally swap the model to compare outputs.
/sessions/{id}/simulate: Zero-cost shadow run using stored tool responses, with no external calls or side effects.
/sessions/{id} (delete): Delete a session and all its steps.
/sessions/account/usage: Monthly usage summary: steps, tokens, cost, and plan limits.
/health: Liveness check that returns Valkey and Postgres status.
/metrics: Prometheus scrape endpoint.
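Since the session endpoints share a uniform shape, request construction can be centralized. A small sketch using only the standard library (the base URL is the self-hosted default from the quickstart above, the Bearer auth format is as documented, and the helper name is hypothetical):

```python
from urllib.parse import quote

BASE_URL = "http://localhost:8000"  # self-hosted default from the quickstart

def session_request(api_key: str, session_id: str, resource: str = "") -> tuple[str, dict]:
    """Build the URL and auth headers for a session endpoint."""
    path = f"/sessions/{quote(session_id, safe='')}"
    if resource:
        path += f"/{resource}"
    return BASE_URL + path, {"Authorization": f"Bearer {api_key}"}

url, headers = session_request("your-anchor-key", "my-session", "cost")
print(url)      # http://localhost:8000/sessions/my-session/cost
print(headers)  # {'Authorization': 'Bearer your-anchor-key'}
```

Quoting the session ID keeps IDs containing slashes or spaces from changing the route; pass the returned URL and headers to any HTTP client.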
Ready to build?
Join the waitlist for API access. Early access users get priority support and direct access to the Anchor team.
Join the waitlist →