Start in 60 seconds.

Anchor is an OpenAI-compatible API proxy. Point your existing agent at Anchor's endpoint instead of OpenAI. No SDK changes. No framework migration. No re-architecture.

Before (your_agent.py)

client = OpenAI()

After (your_agent.py)

client = OpenAI(
    base_url="https://anchor.maximlabs.co/v1",
    api_key="your-anchor-key",
)
Persistent sessions · Exact replay · NIM acceleration · Full observability · Anomaly detection
1. Get your API key

Join the waitlist to get early access. API keys are issued on approval — typically within 24 hours.

Join the waitlist →
2. Change one line

Replace your OpenAI base URL with Anchor's endpoint. That's the complete integration — no other code changes are required.

3. Your agent is production-ready

Persistent sessions, exact replay, observability, anomaly detection, and NVIDIA acceleration activate immediately — for every framework you use.


Framework Support

Because Anchor uses the standard OpenAI client interface, it works with any framework that accepts a configurable LLM client — with zero additional code.

| Framework     | Integration                             | Sessions | Replay | Status |
| ------------- | --------------------------------------- | -------- | ------ | ------ |
| LangChain     | Pass base_url to ChatOpenAI             | ✓        | ✓      | Live   |
| CrewAI        | Set base_url in LLM config              | ✓        | ✓      | Live   |
| AutoGen       | Pass base_url to OpenAIWrapper          | ✓        | ✓      | Live   |
| LangGraph     | Configure ChatOpenAI with Anchor URL    | ✓        | ✓      | Live   |
| Custom agents | Any openai.OpenAI() usage               | ✓        | ✓      | Live   |

LangChain example

langchain_agent.py
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://anchor.maximlabs.co/v1",
    api_key="your-anchor-key"
)

# All LangChain chains and agents work unchanged.

Configuration

Anchor is configured entirely via environment variables — set them in a .env file or your shell. Copy .env.example to get started. All settings are optional except OPENAI_API_KEY.

.env
# Core
OPENAI_API_KEY=sk-...          # forwarded to LiteLLM for public model calls
DEFAULT_MODEL=gpt-4o-mini      # model used when no routing rule matches

# NVIDIA NIM (optional — enables on-prem routing)
NIM_ENDPOINT=http://nim-host:8000
NIM_API_KEY=your-nim-key

# Session store
VALKEY_URL=redis://localhost:6379/0

# Postgres trace storage
DATABASE_URL=postgresql+asyncpg://anchor:anchor@localhost:5432/anchor

# Anomaly alerts (optional)
ALERT_WEBHOOK_URL=https://your-endpoint.com/anchor-alerts
ALERT_WEBHOOK_SECRET=your-hmac-secret

# Admin dashboard
ADMIN_SECRET_KEY=your-admin-secret

OPENAI_API_KEY

Forwarded to LiteLLM for public model calls (OpenAI, Anthropic, etc.).

DEFAULT_MODEL

Model used when no routing rule matches. Defaults to gpt-4o-mini.

NIM_ENDPOINT

NVIDIA NIM base URL. When set, PII-sensitive and complex steps are automatically routed to NIM.

VALKEY_URL

Valkey/Redis connection string for persistent session storage.

ALERT_WEBHOOK_URL

POST endpoint for anomaly alerts (cycle detection, cost spikes, tool loops). Signed with HMAC-SHA256.
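As a sketch of what receiver-side verification could look like — assuming Anchor signs the raw request body and sends the hex digest in a signature header (the exact header name and signing scheme are assumptions here, not confirmed API details):

```python
import hashlib
import hmac

def verify_anchor_alert(raw_body: bytes, signature: str, secret: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare in constant time."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Compare with hmac.compare_digest rather than == so the check is not vulnerable to timing side channels.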

ADMIN_SECRET_KEY

Protects /admin/* endpoints. Required to access the admin dashboard and API key management.


Self-hosted Quickstart

Run the full Anchor stack on your own infrastructure — including NVIDIA DGX or RTX servers — with Docker Compose. Full data sovereignty. Nothing leaves your network.

terminal
# Clone the Anchor repository
git clone https://github.com/maximlabs/anchor.git
cd anchor

# Configure
cp .env.example .env
# Set OPENAI_API_KEY in .env

# Start the full stack
docker compose -f docker/docker-compose.yml up -d

# Anchor proxy is now live at http://localhost:8000/v1
# Grafana dashboards at http://localhost:3001  (admin / anchor)

# Point your agent at it:
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-openai-key",
    default_headers={"x-anchor-session-id": "my-session"},
)

The Docker Compose stack includes: Anchor proxy (FastAPI + LiteLLM), Valkey session store, PostgreSQL + pgvector for trace storage, Prometheus for metrics, and Grafana dashboards. Minimum specs: 2 CPU, 4 GB RAM, 20 GB disk.


Docker Compose Stack

docker/docker-compose.yml gives you the full production stack in a single command. Every service is pre-configured and ready to go.

Stack services

anchor (port 8000): FastAPI proxy — the main entry point for all agent traffic
valkey (port 6379): Redis-compatible session store (Valkey Streams)
postgres (port 5432): trace storage + API key management (pgvector enabled)
prometheus (port 9090): metrics collection — scrapes /metrics every 15s
grafana (port 3001): pre-built dashboards (admin / anchor)

API Endpoints

Anchor extends the standard OpenAI API with session management and observability endpoints. The proxy endpoint is fully OpenAI-compatible — the session endpoints are optional extensions. Authenticate all requests with Authorization: Bearer <key>.

POST /v1/chat/completions

OpenAI-compatible proxy. Pass x-anchor-session-id header to activate persistent sessions.
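For illustration, a raw request to this endpoint can be assembled with the standard library alone. The session ID value is arbitrary, and the payload mirrors the OpenAI chat format; actually sending it is left to urlopen:

```python
import json
import urllib.request

# Build (but do not send) a chat completion request carrying a session header.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, Anchor"}],
}
req = urllib.request.Request(
    "https://anchor.maximlabs.co/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer your-anchor-key",
        "Content-Type": "application/json",
        "x-anchor-session-id": "my-session",  # activates persistent sessions
    },
    method="POST",
)
# urllib.request.urlopen(req) would execute the call.
```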

GET /sessions

List all sessions for the authenticated account, newest first.

GET /sessions/{id}

Session metadata: step count, token totals, cost, and status.

GET /sessions/{id}/steps

Full ordered step list for a session.

GET /sessions/{id}/cost

Token usage and cost breakdown per session.

GET /sessions/{id}/trace

Execution graph in OpenTelemetry GenAI format.

POST /sessions/{id}/replay

Re-run steps through the LLM. Optionally swap model to compare outputs.

POST /sessions/{id}/simulate

Zero-cost shadow run using stored tool responses — no external calls or side effects.

DELETE /sessions/{id}

Delete a session and all its steps.

GET /sessions/account/usage

Monthly usage summary: steps, tokens, cost, and plan limits.

GET /health

Liveness check — returns Valkey and Postgres status.

GET /metrics

Prometheus scrape endpoint.

Ready to build?

Join the waitlist for API access. Early access users get priority support and direct access to the Anchor team.

Join the waitlist →