What's new

Product updates, releases, and improvements. We ship fast and document everything.

Open-source alpha release

  • Initial open-source release under Apache 2.0
  • OpenAI-compatible proxy (FastAPI + LiteLLM) with SSE streaming
  • Persistent sessions via Valkey Streams, keyed by the x-anchor-session-id header
  • Replay engine: re-run any past session through the LLM, optionally swapping the model
  • Simulate mode: zero-cost shadow runs using stored tool responses
  • NVIDIA NIM routing with automatic PII keyword detection
  • Rule-based anomaly detection: cycle, runaway, error avalanche, cost spike, tool loop
  • Webhook alerts with HMAC-SHA256 signatures
  • Prometheus metrics + Grafana dashboards (bundled in Docker Compose)
  • OpenTelemetry spans (GenAI semantic conventions) + Postgres trace storage
  • Local Postgres API key management for self-hosted deployments
  • Admin dashboard for instance health and API key management
  • Docker Compose stack: anchor + valkey + postgres + prometheus + grafana
  • 112-test suite plus a demo agent (examples/research_agent.py)
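Webhook alerts are signed with HMAC-SHA256, so receivers can reject tampered or spoofed payloads. A minimal verification sketch is below; the hex encoding and the function names are illustrative assumptions, not Anchor's documented receiver API:

```python
import hashlib
import hmac


def sign_payload(secret: bytes, payload: bytes) -> str:
    """Compute a hex-encoded HMAC-SHA256 signature over the raw request body."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()


def verify_webhook(secret: bytes, payload: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time (avoids timing attacks)."""
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, signature)
```

Always verify against the raw bytes of the body before JSON parsing, since re-serializing can change byte order and break the signature.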

Waitlist launch & public site

  • Published maximlabs.co with product landing, blog, and waitlist
  • Anchor product page with full feature documentation
  • FAQ section covering data, pricing, and competitive positioning

Replay engine & observability dashboard

  • Replay engine: re-execute any past agent run with different models or configs
  • Observability dashboard: execution graphs, token usage, cost forecasting
  • Automatic loop detection with configurable circuit breakers
  • OpenTelemetry integration for distributed tracing
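The loop-detection idea can be approximated with a sliding window over recent tool calls: trip a circuit breaker once the same (tool, arguments) pair repeats a configurable number of times in a row. This is an illustrative sketch only; the class name and threshold are assumptions, not Anchor's actual rule engine:

```python
from collections import deque


class ToolLoopBreaker:
    """Trip after `threshold` consecutive identical (tool, args) calls."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        # Sliding window of the most recent calls; old entries fall off automatically.
        self.recent: deque = deque(maxlen=threshold)

    def record(self, tool: str, args: str) -> bool:
        """Record one tool call; return True if the circuit breaker should trip."""
        self.recent.append((tool, args))
        return (
            len(self.recent) == self.threshold
            and len(set(self.recent)) == 1  # window is full and all calls identical
        )
```

A different call resets the streak naturally, since the window then contains more than one distinct entry.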

Core proxy & session management

  • OpenAI-compatible API proxy (FastAPI + LiteLLM)
  • Persistent sessions backed by Redis Streams
  • Hybrid model routing: public APIs + NVIDIA NIM
  • Simulate mode: shadow-test with zero tool costs
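Because the proxy is OpenAI-compatible, a client only needs to point at a different base URL and attach the session header. The sketch below builds (but does not send) such a request using only the standard library; the localhost URL, port, and model name are placeholder assumptions for your own deployment:

```python
import json
import urllib.request


def build_chat_request(session_id: str, messages: list) -> urllib.request.Request:
    """Prepare a chat completion request aimed at the Anchor proxy.

    The base URL below is a placeholder; point it at your deployment.
    """
    body = json.dumps({"model": "gpt-4o-mini", "messages": messages}).encode()
    return urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Ties this call to a persistent session in the proxy's session store.
            "x-anchor-session-id": session_id,
        },
        method="POST",
    )
```

Sending the same session id on subsequent calls is what lets the replay and simulate features reconstruct the full run later.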