OpenClaw + Terraphim LLM Proxy

Production setup for OpenAI Codex, Z.ai GLM-5, and MiniMax M2.5 with intelligent routing and fallback


This guide shows the exact production configuration we used to run OpenClaw through a single Terraphim LLM Proxy endpoint with three real providers: OpenAI Codex, Z.ai GLM-5, and MiniMax M2.5.

  • Explicit model routing: the route IDs openai-codex,gpt-5.2 / zai,glm-5 / minimax,MiniMax-M2.5 select a provider directly
  • Keyword routing: natural-language keywords in the prompt trigger the MiniMax route
  • Runtime fallback: when the primary provider fails, the request automatically falls through to the next target in the chain
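The pipe-separated route chains used throughout this guide can be pictured as an ordered list of (provider, model) fallback targets. A minimal parsing sketch (parse_route_chain is an illustrative name, not a proxy API):

```python
# Sketch: split a Terraphim-style route chain such as
# "openai-codex,gpt-5.2-codex|zai,glm-5" into ordered fallback targets.
# parse_route_chain is an illustrative helper, not part of the proxy.

def parse_route_chain(chain: str) -> list[tuple[str, str]]:
    """Split 'provider,model|provider,model' into (provider, model) pairs."""
    targets = []
    for target in chain.split("|"):
        provider, model = target.split(",", 1)
        targets.append((provider.strip(), model.strip()))
    return targets

print(parse_route_chain("openai-codex,gpt-5.2-codex|zai,glm-5"))
```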

Build-In-Public Validation

We validated this incrementally in production, not from a static design doc:

  • Confirmed that each provider route works independently
  • Forced a provider failure and verified fallback in the logs
  • Added and tested taxonomy-based MiniMax keyword routing
  • Fixed a provider-specific endpoint mismatch on the MiniMax Anthropic path

Core Router Configuration

[proxy]
host = "127.0.0.1"
port = 3456
api_key = "$PROXY_API_KEY"

[router]
default = "openai-codex,gpt-5.2-codex|zai,glm-5"
think = "openai-codex,gpt-5.2|minimax,MiniMax-M2.5|zai,glm-5"
strategy = "fill_first"

[[providers]]
name = "openai-codex"
api_base_url = "https://api.openai.com/v1"
transformers = ["openai"]

[[providers]]
name = "zai"
api_base_url = "https://api.z.ai/api/paas/v4"
transformers = ["openai"]

[[providers]]
name = "minimax"
api_base_url = "https://api.minimax.io/anthropic"
transformers = ["anthropic"]
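Assuming fill_first means "use the first target whose provider is currently usable", the strategy can be sketched as follows (fill_first and is_healthy are illustrative names, not proxy internals):

```python
# Sketch: fill_first-style target selection over an ordered route chain.
# is_healthy stands in for whatever availability check the proxy performs.

def fill_first(targets, is_healthy):
    """Return the first (provider, model) whose provider passes the check."""
    for provider, model in targets:
        if is_healthy(provider):
            return provider, model
    raise RuntimeError("no healthy provider in route chain")

# Simulate Codex being unavailable: selection falls through to Z.ai.
chain = [("openai-codex", "gpt-5.2-codex"), ("zai", "glm-5")]
print(fill_first(chain, lambda p: p != "openai-codex"))
```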

MiniMax Keyword Route

# /etc/terraphim-llm-proxy/taxonomy/routing_scenarios/minimax_keyword_routing.md
route:: minimax, MiniMax-M2.5
priority:: 100
synonyms:: minimax, minimax keyword, m2.5, minimax-m2.5
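Conceptually, the keyword route fires when any synonym from the taxonomy appears in the user message. A rough sketch of that matching, under the assumption of simple case-insensitive substring checks (the proxy's actual matcher may be more sophisticated):

```python
# Sketch: match a user message against the taxonomy synonyms above.
# matches_minimax is our illustrative name; the proxy's matcher may differ.

SYNONYMS = {"minimax", "minimax keyword", "m2.5", "minimax-m2.5"}

def matches_minimax(message: str) -> bool:
    text = message.lower()
    return any(s in text for s in SYNONYMS)

print(matches_minimax("Please use MiniMax for this one"))
```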

OpenClaw Model Setup

  • Base URL: http://127.0.0.1:3456/v1
  • API style: openai-completions
  • Configured models: openai-codex,gpt-5.2 / zai,glm-5 / minimax,MiniMax-M2.5
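Because the proxy speaks the OpenAI chat-completions dialect, OpenClaw's requests are ordinary payloads that carry the combined provider,model string in the model field. A sketch that builds such a payload without any network call:

```python
import json

# Sketch: the OpenAI-compatible request body OpenClaw sends through the
# proxy. This builds the JSON only; no network call is made.

def build_chat_request(model: str, prompt: str) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(payload)

body = build_chat_request("zai,glm-5", "Reply exactly: zai-ok")
print(body)
```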

Verification Matrix

curl -sS -X POST http://127.0.0.1:3456/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: <PROXY_API_KEY>' \
  -d '{"model":"openai-codex,gpt-5.2","messages":[{"role":"user","content":"Reply exactly: openai-ok"}],"stream":false}'

curl -sS -X POST http://127.0.0.1:3456/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: <PROXY_API_KEY>' \
  -d '{"model":"zai,glm-5","messages":[{"role":"user","content":"Reply exactly: zai-ok"}],"stream":false}'

curl -sS -X POST http://127.0.0.1:3456/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: <PROXY_API_KEY>' \
  -d '{"model":"minimax,MiniMax-M2.5","messages":[{"role":"user","content":"Reply exactly: minimax-ok"}],"stream":false}'

Observed Routing Logs

2026-02-13T17:32:00Z Routing decision made provider=minimax model=MiniMax-M2.5 scenario=Pattern("minimax_keyword_routing")
2026-02-13T17:32:29Z Routing decision made provider=openai-codex model=gpt-5.2 scenario=Pattern("think_routing")
2026-02-13T17:32:51Z Routing decision made provider=openai-codex model=gpt-5.2-codex scenario=Default
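If you monitor these logs, the routing decisions are easy to pull out with a regular expression. An illustrative parser for the line format shown above:

```python
import re

# Sketch: extract provider/model/scenario fields from a routing log line.
LOG_RE = re.compile(
    r"provider=(?P<provider>\S+) model=(?P<model>\S+) scenario=(?P<scenario>\S+)"
)

line = ('2026-02-13T17:32:00Z Routing decision made provider=minimax '
        'model=MiniMax-M2.5 scenario=Pattern("minimax_keyword_routing")')
m = LOG_RE.search(line)
print(m.group("provider"), m.group("model"), m.group("scenario"))
```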

Fallback Drill (Primary Failure -> Z.ai)

During rollout we intentionally blocked the Codex upstream and confirmed that the request stayed alive by falling through to Z.ai, the next target in the route chain.

Primary target failed, attempting fallback target ... next_provider=zai
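The drill behavior can be sketched as a try-next loop: call the current target, and on a connection error move on to the next one. Here call_provider is a stand-in for a real upstream request, with openai-codex forced to fail just as in the drill:

```python
# Sketch: runtime fallback as exercised in the drill. call_provider is a
# stand-in for a real upstream request; openai-codex is forced to fail,
# mirroring the blocked-upstream test.

def call_provider(provider: str, model: str) -> str:
    if provider == "openai-codex":  # simulate the blocked upstream
        raise ConnectionError(f"{provider} unreachable")
    return f"ok from {provider}/{model}"

def route_with_fallback(targets):
    for provider, model in targets:
        try:
            return call_provider(provider, model)
        except ConnectionError as err:
            print(f"Primary target failed, attempting fallback target ... ({err})")
    raise RuntimeError("all targets failed")

print(route_with_fallback([("openai-codex", "gpt-5.2-codex"), ("zai", "glm-5")]))
```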

Production Checklist

  • Store API keys in environment variables only
  • Pin explicit route chains; avoid hidden defaults
  • Keep provider model lists current
  • Monitor fallback events as reliability signals
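For the first checklist item, a small fail-fast helper keeps keys out of config files. require_env is an illustrative name, and the stand-in value below exists only so the sketch runs:

```python
import os

# Sketch: fail fast when PROXY_API_KEY is missing from the environment,
# instead of hard-coding keys in config. require_env is illustrative.

def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} must be set in the environment")
    return value

os.environ.setdefault("PROXY_API_KEY", "sk-demo")  # stand-in for the sketch
print(require_env("PROXY_API_KEY"))
```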

Try This Setup

Use the full validated guide and deployment notes in the Terraphim LLM Proxy repository. If you want the coding-plan route, sponsor access starts at $3/month.
