LLM Access
How to use AgentStead's managed LLM endpoint — no API keys required.
Every AgentStead agent has a proxy token (AGENTSTEAD_PROXY_TOKEN) that grants access to the managed LLM endpoint. You can call it directly from scripts, tools, or MCP servers using the standard OpenAI-compatible API — no provider account or API key required.
Endpoint
```
POST https://proxy.agentstead.dev/llm/v1/chat/completions
Authorization: Bearer <AGENTSTEAD_PROXY_TOKEN>
```
The endpoint is OpenAI-compatible. Any tool or library that supports a custom base URL works without changes.
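Requests and responses follow the standard OpenAI chat-completions schema. A minimal request body looks like this (field values are illustrative):

```json
{
  "model": "minimax-m2.5",
  "messages": [{"role": "user", "content": "Hello"}]
}
```

The response carries the familiar `choices` and `usage` fields, abridged here to the commonly used ones:

```json
{
  "object": "chat.completion",
  "model": "minimax-m2.5",
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "Hi!"}, "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 8, "completion_tokens": 5, "total_tokens": 13}
}
```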
Quick start
```bash
# Your proxy token is already in the container environment
curl https://proxy.agentstead.dev/llm/v1/chat/completions \
  -H "Authorization: Bearer $AGENTSTEAD_PROXY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax-m2.5",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": false
  }'
```
In Python:
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTSTEAD_PROXY_TOKEN"],
    base_url="https://proxy.agentstead.dev/llm/v1",
)

response = client.chat.completions.create(
    model="minimax-m2.5",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
Available models
Retrieve the full list of models your token can access:
```bash
curl https://proxy.agentstead.dev/llm/v1/models \
  -H "Authorization: Bearer $AGENTSTEAD_PROXY_TOKEN"
```
Models are grouped into three tiers:
Go models — all plans
Fast and cost-effective. Available on every paid plan and free trials.
| Model ID | Description |
|---|---|
| minimax-m2.5 | MiniMax M2.5 — strong general-purpose model |
| minimax-m2.7 | MiniMax M2.7 |
| kimi-k2.5 | Kimi K2.5 |
| glm-5 | GLM-5 |
Free models — all plans
Zero-cost models available on every plan including free accounts.
| Model ID | Description |
|---|---|
| meta-llama/llama-3.3-70b:free | Meta Llama 3.3 70B |
| google/gemini-2.0-flash-exp:free | Gemini 2.0 Flash |
| qwen/qwen3-8b:free | Qwen3 8B |
Zen models — Hobby and above
Higher-capability models included on Hobby, Builder, and Enthusiast plans.
| Model ID | Description |
|---|---|
| claude-haiku-4-5-20251001 | Claude Haiku 4.5 |
| claude-sonnet-4-6 | Claude Sonnet 4.6 |
| gpt-4o-mini | GPT-4o Mini |
| gpt-4o | GPT-4o |
| gemini-2.0-flash | Gemini 2.0 Flash |
Zen model usage counts against a monthly budget included with your plan. Go and free models have no usage limit. If your Zen budget is exhausted, the endpoint returns a 402 with a clear message — Go and free models continue to work.
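The 402 behavior makes an automatic fallback straightforward to implement in client code. A minimal sketch — the `complete_with_fallback` helper and its injected `call` function are illustrative, not part of any SDK:

```python
def complete_with_fallback(call, zen_model="claude-sonnet-4-6",
                           go_model="minimax-m2.5"):
    """Try a Zen model first; on a 402 (Zen budget exhausted),
    retry on a Go model, which has no usage limit.

    `call(model)` performs the actual chat-completion request and is
    expected to raise an error carrying a `status_code` attribute on
    HTTP failures (as the OpenAI SDK's APIStatusError does).
    """
    try:
        return call(zen_model)
    except Exception as exc:
        if getattr(exc, "status_code", None) == 402:
            return call(go_model)  # Go models keep working
        raise
```

With the OpenAI client from the quick start, `call` could be as simple as `lambda m: client.chat.completions.create(model=m, messages=msgs)`.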
Setting the default model
The model used by your agent's desktop environment (OpenClaw, ZeroClaw, etc.) is set in the Settings tab of the agent panel. Pick any managed model from the AgentStead Managed groups at the top of the list.
This sets OPENCLAW_PRIMARY_MODEL at deploy time. To change it on a running agent without redeploying, use Sync Live or run from the Control UI terminal:
```bash
openclaw config set model.primary minimax-m2.5
```
Using with OpenAI-compatible libraries
Any library that accepts a base_url and api_key works without changes:
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["AGENTSTEAD_PROXY_TOKEN"],
    base_url="https://proxy.agentstead.dev/llm/v1",
)
```

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.AGENTSTEAD_PROXY_TOKEN,
  baseURL: 'https://proxy.agentstead.dev/llm/v1',
});
```

```bash
curl https://proxy.agentstead.dev/llm/v1/chat/completions \
  -H "Authorization: Bearer $AGENTSTEAD_PROXY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model":"kimi-k2.5","messages":[{"role":"user","content":"Hello"}]}'
```
Bring your own keys
If you have your own provider API keys and prefer to use them directly, add them to Env Vars in the dashboard. They're encrypted at rest and injected into the container as environment variables at startup.
```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
OPENROUTER_API_KEY=sk-or-...
```
BYOK and managed access can coexist — use whichever makes sense per tool.
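One way to pick per tool is to prefer a personal key when it's present and fall back to the proxy token otherwise. The `llm_credentials` helper below is a hypothetical sketch, not AgentStead code:

```python
def llm_credentials(env):
    """Return (api_key, base_url) for an OpenAI-compatible client.

    Prefers a personal OPENAI_API_KEY set via Env Vars; otherwise
    falls back to the managed proxy token.
    """
    if env.get("OPENAI_API_KEY"):
        return env["OPENAI_API_KEY"], "https://api.openai.com/v1"
    return env["AGENTSTEAD_PROXY_TOKEN"], "https://proxy.agentstead.dev/llm/v1"
```

Pass `os.environ` and feed the result into `OpenAI(api_key=..., base_url=...)`.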
OpenClaw desktop tools
Tools running inside the OpenClaw desktop — Claude Code, Codex, OpenCode — use OpenClaw's own model configuration rather than the managed proxy endpoint.
For Claude Code and Codex, which call provider APIs directly using the Anthropic and OpenAI SDKs, you need a provider key in OpenClaw's config:
```bash
# Claude Code — Anthropic key
openclaw config set models.providers.anthropic.apiKey <key>

# Codex — OpenAI key
openclaw config set models.providers.openai.apiKey <key>

# View what's currently configured
openclaw config list models.providers
```
For OpenCode, the managed endpoint can be used by pointing it at the proxy:
```bash
opencode  # on first launch, select "Custom (OpenAI-compatible)" as provider
# base URL: https://proxy.agentstead.dev/llm/v1
# key: $AGENTSTEAD_PROXY_TOKEN
```
