Context Window: 1.0M
Input price / 1M tokens: Free
Output price / 1M tokens: Free
Cached input / 1M tokens: Free
Max Completion: 4K
Input Modalities: text
Output Modalities: text
Capabilities: Function calling, Chat, Streaming
Available Providers
AllToken can route requests to the providers below based on route priority and policy.
Provider | Context Length | Input Price | Output Price | Cached / M | Latency p50 | Throughput
How To Use This Model
Use the exact model ID shown below. This is the safest way to avoid call failures, variant mismatches, or incorrect route assumptions.
curl https://api.alltoken.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "glm-4.5",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Supported Parameters
temperature, top_p, max_tokens, tools

API Key Setup
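The supported sampling parameters can be added to the same request body used above. A minimal sketch follows; the parameter values are illustrative, not recommendations, and the local JSON check assumes python3 is available.

```shell
# Hypothetical values for the supported sampling parameters; adjust to taste.
BODY='{
  "model": "glm-4.5",
  "messages": [{"role": "user", "content": "Hello!"}],
  "temperature": 0.7,
  "top_p": 0.9,
  "max_tokens": 1024
}'

# Sanity-check the JSON locally before spending a request (requires python3).
echo "$BODY" | python3 -m json.tool > /dev/null && echo "payload ok"

# Then send it exactly as in the basic example:
# curl https://api.alltoken.ai/v1/chat/completions \
#   -H "Authorization: Bearer sk-your-key" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

Validating the body locally catches quoting mistakes (a common failure mode when editing inline JSON in shell) before they surface as API errors.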
Smart Routing
Let the platform choose the best provider path automatically.
Default Model
If a request does not specify a model, the key fills in glm-4.5 as the default.
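The fill-if-missing behavior can be sketched locally. This is an illustration of the routing rule using python3, not AllToken's actual implementation: the gateway effectively does the equivalent of `setdefault` on the request body.

```shell
# A request body that omits "model" entirely:
BODY='{"messages": [{"role": "user", "content": "Hello!"}]}'

# What a key with default model glm-4.5 effectively does:
# add "model" only when the request did not specify one.
RESOLVED=$(echo "$BODY" | python3 -c '
import json, sys
req = json.load(sys.stdin)
req.setdefault("model", "glm-4.5")   # fill only if missing
print(req["model"])
')
echo "$RESOLVED"   # glm-4.5
```

A request that already names a model would pass through unchanged under this mode.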
Forced Model
Override the model in every incoming request, locking the key to glm-4.5.
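By contrast, a forced-model key rewrites the model unconditionally. A minimal local sketch of that rewrite (again, an illustration with python3, not the gateway's code; "some-other-model" is a made-up name):

```shell
# A client that asks for some other model (hypothetical name):
BODY='{"model": "some-other-model", "messages": [{"role": "user", "content": "Hello!"}]}'

# A forced-model key overwrites whatever the client sent:
RESOLVED=$(echo "$BODY" | python3 -c '
import json, sys
req = json.load(sys.stdin)
req["model"] = "glm-4.5"   # unconditional override
print(req["model"])
')
echo "$RESOLVED"   # glm-4.5
```

Forced mode is useful when downstream tools hard-code a model name you do not want to honor.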
