Quickstart
Get started with AllToken.
Introduction
AllToken provides a unified API with access to 300+ AI models through a single endpoint, with automatic fallbacks and cost-effective routing built in.
Get started in minutes with your preferred SDK or HTTP client.
Base URL: https://api.alltoken.ai/v1
Auth: Bearer API key
Compatibility: OpenAI-compatible API
Get your API key
Before getting started, create an API key:
- Go to Settings → API Keys
- Click Create new key
- Copy and save the key securely — it's shown only once
Keep your API key secret. Do not expose it in client-side code or public repositories.
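A common way to follow this advice is to read the key from an environment variable at startup and fail fast if it is missing, so a misconfigured deployment never sends unauthenticated requests. A minimal sketch (the helper name `require_api_key` is illustrative, not part of any SDK):

```python
import os

def require_api_key(var: str = "ALLTOKEN_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before starting the app")
    return key
```

Passing the result of a helper like this to the SDK keeps the key out of source code entirely.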
Install the SDK
Use the OpenAI SDK with AllToken. Install it with your preferred package manager:
$ npm install openai

Then set your environment variable:

$ export ALLTOKEN_API_KEY="your_alltoken_api_key"

Send your first request
Create a client, pick a model, and send a chat completion:
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ALLTOKEN_API_KEY,
  baseURL: 'https://api.alltoken.ai/v1',
});

const completion = await client.chat.completions.create({
  model: 'deepseek-chat',
  messages: [
    {
      role: 'user',
      content: 'What is the meaning of life?',
    },
  ],
});

console.log(completion.choices[0]?.message?.content);
```
Python example
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("ALLTOKEN_API_KEY"),
    base_url="https://api.alltoken.ai/v1",
)

completion = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "user", "content": "What is the meaning of life?"}
    ],
)

print(completion.choices[0].message.content)
```
Using the API directly
Call the API directly with cURL or any HTTP client:
```shell
curl https://api.alltoken.ai/v1/chat/completions \
  -H "Authorization: Bearer $ALLTOKEN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```
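Whichever HTTP client you use, the request always reduces to the same two headers and one JSON body. As a sketch, here is how that request could be assembled (not sent) with nothing but the Python standard library; the helper name `build_chat_request` is illustrative:

```python
import json
import os

def build_chat_request(model: str, user_content: str) -> tuple[dict, bytes]:
    """Assemble headers and JSON body for POST /v1/chat/completions."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('ALLTOKEN_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }).encode("utf-8")
    return headers, body

headers, body = build_chat_request("deepseek-chat", "Hello!")
```

From here, sending it is a single POST to https://api.alltoken.ai/v1/chat/completions with any HTTP stack, e.g. `urllib.request` or `requests`.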
Streaming responses
Add stream: true to get responses token-by-token via Server-Sent Events:
```typescript
const stream = await client.chat.completions.create({
  model: 'deepseek-chat',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
```
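On the wire, each event arrives as a `data:` line carrying a JSON chunk, with a final `data: [DONE]` sentinel marking the end of the stream; the loop above is doing the equivalent of this parsing for you. A minimal sketch of extracting the text deltas from raw SSE lines, assuming the OpenAI-style chunk format shown (`choices[0].delta.content`):

```python
import json

def extract_deltas(sse_lines):
    """Pull incremental text out of OpenAI-style SSE 'data:' lines."""
    out = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            out.append(delta["content"])
    return out
```

In practice the SDK handles this for you; the sketch is only to show what "token-by-token via Server-Sent Events" means concretely.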
For detailed streaming documentation, see Streaming.
Next steps
- Browse available models — compare pricing, capabilities, and context windows
- Authentication — API key management and security
- Streaming — real-time response handling
- Model Routing — automatic provider selection and fallbacks
- API Reference — full Chat Completions API documentation