Chinese LLMs through one OpenAI-compatible gateway

One API key,
route every model

Connect once and route requests across DeepSeek, Qwen, Kimi, GLM, Doubao, and other leading Chinese models. Keep your OpenAI-compatible payloads while gaining request logs and clear operational visibility.

200 OK
POST /v1/chat/completions
Request
curl -X POST "/v1/chat/completions" \
  -H "Authorization: Bearer sk-••••" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model",
    "messages": [
      { "role": "user", "content": "..." }
    ]
  }'
Response
{
  "choices": [{ "message": { "content": "Chat request routed." } }],
  "usage": { "total_tokens": 27 }
}
142 ms · 27 tokens · $0.00081
STREAM · SSE
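When streaming is enabled, responses arrive as Server-Sent Events in the standard OpenAI chunk format. A minimal sketch of consuming such a stream, assuming each event is a single `data: <json>` line ending with `data: [DONE]` (the example chunks below are illustrative, not captured from a real provider):

```python
import json

def parse_sse_lines(lines):
    """Yield content deltas from OpenAI-style SSE lines."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Illustrative stream:
stream = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Chat request "}}]}',
    'data: {"choices": [{"delta": {"content": "routed."}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_lines(stream)))  # -> Chat request routed.
```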
Platform capabilities

A cleaner route
from prototype to production.

01

Model routing

Switch between leading Chinese providers without rebuilding client code.

DeepSeek · Qwen · GLM · Kimi · Doubao · Llama
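Because the endpoint stays the same, only the `model` string changes per provider. A sketch of the kind of prefix-based route table a gateway might keep (the prefixes and provider names here are illustrative assumptions, not the gateway's real routing config):

```python
ROUTES = {  # illustrative model-name prefixes
    "deepseek": "DeepSeek",
    "qwen": "Qwen",
    "glm": "GLM",
    "moonshot": "Kimi",
    "doubao": "Doubao",
    "llama": "Llama",
}

def provider_for(model):
    """Resolve the upstream provider from the model name's prefix."""
    for prefix, provider in ROUTES.items():
        if model.lower().startswith(prefix):
            return provider
    raise KeyError(f"no route for model {model!r}")

print(provider_for("deepseek-chat"))  # DeepSeek
print(provider_for("qwen-max"))      # Qwen
```

Client code never touches this table; it just sets a different `model` value.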
02

Governed access

Manage channels, quotas and keys from one operational layer.

Load balancing · Rate limits · Cost tracking
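Per-key rate limits are typically enforced with a token bucket. A minimal sketch of that mechanism (a generic illustration, not the gateway's actual implementation; the rate and capacity values are placeholders):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow bursts up to
    `capacity`, then refill at `rate` tokens per second."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=0.1, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # -> [True, True, False]: burst of 2, then throttled
```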
03

Stable delivery

Route traffic through available regions with latency and reliability in mind.

US · CN · APAC · EU
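Region selection of this kind reduces to picking the healthy region with the best measured latency. A sketch under that assumption (region names match the list above; the latency figures and health states are made up for illustration):

```python
def pick_region(latencies_ms, healthy):
    """Choose the healthy region with the lowest measured latency."""
    candidates = {r: ms for r, ms in latencies_ms.items() if r in healthy}
    if not candidates:
        raise RuntimeError("no healthy region available")
    return min(candidates, key=candidates.get)

latencies = {"US": 142, "CN": 38, "APAC": 95, "EU": 180}
# CN marked unhealthy, so the next-fastest region wins:
print(pick_region(latencies, healthy={"US", "APAC", "EU"}))  # -> APAC
```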
04

Developer workflow

Keep familiar request formats while adding pricing, logs and provider choice.

API · SDK · CLI · Docs
Workflow

Integrate once. Operate clearly.

1

Set your channel policy

Create keys, choose providers and define the routing behavior for each workload.
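A channel policy for this step might look like the following illustrative config. Every field name and value here is an assumption for the sketch, not the gateway's actual schema:

```json
{
  "key": "sk-••••",
  "providers": ["deepseek", "qwen"],
  "routing": {
    "strategy": "weighted",
    "weights": { "deepseek": 70, "qwen": 30 },
    "fallback": "qwen"
  },
  "limits": {
    "requests_per_minute": 600,
    "monthly_budget_usd": 50
  }
}
```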

2

Call the unified endpoint

Use familiar OpenAI-style requests for chat, responses and provider-specific message APIs.
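The request shape is the standard OpenAI chat format. A sketch of building such a request in Python (the gateway URL is a placeholder; the headers and body mirror the curl example above):

```python
import json

# Hypothetical gateway URL -- substitute your deployment's base URL.
GATEWAY = "https://your-gateway.example/v1/chat/completions"

def build_request(model, prompt, api_key):
    """Assemble headers and an OpenAI-compatible JSON body."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("deepseek-chat", "Hello", "sk-••••")
print(json.loads(body)["model"])  # -> deepseek-chat
```

Send `body` with any HTTP client; switching providers means changing only the `model` argument.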

3

Track every request

Review latency, token usage and cost signals from the same console.
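Cost signals derive directly from the `usage` block each response returns. A sketch of the arithmetic, using a hypothetical per-1K-token price (real rates vary by provider and model):

```python
def request_cost(usage, price_per_1k):
    """Estimate request cost from a response's `usage` block.
    `price_per_1k` is a placeholder rate, not a real provider price."""
    return usage["total_tokens"] / 1000 * price_per_1k

usage = {"total_tokens": 27}  # as in the sample response above
print(round(request_cost(usage, price_per_1k=0.03), 5))  # -> 0.00081
```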
