# Chat Completions API

Creates a model response, with support for conversation, function calling, and more.
## Request

```http
POST /v1/chat/completions
```

### Request Headers

| Parameter | Type | Required | Description |
|---|---|---|---|
| Authorization | string | ✅ | Bearer sk-your-api-key |
| Content-Type | string | ✅ | application/json |
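Both headers can be attached with any HTTP client. A minimal sketch using Python's standard library that builds (but does not send) the request; the key is a placeholder:

```python
import json
import urllib.request

API_KEY = "sk-your-api-key"  # placeholder; substitute your real key

body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode("utf-8")

# Build the request to show the two required headers on the wire.
req = urllib.request.Request(
    "https://api.nextapi.pro/v1/chat/completions",
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_header("Authorization"))  # Bearer sk-your-api-key
```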
### Request Body

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": false
}
```

### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | ✅ | Model name, e.g. gpt-4o-mini |
| messages | array | ✅ | List of conversation messages |
| temperature | float | ❌ | Sampling temperature (0-2), default 1.0 |
| max_tokens | int | ❌ | Maximum number of tokens to generate |
| stream | bool | ❌ | Whether to stream the output, default false |
| top_p | float | ❌ | Top-p (nucleus sampling) parameter (0-1) |
| frequency_penalty | float | ❌ | Frequency penalty (-2 to 2) |
| presence_penalty | float | ❌ | Presence penalty (-2 to 2) |
| stop | array | ❌ | List of stop sequences |
| tools | array | ❌ | List of tools for function calling |
| tool_choice | string | ❌ | Tool-choice setting for function calling |
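The `tools` and `tool_choice` parameters follow the OpenAI function-calling tool schema. A sketch of a request payload declaring a single hypothetical `get_weather` tool:

```python
# Sketch of a function-calling payload; get_weather is a hypothetical tool.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What's the weather in Beijing?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
    # "auto" lets the model decide whether to call a tool; "none" disables
    # tool calls entirely.
    "tool_choice": "auto",
}
```

With `tool_choice: "auto"`, the model may return a `tool_calls` entry in the assistant message instead of plain `content`.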
## Response

### Success Response

```json
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```

### Streaming Response
With `stream: true`, the response is delivered as SSE (Server-Sent Events):

```
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: [DONE]
```

## Examples
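Each SSE `data:` line carries a JSON chunk whose `choices[0].delta` holds an increment of the reply; clients without an SDK can accumulate the deltas themselves. A minimal parsing sketch over chunks shaped like the ones above:

```python
import json

def accumulate_sse(lines):
    """Concatenate delta.content from a sequence of SSE `data:` lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separators and comments
        data = line[len("data: "):]
        if data == "[DONE]":  # sentinel marking the end of the stream
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # the first chunk carries only the role
            text.append(delta["content"])
    return "".join(text)

sample = [
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1234567890,"model":"gpt-4o-mini","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: [DONE]',
]
print(accumulate_sse(sample))  # Hello
```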
### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://api.nextapi.pro/v1"
)

# Standard request
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

# Streaming request
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

### Node.js

```javascript
const OpenAI = require('openai');

const openai = new OpenAI({
  apiKey: 'sk-your-api-key',
  baseURL: 'https://api.nextapi.pro/v1'
});

// Standard request
async function chat() {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
  console.log(completion.choices[0].message.content);
}

// Streaming request
async function streamChat() {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Write a poem' }],
    stream: true,
  });
  for await (const chunk of stream) {
    if (chunk.choices[0]?.delta?.content) {
      // Write without a trailing newline so the deltas join into one reply
      process.stdout.write(chunk.choices[0].delta.content);
    }
  }
}

chat();
```

### cURL

```bash
# Standard request
curl https://api.nextapi.pro/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

# Streaming request
curl https://api.nextapi.pro/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Write a poem"}],
    "stream": true
  }'
```

## Error Handling
| HTTP Status Code | Description | Resolution |
|---|---|---|
| 400 | Invalid request parameters | Check the request format |
| 401 | Authentication failed | Check your API key |
| 403 | Insufficient permissions | Check your account permissions |
| 429 | Too many requests | Reduce your request rate |
| 500 | Server error | Contact support |
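For transient failures such as 429, a common pattern is retry with exponential backoff. A generic sketch (which exception to catch depends on your client library, e.g. `openai.RateLimitError` in the official SDK); the `flaky_call` below is a stand-in for a rate-limited API call:

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0,
                 retryable=(Exception,)):
    """Retry `call` on retryable errors, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Usage sketch with a simulated rate-limited call that succeeds on try 3:
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")  # simulated rate limit
    return "ok"

print(with_retries(flaky_call, base_delay=0.01))  # ok
```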
