This repository was archived by the owner on Nov 27, 2025. It is now read-only.
62 changes: 34 additions & 28 deletions README.md
@@ -9,11 +9,11 @@
![](https://img.shields.io/github/forks/llm-red-team/qwen-free-api.svg)
![](https://img.shields.io/docker/pulls/vinlic/qwen-free-api.svg)

Supports high-speed streaming output, multi-turn conversations, watermark-free AI image generation, long-document interpretation, and image analysis; zero-configuration deployment, multi-token support, and automatic cleanup of conversation traces.
Supports high-speed streaming output, multi-turn conversations, watermark-free AI image generation, long-document interpretation, image analysis, and web search; zero-configuration deployment, multi-token support, and automatic cleanup of conversation traces.

Fully compatible with the ChatGPT API.

The following nine free-api projects are also worth following:
The following ten free-api projects are also worth following:

Moonshot AI (Kimi.ai) API adapter [kimi-free-api](https://github.com/LLM-Red-Team/kimi-free-api)

@@ -25,6 +25,8 @@ Moonshot AI (Kimi.ai) API adapter [kimi-free-api](https://github.com/LLM-Red-

ByteDance (Doubao) API adapter [doubao-free-api](https://github.com/LLM-Red-Team/doubao-free-api)

ByteDance (Jimeng AI) API adapter [jimeng-free-api](https://github.com/LLM-Red-Team/jimeng-free-api)

iFlytek Spark API adapter [spark-free-api](https://github.com/LLM-Red-Team/spark-free-api)

MiniMax (Hailuo AI) API adapter [hailuo-free-api](https://github.com/LLM-Red-Team/hailuo-free-api)

@@ -35,26 +37,36 @@ MiniMax (Hailuo AI) API adapter [hailuo-free-T

## Table of Contents

* [Disclaimer](#免责声明)
* [Online Demo](#在线体验)
* [Examples](#效果示例)
* [Access Preparation](#接入准备)
* [Docker Deployment](#Docker部署)
* [Docker-compose Deployment](#Docker-compose部署)
* [Render Deployment](#Render部署)
* [Vercel Deployment](#Vercel部署)
* [Native Deployment](#原生部署)
* [Recommended Clients](#推荐使用客户端)
* [API List](#接口列表)
* [Chat Completion](#对话补全)
* [AI Image Generation](#AI绘图)
* [Document Interpretation](#文档解读)
* [Image Analysis](#图像解析)
* [Ticket Liveness Check](#ticket存活检测)
* [Notes](#注意事项)
* [Nginx Reverse Proxy Optimization](#Nginx反代优化)
* [Token Statistics](#Token统计)
* [Star History](#star-history)
- [Qwen AI Free Service](#qwen-ai-free-服务)
- [Table of Contents](#目录)
- [Disclaimer](#免责声明)
- [Examples](#效果示例)
- [Identity Verification Demo](#验明正身demo)
- [Multi-turn Conversation Demo](#多轮对话demo)
- [AI Image Generation Demo](#ai绘图demo)
- [Long-document Interpretation Demo](#长文档解读demo)
- [Image Analysis Demo](#图像解析demo)
- [10-thread Concurrency Test](#10线程并发测试)
- [Access Preparation](#接入准备)
- [Method 1](#方法1)
- [Method 2](#方法2)
- [Multi-account Access](#多账号接入)
- [Docker Deployment](#docker部署)
- [Docker-compose Deployment](#docker-compose部署)
- [Render Deployment](#render部署)
- [Vercel Deployment](#vercel部署)
- [Native Deployment](#原生部署)
- [Recommended Clients](#推荐使用客户端)
- [API List](#接口列表)
- [Chat Completion](#对话补全)
- [AI Image Generation](#ai绘图)
- [Document Interpretation](#文档解读)
- [Image Analysis](#图像解析)
- [Ticket Liveness Check](#ticket存活检测)
- [Notes](#注意事项)
- [Nginx Reverse Proxy Optimization](#nginx反代优化)
- [Token Statistics](#token统计)
- [Star History](#star-history)

## Disclaimer

@@ -68,12 +80,6 @@ MiniMax (Hailuo AI) API adapter [hailuo-free-T

**For personal use only. Providing external services or commercial use is prohibited, to avoid putting pressure on the official service; otherwise, you bear the risk yourself!**

## Online Demo

This link is for temporary feature testing only; deploy your own instance for long-term use.

https://udify.app/chat/qOXzVl5kkvhQXM8r

## Examples

### Identity Verification Demo
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "qwen-free-api",
"version": "0.0.20",
"version": "0.0.21",
"description": "Qwen Free API Server",
"type": "module",
"main": "dist/index.js",
147 changes: 140 additions & 7 deletions src/api/controllers/chat.ts
@@ -80,6 +80,7 @@ async function removeConversation(convId: string, ticket: string) {
async function createCompletion(
model = MODEL_NAME,
messages: any[],
searchType: string = '',
ticket: string,
refConvId = '',
retryCount = 0
@@ -129,7 +130,9 @@
sessionType: "text_chat",
parentMsgId,
params: {
"fileUploadBatchId": util.uuid()
"fileUploadBatchId": util.uuid(),
"searchType": searchType,
"deepThink": model === "qwq",
},
contents: messagesPrepare(messages, refs, !!refConvId),
})
@@ -173,6 +176,7 @@
async function createCompletionStream(
model = MODEL_NAME,
messages: any[],
searchType: string = '',
ticket: string,
refConvId = '',
retryCount = 0
@@ -220,7 +224,9 @@
sessionType: "text_chat",
parentMsgId,
params: {
"fileUploadBatchId": util.uuid()
"fileUploadBatchId": util.uuid(),
"searchType": searchType,
"deepThink": model === "qwq",
},
contents: messagesPrepare(messages, refs, !!refConvId),
})
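Both completion paths now forward the same two new upstream parameters: the caller-supplied `searchType` and a `deepThink` flag derived from the model name. The mapping can be sketched as a small helper (the `buildParams` name is an assumption for illustration; the diff inlines this object, with `uuid` standing in for the project's `util.uuid()`):

```typescript
// Sketch: maps the public model/search inputs onto the upstream chat params.
function buildParams(model: string, searchType: string, uuid: () => string) {
  return {
    fileUploadBatchId: uuid(),
    searchType,                  // "" leaves web search disabled
    deepThink: model === "qwq",  // only the qwq model enables deep thinking
  };
}
```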
@@ -440,13 +446,22 @@ async function receiveStream(stream: any): Promise<any> {
choices: [
{
index: 0,
message: { role: "assistant", content: "" },
message: {
role: "assistant",
content: "",
reasoning_content: "" // stores think-type content
},
finish_reason: "stop",
},
],
usage: { prompt_tokens: 1, completion_tokens: 1, total_tokens: 2 },
created: util.unixTimestamp(),
};

// Tracks the think content that has already been processed
let processedThinkContent = "";
let hasCompletedThinking = false; // flag: the full thinking content has been processed

const parser = createParser((event) => {
try {
if (event.type !== "event") return;
@@ -457,12 +472,59 @@
throw new Error(`Stream response invalid: ${event.data}`);
if (!data.id && result.sessionId && result.msgId)
data.id = `${result.sessionId}-${result.msgId}`;

// Handle think-type content
if (result.contentType === "think") {
// Skip if the full thinking content has already been processed
if (result.msgStatus === "finished" && hasCompletedThinking) {
// skip the duplicated full thinking content
} else {
const thinkContents = (result.contents || []).filter(part => part.contentType === "think");
for (const part of thinkContents) {
try {
const thinkContent = JSON.parse(part.content);
if (thinkContent && thinkContent.content) {
// Only append new content, avoiding duplicates
const newContent = thinkContent.content;
if (!processedThinkContent.includes(newContent)) {
// For incremental updates, append only the new portion
if (part.incremental && processedThinkContent) {
const uniquePart = newContent.substring(processedThinkContent.length);
if (uniquePart) {
data.choices[0].message.reasoning_content += uniquePart;
processedThinkContent = newContent;
}
} else {
data.choices[0].message.reasoning_content += newContent;
processedThinkContent = newContent;
}
}

// If the status is finished, mark the thinking content as complete
if (part.status === "finished") {
hasCompletedThinking = true;
}
}
} catch (e) {
// If JSON parsing fails, append the raw content directly
if (!processedThinkContent.includes(part.content)) {
data.choices[0].message.reasoning_content += part.content;
processedThinkContent += part.content;
}
}
}
}
}

// Handle text-type content
const text = (result.contents || []).reduce((str, part) => {
const { contentType, role, content } = part;
if (contentType != "text" && contentType != "text2image") return str;
if (role != "assistant" && !_.isString(content)) return str;
return str + content;
}, "");

// ... the rest of the code is unchanged ...
const exceptCharIndex = text.indexOf("�");
let chunk = text.substring(
exceptCharIndex != -1
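The de-duplication above is the subtle part: upstream `think` events may repeat the full accumulated text rather than send just a delta. The state update can be isolated into a small pure function for clarity (the `appendThink` helper and `ThinkState` shape are illustrative, not part of the patch):

```typescript
interface ThinkState {
  processed: string;  // full think text seen so far
  reasoning: string;  // what has been appended to reasoning_content
}

// Sketch: apply one think-content part, appending only text not yet seen.
function appendThink(state: ThinkState, content: string, incremental: boolean): ThinkState {
  if (state.processed.includes(content)) return state; // duplicate snapshot, skip
  if (incremental && state.processed && content.startsWith(state.processed)) {
    // Cumulative snapshot: emit only the new suffix.
    return {
      processed: content,
      reasoning: state.reasoning + content.substring(state.processed.length),
    };
  }
  // Entirely new content: emit all of it.
  return { processed: content, reasoning: state.reasoning + content };
}
```

Modeling the update as `(state, event) → state` makes the duplicate-snapshot and suffix-only paths easy to unit-test in isolation from the SSE parser.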
@@ -519,6 +581,10 @@ function createTransStream(stream: any, endCallback?: Function) {
// 创建转换流
const transStream = new PassThrough();
let content = "";
let reasoningContent = ""; // stores the think content processed so far
let hasCompletedThinking = false; // flag: the full thinking content has been processed

// Initialize the response
!transStream.closed &&
transStream.write(
`data: ${JSON.stringify({
@@ -528,13 +594,14 @@
choices: [
{
index: 0,
delta: { role: "assistant", content: "" },
delta: { role: "assistant", content: "", reasoning_content: "" },
finish_reason: null,
},
],
created,
})}\n\n`
);

const parser = createParser((event) => {
try {
if (event.type !== "event") return;
@@ -543,19 +610,84 @@
const result = _.attempt(() => JSON.parse(event.data));
if (_.isError(result))
throw new Error(`Stream response invalid: ${event.data}`);

// Handle think-type content
if (result.contentType === "think") {
// Skip if the full thinking content has already been processed
if (result.msgStatus === "finished" && hasCompletedThinking) {
// skip the duplicated full thinking content
} else {
const thinkContents = (result.contents || []).filter(part => part.contentType === "think");
let newReasoningChunk = "";

for (const part of thinkContents) {
try {
const thinkContent = JSON.parse(part.content);
if (thinkContent && thinkContent.content) {
const fullThinkContent = thinkContent.content;

// Determine the newly added portion
let newPart = "";
if (part.incremental && reasoningContent && fullThinkContent.startsWith(reasoningContent)) {
// Incremental update that extends the previous content: take only the new suffix
newPart = fullThinkContent.substring(reasoningContent.length);
} else if (!reasoningContent.includes(fullThinkContent)) {
// Entirely new content
newPart = fullThinkContent;
}

if (newPart) {
newReasoningChunk += newPart;
reasoningContent = fullThinkContent; // update the processed content
}

// If the status is finished, mark the thinking content as complete
if (part.status === "finished") {
hasCompletedThinking = true;
}
}
} catch (e) {
// JSON parsing failed; check whether there is new content
const rawContent = part.content;
if (!reasoningContent.includes(rawContent)) {
newReasoningChunk += rawContent;
reasoningContent += rawContent;
}
}
}

// If there is new reasoning content, send an incremental update
if (newReasoningChunk) {
const data = `data: ${JSON.stringify({
id: `${result.sessionId}-${result.msgId}`,
model: MODEL_NAME,
object: "chat.completion.chunk",
choices: [
{ index: 0, delta: { reasoning_content: newReasoningChunk }, finish_reason: null },
],
created,
})}\n\n`;
!transStream.closed && transStream.write(data);
}
}
}

// Handle text-type content
const text = (result.contents || []).reduce((str, part) => {
const { contentType, role, content } = part;
if (contentType != "text" && contentType != "text2image") return str;
if (role != "assistant" && !_.isString(content)) return str;
return str + content;
}, "");

const exceptCharIndex = text.indexOf("�");
let chunk = text.substring(
exceptCharIndex != -1
? Math.min(content.length, exceptCharIndex)
: content.length,
exceptCharIndex == -1 ? text.length : exceptCharIndex
);

if (chunk && result.contentType == "text2image") {
chunk = chunk.replace(
/https?:\/\/[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=\,]*)/gi,
@@ -566,6 +698,7 @@
}
);
}

if (result.msgStatus != "finished") {
if (chunk && result.contentType == "text") {
content += chunk;
@@ -603,15 +736,15 @@
!transStream.closed && transStream.write(data);
!transStream.closed && transStream.end("data: [DONE]\n\n");
content = "";
reasoningContent = "";
endCallback && endCallback(result.sessionId);
}
// else
// logger.warn(result.event, result);
} catch (err) {
logger.error(err);
!transStream.closed && transStream.end("\n\n");
}
});

// Feed stream data into the SSE parser
stream.on("data", (buffer) => parser.feed(buffer.toString()));
stream.once(
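Each new reasoning chunk is forwarded to the client as an OpenAI-style SSE frame whose delta carries `reasoning_content` rather than `content`. The frame construction can be sketched as follows (the `reasoningFrame` helper name is assumed; the diff builds this string inline):

```typescript
// Sketch: build the SSE data frame for one incremental reasoning chunk.
function reasoningFrame(id: string, model: string, chunk: string, created: number): string {
  return `data: ${JSON.stringify({
    id,
    model,
    object: "chat.completion.chunk",
    choices: [
      { index: 0, delta: { reasoning_content: chunk }, finish_reason: null },
    ],
    created,
  })}\n\n`;
}
```

Clients that recognize a `reasoning_content` delta field can render the thought stream separately from the final answer, while clients that ignore unknown delta fields still receive a valid chat-completion stream.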
@@ -788,7 +921,7 @@ async function uploadFile(fileUrl: string, ticket: string) {
// Upload the file to OSS
await axios.request({
method: "POST",
url: "https://broadscope-dialogue.oss-cn-beijing.aliyuncs.com/",
url: "https://broadscope-dialogue-new.oss-cn-beijing.aliyuncs.com/",
data: formData,
// 100 MB limit
maxBodyLength: FILE_MAX_SIZE,
4 changes: 3 additions & 1 deletion src/api/routes/chat.ts
@@ -17,11 +17,12 @@ export default {
const tokens = chat.tokenSplit(request.headers.authorization);
// Randomly pick one ticket
const token = _.sample(tokens);
const { model, conversation_id: convId, messages, stream } = request.body;
const { model, conversation_id: convId, messages, search_type, stream } = request.body;
if (stream) {
const stream = await chat.createCompletionStream(
model,
messages,
search_type,
token,
convId
);
@@ -32,6 +33,7 @@
return await chat.createCompletion(
model,
messages,
search_type,
token,
convId
);
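With this route change, a client opts into web search by adding `search_type` to an otherwise standard chat-completions body. A sketch of the payload (the `"web_search"` value is an assumption — the accepted values are not documented in this diff; an empty string leaves search off):

```typescript
// Sketch: OpenAI-compatible request body with the new optional search_type field.
const body = {
  model: "qwen",
  messages: [{ role: "user", content: "What is in the news today?" }],
  search_type: "web_search", // assumed value; "" disables search
  stream: false,
};

// The handler destructures these keys (plus conversation_id) from request.body:
const { model, messages, search_type, stream } = body;
```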