feat: Introducing OpenRouter, supporting models such as Claude, PaLM2, Llama 2.
Peek-A-Booo committed Aug 13, 2023
1 parent 8ae7dcb commit 6c4b020
Showing 37 changed files with 743 additions and 133 deletions.
5 changes: 4 additions & 1 deletion .env.local.demo
@@ -44,4 +44,7 @@ GOOGLE_SEARCH_ENGINE_ID=
GOOGLE_SEARCH_API_KEY=

# SERPER API KEY
SERPER_API_KEY=

# OpenRouter API KEY
NEXT_PUBLIC_OPENROUTER_API_KEY=
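
A minimal sketch of how this key is consumed server-side; the variable names and fallback chain mirror src/app/api/openRouter/route.ts in this commit, and headerApiKey is a hypothetical stand-in for a key taken from the request headers:

// Sketch: a key sent by the client in the Authorization header wins;
// the env key is the fallback; otherwise the route rejects the request.
const headerApiKey = ""; // hypothetical: key read from the request headers
const API_KEY = process.env.NEXT_PUBLIC_OPENROUTER_API_KEY;
const Authorization = headerApiKey || API_KEY || "";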
6 changes: 5 additions & 1 deletion CHANGE_LOG.md
@@ -2,23 +2,27 @@

## v0.8.3

> 2023-08-12
> 2023-08-13
### Fixed

- Fixed the session content obscuring the bottom input box on mobile
- Refactored the function calling invocation logic and fixed bugs
- Fixed the drop-down selection box "drifting" when selecting a model for a new session

### Added

- Added function calling support
- Added a plugin system
- Added support for Google search: when a question falls outside the AI model's training-data cutoff, the Google API can be called to search and return results
- Introduced OpenRouter to support Claude, PaLM2, Llama 2, and other models (see the request sketch after this changelog diff)

### Changed

- Changed the text input for editing chat content to a Textarea
- Replaced the official Google Search API with the [Serper API](https://serper.dev/), which is easier to configure
- All models now use OpenAI gpt-3.5-turbo to generate conversation titles, reducing token consumption
- When using models provided by OpenRouter, plugins are hidden, since OpenRouter models do not support plugins at this time

## v0.8.2

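For illustration, a minimal sketch of a request to the new route. The endpoint path and body fields come from src/app/api/openRouter/route.ts in this commit; the model slug, modelLabel, and message content are hypothetical placeholder values:

// Sketch: calling the new /api/openRouter route from the client.
const res = await fetch("/api/openRouter", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "anthropic/claude-2", // hypothetical slug, forwarded to OpenRouter as-is
    modelLabel: "gpt-3.5-turbo", // used only for token calculation
    temperature: 1,
    max_tokens: 1000,
    prompt: "You are a helpful assistant.",
    chat_list: [{ role: "user", content: "Hello" }],
  }),
});
// The route streams the completion back, so res.body is a ReadableStream.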
6 changes: 5 additions & 1 deletion CHANGE_LOG.zh_CN.md
@@ -2,23 +2,27 @@

## v0.8.3

> 2023-08-12
> 2023-08-13
### Fixed

- Fixed the session content obscuring the bottom input box on mobile
- Refactored the function calling invocation logic and fixed bugs
- Fixed the drop-down selection box "drifting" when selecting a model for a new session

### Added

- Added function calling support
- Added a plugin system
- Added support for Google search: when a question falls outside the AI model's training-data cutoff, the Google API can be called to search and return results
- Introduced OpenRouter to support Claude, PaLM2, Llama 2, and other models

### Changed

- Changed the text input for editing chat content to a Textarea
- Replaced the official Google Search API with the [Serper API](https://serper.dev/), which is easier to configure
- All models now use OpenAI gpt-3.5-turbo to generate conversation titles, reducing token consumption
- When using models provided by OpenRouter, plugins are hidden, since OpenRouter models do not support plugins at this time

## v0.8.2

2 changes: 1 addition & 1 deletion package.json
@@ -21,7 +21,7 @@
"@react-email/components": "0.0.7",
"@react-email/render": "0.0.7",
"@svgr/webpack": "8.0.1",
"@types/node": "20.4.9",
"@types/node": "20.4.10",
"@types/react": "18.2.20",
"@types/react-dom": "18.2.7",
"@upstash/redis": "1.22.0",
10 changes: 5 additions & 5 deletions pnpm-lock.yaml

Some generated files are not rendered by default.

Binary file added public/claude.webp
Binary file added public/palm.webp
2 changes: 1 addition & 1 deletion src/app/api/azure/function_call.ts
@@ -53,7 +53,7 @@ export const function_call = async ({
}: IFunctionCall & { plugins: fn_call[] }) => {
try {
const temperature = isUndefined(p_temperature) ? 1 : p_temperature;
const max_tokens = isUndefined(p_max_tokens) ? 2000 : p_max_tokens;
const max_tokens = isUndefined(p_max_tokens) ? 1000 : p_max_tokens;

const response = await fetchAzureOpenAI({
fetchURL,
2 changes: 1 addition & 1 deletion src/app/api/azure/regular.ts
@@ -29,7 +29,7 @@ const fetchAzureOpenAI = async ({
presence_penalty: 0,
stream: true,
temperature: isUndefined(temperature) ? 1 : temperature,
max_tokens: isUndefined(max_tokens) ? 2000 : max_tokens,
max_tokens: isUndefined(max_tokens) ? 1000 : max_tokens,
messages,
stop: null,
}),
84 changes: 84 additions & 0 deletions src/app/api/openRouter/regular.ts
@@ -0,0 +1,84 @@
import { ResErr, isUndefined } from "@/lib";
import { stream } from "@/lib/stream";
import type { supportModelType } from "@/lib/calcTokens/gpt-tokens";
import type { IFetchOpenRouter } from "./types";

interface IRegular extends IFetchOpenRouter {
prompt?: string;
modelLabel: supportModelType;
userId?: string;
headerApiKey?: string;
}

const fetchOpenRouter = async ({
fetchURL,
Authorization,
model,
temperature,
max_tokens,
messages,
}: IFetchOpenRouter) => {
return await fetch(fetchURL, {
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${Authorization}`,
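      // OpenRouter attribution headers: HTTP-Referer and X-Title identify
      // the calling app to OpenRouter.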
"HTTP-Referer": "https://chat.ltopx.com",
"X-Title": "L-GPT",
},
method: "POST",
body: JSON.stringify({
stream: true,
model,
temperature: isUndefined(temperature) ? 1 : temperature,
max_tokens: isUndefined(max_tokens) ? 1000 : max_tokens,
messages,
}),
});
};

export const regular = async ({
prompt,
messages,
fetchURL,
Authorization,
model,
modelLabel,
temperature,
max_tokens,
userId,
headerApiKey,
}: IRegular) => {
if (prompt) messages.unshift({ role: "system", content: prompt });

try {
const response = await fetchOpenRouter({
fetchURL,
Authorization,
model,
temperature,
max_tokens,
messages,
});

if (response.status !== 200) {
return new Response(response.body, { status: 500 });
}

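    // Pipe the upstream body through a TransformStream; stream() forwards
    // chunks to the client, carrying modelLabel/userId for token calculation.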
const { readable, writable } = new TransformStream();

stream({
readable: response.body as ReadableStream,
writable,
userId,
headerApiKey,
messages,
model,
modelLabel,
});

return new Response(readable, response);
} catch (error: any) {
console.log(error, "openrouter regular error");
return ResErr({ msg: error?.message || "Error" });
}
};
75 changes: 75 additions & 0 deletions src/app/api/openRouter/route.ts
@@ -0,0 +1,75 @@
import { headers } from "next/headers";
import { getServerSession } from "next-auth/next";
import { authOptions } from "@/utils/plugin/auth";
import { prisma } from "@/lib/prisma";
import { ResErr } from "@/lib";
import { PREMIUM_MODELS } from "@/hooks/useLLM";
import { regular } from "./regular";

export async function POST(request: Request) {
const session = await getServerSession(authOptions);
const headersList = headers();
const headerApiKey = headersList.get("Authorization") || "";
const API_KEY = process.env.NEXT_PUBLIC_OPENROUTER_API_KEY;

const {
// model: the request parameter sent to the OpenRouter API
model,
// modelLabel: used for token calculation
modelLabel,
temperature,
max_tokens,
prompt,
chat_list,
} = await request.json();

/**
* If not logged in, only the locally configured API Key can be used.
*/
if (!session && !headerApiKey) return ResErr({ error: 10001 });

if (!headerApiKey) {
const user = await prisma.user.findUnique({
where: { id: session?.user.id },
});
if (!user) return ResErr({ error: 20002 });

// audit user license
if (
user.license_type !== "premium" &&
user.license_type !== "team" &&
PREMIUM_MODELS.includes(modelLabel)
) {
return ResErr({ error: 20009 });
}

const { availableTokens } = user;
if (availableTokens <= 0) return ResErr({ error: 10005 });
}

// first use the key supplied by the client
// then fall back to the env configuration
// otherwise empty
const Authorization = headerApiKey || API_KEY || "";

if (!Authorization) return ResErr({ error: 10002 });

const fetchURL = "https://openrouter.ai/api/v1/chat/completions";

const messages = [...chat_list];

const userId = session?.user.id;

return await regular({
prompt,
messages,
fetchURL,
Authorization,
model,
modelLabel,
temperature,
max_tokens,
userId,
headerApiKey,
});
}
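
A note on precedence: a key supplied in the request's Authorization header overrides the NEXT_PUBLIC_OPENROUTER_API_KEY env value, and unauthenticated visitors must supply their own key. A minimal sketch with a hypothetical personal key:

// Sketch: supplying a personal OpenRouter key per request; the server
// passes it upstream as a Bearer token (see regular.ts above).
await fetch("/api/openRouter", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "sk-or-xxxx", // hypothetical placeholder key
  },
  body: JSON.stringify({
    model: "anthropic/claude-2", // hypothetical slug, as above
    modelLabel: "gpt-3.5-turbo",
    chat_list: [{ role: "user", content: "Hi" }],
  }),
});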
8 changes: 8 additions & 0 deletions src/app/api/openRouter/types.ts
@@ -0,0 +1,8 @@
export interface IFetchOpenRouter {
messages: any[];
fetchURL: string;
Authorization: string;
model: string;
temperature?: number;
max_tokens?: number;
}
2 changes: 1 addition & 1 deletion src/app/api/openai/function_call.ts
@@ -53,7 +53,7 @@ export const function_call = async ({
}: IFunctionCall) => {
try {
const temperature = isUndefined(p_temperature) ? 1 : p_temperature;
const max_tokens = isUndefined(p_max_tokens) ? 2000 : p_max_tokens;
const max_tokens = isUndefined(p_max_tokens) ? 1000 : p_max_tokens;

const response = await fetchOpenAI({
fetchURL,
2 changes: 1 addition & 1 deletion src/app/api/openai/regular.ts
@@ -28,7 +28,7 @@ const fetchOpenAI = async ({
stream: true,
model,
temperature: isUndefined(temperature) ? 1 : temperature,
max_tokens: isUndefined(max_tokens) ? 2000 : max_tokens,
max_tokens: isUndefined(max_tokens) ? 1000 : max_tokens,
messages,
}),
});
4 changes: 2 additions & 2 deletions src/app/api/openai/route.ts
@@ -19,7 +19,7 @@ export async function POST(request: Request) {
const session = await getServerSession(authOptions);
const headersList = headers();
const headerApiKey = headersList.get("Authorization") || "";
const NEXT_PUBLIC_OPENAI_API_KEY = process.env.NEXT_PUBLIC_OPENAI_API_KEY;
const API_KEY = process.env.NEXT_PUBLIC_OPENAI_API_KEY;

const {
// model: the request parameter sent to OpenAI or other large language models
@@ -61,7 +61,7 @@
// first use the key supplied by the client
// then fall back to the env configuration
// otherwise empty
const Authorization = headerApiKey || NEXT_PUBLIC_OPENAI_API_KEY || "";
const Authorization = headerApiKey || API_KEY || "";

if (!Authorization) return ResErr({ error: 10002 });

27 changes: 21 additions & 6 deletions src/components/chatSection/chatConfigure/index.tsx
@@ -36,7 +36,10 @@ const renderLabel = (item: any) => {
const renderModelLabel = (item: any) => {
return (
<div className="flex gap-4 items-center">
<span>{item.label}</span>
<div className="flex items-center gap-1.5">
{!!item.icon && <span>{item.icon}</span>}
<span>{item.label}</span>
</div>
{!!item.premium && (
<span
className={cn(
@@ -45,7 +48,7 @@
"dark:border-orange-500 dark:text-orange-500 dark:bg-orange-50/90"
)}
>
PREMIUM
PRO
</span>
)}
</div>
Expand All @@ -57,9 +60,17 @@ export default function ChatConfigure({ list, channel }: ChatConfigureProps) {
const tCommon = useTranslations("common");

const [isShow, setIsShow] = React.useState(true);
const [isAnimation, setIsAnimation] = React.useState(false);

const [openai, azure] = useLLMStore((state) => [state.openai, state.azure]);
const LLMOptions = React.useMemo(() => [openai, azure], [openai, azure]);
const [openai, azure, openRouter] = useLLMStore((state) => [
state.openai,
state.azure,
state.openRouter,
]);
const LLMOptions = React.useMemo(
() => [openai, azure, openRouter],
[openai, azure, openRouter]
);

const options =
LLMOptions.find((item) => item.value === channel.channel_model.type)
@@ -111,10 +122,12 @@ export default function ChatConfigure({ list, channel }: ChatConfigureProps) {
return (
<div className="flex flex-col h-full w-full pt-16 pb-24 top-0 left-0 gap-1 absolute">
<motion.div
className="mx-auto"
className={cn("mx-auto", { "pointer-events-none": isAnimation })}
initial={{ opacity: 0.0001, y: 50 }}
animate={{ opacity: 1, y: 0 }}
transition={softBouncePrest}
onAnimationStart={() => setIsAnimation(true)}
onAnimationComplete={() => setIsAnimation(false)}
>
<div
className={cn(
@@ -185,7 +198,9 @@ export default function ChatConfigure({ list, channel }: ChatConfigureProps) {
</div>
</div>
</div>
<Plugin channel={channel} />
{channel.channel_model.type !== "openRouter" && (
<Plugin channel={channel} />
)}
</motion.div>
</div>
);