Several QoL contributions (huggingface#760)
* allow customizing disclaimer as `PUBLIC_APP_DISCLAIMER_MESSAGE`

* support passing `defaultHeaders` to `openai` endpoint

* add azure openai, claude, mistral examples using `defaultHeaders` & `openai` endpoint

* fix streaming being buffered behind cloudflare tunnel

may help relieve issue huggingface#598

* support new lines in model description

* don't automatically generate modelUrl to huggingface

fixes broken links for self-hosted or custom-named models

* add `PUBLIC_APP_DISCLAIMER_MESSAGE` in `.env`

* `npm run format`

---------

Co-authored-by: Nathan Sarrazin <[email protected]>
flexchar and nsarrazin committed Feb 6, 2024
1 parent bbbedb7 commit 73a5c0d
Showing 7 changed files with 98 additions and 15 deletions.
1 change: 1 addition & 0 deletions .env
@@ -120,6 +120,7 @@ PUBLIC_APP_COLOR=blue # can be any of tailwind colors: https://tailwindcss.com/d
PUBLIC_APP_DESCRIPTION=# description used throughout the app (if not set, a default one will be used)
PUBLIC_APP_DATA_SHARING=#set to 1 to enable options & text regarding data sharing
PUBLIC_APP_DISCLAIMER=#set to 1 to show a disclaimer on login page
PUBLIC_APP_DISCLAIMER_MESSAGE="Disclaimer: AI is an area of active research with known problems such as biased generation and misinformation. Do not use this application for high-stakes decisions or advice."
LLM_SUMMERIZATION=true

EXPOSE_API=true
1 change: 1 addition & 0 deletions .env.template
@@ -228,6 +228,7 @@ PUBLIC_APP_NAME=HuggingChat
PUBLIC_APP_ASSETS=huggingchat
PUBLIC_APP_COLOR=yellow
PUBLIC_APP_DESCRIPTION="Making the community's best AI chat models available to everyone."
PUBLIC_APP_DISCLAIMER_MESSAGE="Disclaimer: AI is an area of active research with known problems such as biased generation and misinformation. Do not use this application for high-stakes decisions or advice."
PUBLIC_APP_DATA_SHARING=1
PUBLIC_APP_DISCLAIMER=1

69 changes: 69 additions & 0 deletions README.md
@@ -316,6 +316,75 @@ MODELS=`[{
}]`
```

You may also use any model provider that exposes an OpenAI-compatible API endpoint. For example, you can self-host a [Portkey](https://github.com/Portkey-AI/gateway) gateway and experiment with Claude or with GPT models offered by Azure OpenAI. An example for Claude from Anthropic:

```
MODELS=`[{
"name": "claude-2.1",
"displayName": "Claude 2.1",
"description": "Anthropic was founded by former OpenAI researchers...",
"parameters": {
"temperature": 0.5,
"max_new_tokens": 4096,
},
"endpoints": [
{
"type": "openai",
"baseURL": "https://gateway.example.com/v1",
"defaultHeaders": {
"x-portkey-config": '{"provider":"anthropic","api_key":"sk-ant-abc...xyz"}'
}
}
]
}]`
```

An example for GPT-4 deployed on Azure OpenAI:

```
MODELS=`[{
"id": "gpt-4-1106-preview",
"name": "gpt-4-1106-preview",
"displayName": "gpt-4-1106-preview",
"parameters": {
"temperature": 0.5,
"max_new_tokens": 4096,
},
"endpoints": [
{
"type": "openai",
"baseURL": "https://gateway.example.com/v1",
"defaultHeaders": {
"x-portkey-config": '{"provider":"azure-openai","resource_name":"abc-fr","deployment_id":"gpt-4-1106-preview","api_version":"2023-03-15-preview","api_key":"abc...xyz"}'
}
}
]
}]`
```

Or try Mistral from [Deepinfra](https://deepinfra.com/mistralai/Mistral-7B-Instruct-v0.1/api?example=openai-http):

> Note: `apiKey` can be set per endpoint, or globally via the `OPENAI_API_KEY` variable.

```
MODELS=`[{
"name": "mistral-7b",
"displayName": "Mistral 7B",
"description": "A 7B dense Transformer, fast-deployed and easily customisable. Small, yet powerful for a variety of use cases. Supports English and code, and an 8k context window.",
"parameters": {
"temperature": 0.5,
"max_new_tokens": 4096,
},
"endpoints": [
{
"type": "openai",
"baseURL": "https://api.deepinfra.com/v1/openai",
"apiKey": "abc...xyz"
}
]
}]`
```
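The examples above configure `defaultHeaders` declaratively. Not part of the repository — a minimal TypeScript sketch (hypothetical helper and type names) of how an endpoint's `apiKey` and `defaultHeaders` might combine into the final request headers sent to the gateway:

```typescript
// Hypothetical sketch: resolving headers for an OpenAI-compatible endpoint.
// The endpoint-level apiKey wins over a global key, and any defaultHeaders
// (e.g. gateway routing headers) are merged in last, so they can also
// override Authorization if a gateway requires it.

interface OpenAIEndpointConfig {
  type: "openai";
  baseURL: string;
  apiKey?: string;
  defaultHeaders?: Record<string, string>;
}

function resolveHeaders(
  cfg: OpenAIEndpointConfig,
  globalKey?: string
): Record<string, string> {
  const key = cfg.apiKey ?? globalKey ?? "sk-";
  return {
    Authorization: `Bearer ${key}`,
    ...cfg.defaultHeaders,
  };
}

const headers = resolveHeaders(
  {
    type: "openai",
    baseURL: "https://gateway.example.com/v1",
    defaultHeaders: { "x-portkey-config": '{"provider":"anthropic"}' },
  },
  "abc...xyz"
);
console.log(headers.Authorization); // Bearer abc...xyz
```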

##### Llama.cpp API server

chat-ui also supports the llama.cpp API server directly without the need for an adapter. You can do this using the `llamacpp` endpoint type.
9 changes: 6 additions & 3 deletions src/lib/components/DisclaimerModal.svelte
@@ -1,7 +1,11 @@
<script lang="ts">
import { base } from "$app/paths";
import { page } from "$app/stores";
import { PUBLIC_APP_DESCRIPTION, PUBLIC_APP_NAME } from "$env/static/public";
import {
PUBLIC_APP_DESCRIPTION,
PUBLIC_APP_NAME,
PUBLIC_APP_DISCLAIMER_MESSAGE,
} from "$env/static/public";
import LogoHuggingFaceBorderless from "$lib/components/icons/LogoHuggingFaceBorderless.svelte";
import Modal from "$lib/components/Modal.svelte";
import { useSettingsStore } from "$lib/stores/settings";
@@ -25,8 +29,7 @@
</p>

<p class="text-sm text-gray-500">
Disclaimer: AI is an area of active research with known problems such as biased generation and
misinformation. Do not use this application for high-stakes decisions or advice.
{PUBLIC_APP_DISCLAIMER_MESSAGE}
</p>

<div class="flex w-full flex-col items-center gap-2">
5 changes: 4 additions & 1 deletion src/lib/server/endpoints/openai/endpointOai.ts
@@ -15,12 +15,14 @@ export const endpointOAIParametersSchema = z.object({
completion: z
.union([z.literal("completions"), z.literal("chat_completions")])
.default("chat_completions"),
defaultHeaders: z.record(z.string()).optional(),
});

export async function endpointOai(
input: z.input<typeof endpointOAIParametersSchema>
): Promise<Endpoint> {
const { baseURL, apiKey, completion, model } = endpointOAIParametersSchema.parse(input);
const { baseURL, apiKey, completion, model, defaultHeaders } =
endpointOAIParametersSchema.parse(input);
let OpenAI;
try {
OpenAI = (await import("openai")).OpenAI;
@@ -31,6 +33,7 @@ export async function endpointOai(
const openai = new OpenAI({
apiKey: apiKey ?? "sk-",
baseURL,
defaultHeaders,
});

if (completion === "completions") {
6 changes: 5 additions & 1 deletion src/routes/conversation/[id]/+server.ts
@@ -381,7 +381,11 @@ export async function POST({ request, locals, params, getClientAddress }) {
});

// Todo: maybe we should wait for the message to be saved before ending the response - in case of errors
return new Response(stream);
return new Response(stream, {
headers: {
"Content-Type": "text/event-stream",
},
});
}

export async function DELETE({ locals, params }) {
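The `text/event-stream` content type added in this hunk is what stops proxies such as a Cloudflare tunnel from buffering the response, which is the streaming fix named in the commit message. Not from the repository — a minimal TypeScript sketch of the same pattern using the standard web `Response` API (global in Node 18+):

```typescript
// Hypothetical sketch: returning a streamed SSE response. Without an
// explicit Content-Type, intermediaries may buffer the body; declaring
// text/event-stream signals that chunks should be flushed as they arrive.

function sseResponse(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const chunk of chunks) {
        // Each SSE message is a "data:" line followed by a blank line.
        controller.enqueue(encoder.encode(`data: ${chunk}\n\n`));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}

const res = sseResponse(["hello", "world"]);
console.log(res.headers.get("Content-Type")); // text/event-stream
```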
22 changes: 12 additions & 10 deletions src/routes/settings/[...model]/+page.svelte
@@ -34,22 +34,24 @@
</h2>

{#if model.description}
<p class=" text-gray-600">
<p class="whitespace-pre-wrap text-gray-600">
{model.description}
</p>
{/if}
</div>

<div class="flex flex-wrap items-center gap-2 md:gap-4">
<a
href={model.modelUrl || "https://huggingface.co/" + model.name}
target="_blank"
rel="noreferrer"
class="flex items-center truncate underline underline-offset-2"
>
<CarbonArrowUpRight class="mr-1.5 shrink-0 text-xs " />
Model page
</a>
{#if model.modelUrl}
<a
href={model.modelUrl || "https://huggingface.co/" + model.name}
target="_blank"
rel="noreferrer"
class="flex items-center truncate underline underline-offset-2"
>
<CarbonArrowUpRight class="mr-1.5 shrink-0 text-xs " />
Model page
</a>
{/if}

{#if model.datasetName || model.datasetUrl}
<a
