
Open WebUI Notes

j-sawn edited this page Apr 23, 2025 · 7 revisions

Installation and Set-up

NOTE: This assumes that you have downloaded Ollama beforehand. If not, please refer to the appropriate guide.

In a new terminal, run (see the GPU variant below if this doesn't work):

docker run -it --name [INSERT CONTAINER NAME] -p 8080:8080 southerncrossai/modelworks:modelworks 

Or, alternatively, to run the container with GPU support:

docker run -it --gpus all --runtime=nvidia --name [INSERT CONTAINER NAME] -p 8080:8080 southerncrossai/modelworks:modelworks /bin/bash
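Whichever variant you used, you can confirm the container actually started before moving on. A minimal sketch, assuming the docker CLI is on your PATH (container_is_running is a hypothetical helper name, and the name you pass is whatever you chose for [INSERT CONTAINER NAME]):

```shell
# Sketch: returns 0 only if a container with exactly this name is running.
container_is_running() {
  name="$1"
  # --filter matches substrings, so grep -x enforces an exact name match.
  docker ps --filter "name=${name}" --format '{{.Names}}' | grep -qx "${name}"
}
```

For example, `container_is_running my-modelworks && echo "up"` after starting the container.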

Use cd to move to the directory you want to work in, then run:

python3 -m venv ./app/venv

source ./app/venv/bin/activate

pip install open-webui
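The three steps above can be wrapped into one small function if you repeat this setup often. A sketch, assuming python3 with the venv module is available (setup_webui_venv is a hypothetical name, not part of Open WebUI):

```shell
# Sketch of the steps above: create a virtualenv, activate it,
# and install Open WebUI into it.
setup_webui_venv() {
  dir="${1:-./app}"                     # working directory (matches ./app above)
  python3 -m venv "${dir}/venv" || return 1
  # shellcheck disable=SC1091
  . "${dir}/venv/bin/activate"
  pip install open-webui
}
```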

Once the installation finishes, run:

open-webui serve

Then open a new terminal and re-enter your container:

docker exec -it "[INSERT CONTAINER NAME]" bash

Inside the container, run:

source ./app/venv/bin/activate

ollama serve
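If you want to confirm Ollama came up, you can query it over HTTP — Ollama listens on port 11434 by default. A sketch (check_ollama is a hypothetical helper name; assumes curl is installed):

```shell
# Sketch: succeeds if an Ollama server responds on the given port
# (11434 is Ollama's default).
check_ollama() {
  port="${1:-11434}"
  curl -fsS "http://localhost:${port}/" 2>/dev/null
}
```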

After this, you can open a browser and go to http://localhost:8080
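The server can take a little while to start, so rather than refreshing the browser you can poll it from the terminal. A sketch, assuming curl is installed (wait_for_webui is a hypothetical helper, not part of Open WebUI):

```shell
# Sketch: poll until Open WebUI answers on the given port, up to a
# maximum number of one-second attempts.
wait_for_webui() {
  port="${1:-8080}"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "http://localhost:${port}/" >/dev/null 2>&1; then
      echo "Open WebUI is up on port ${port}"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "Open WebUI did not respond on port ${port}" >&2
  return 1
}
```

Once it returns successfully, the browser page at http://localhost:8080 should load.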


Navigating Open WebUI

If this is your first time, you will need to create an admin account to access the rest of Open WebUI.

You can do so by entering your email, name, and any password you'd like.

Otherwise, just log in with your pre-existing credentials.

After this, you should be greeted by the landing page.

If the appropriate instances are running in the background, you can select your models from the picker near the top-left corner of the chat window.

Another important tab to visit is Workspaces, located near the top-left corner of the screen.

From there you can locate your models, knowledge, prompts, and tools, which can help with customising and improving your UI.

Features

Response Ratings

When a model sends a response, Open WebUI gives the user the option to rate it.

These ratings can be used for features that assess user confidence in, or satisfaction with, the model's responses.

