- Execute `docker compose up` (a usage sketch follows this list)
- Wait for the model to finish downloading and setting up (llama3 by default); see the log-check sketch below the list
- Open the web UI (http://localhost:8080 by default)
- This repository is a work in progress; only the basics have been implemented.
- For now, inference is CPU-only. The Ollama Docker image can be configured to use a GPU, but this repository does not manage that yet (see the GPU sketch at the end of this section).
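
A minimal sketch of day-to-day usage of the Compose stack; the `-d` flag is optional and simply keeps the containers running in the background.

```sh
# Start the stack detached instead of attached to the terminal
docker compose up -d

# Stop and remove the stack's containers when you are done
docker compose down
```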
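
To confirm the model setup has finished, you can follow the logs and then list the installed models. The service name `ollama` is an assumption in this sketch; use whatever name this repository's `docker-compose.yml` defines.

```sh
# Follow the Ollama service logs to watch the model pull progress
# (the "ollama" service name is an assumption; check docker-compose.yml)
docker compose logs -f ollama

# List the models inside the running container to confirm llama3 is ready
docker compose exec ollama ollama list
```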
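
For reference, upstream Ollama documents GPU support for its Docker image. A standalone run on an NVIDIA GPU looks roughly like the sketch below; it requires the NVIDIA Container Toolkit on the host and bypasses this repository's Compose setup entirely.

```sh
# Run the upstream Ollama image with all NVIDIA GPUs exposed
# (requires the NVIDIA Container Toolkit; not wired into this repo yet)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```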