llama.cpp / full-cuda--b1-04dc552
sha256:5b5312f1ea6b29e62a7963bdde4b0bb8441a96787fa42958d4baa540cc971545
Install from the command line:
$ docker pull ghcr.io/rubra-ai/llama.cpp:full-cuda--b1-04dc552
Use as a base image in a Dockerfile:
FROM ghcr.io/rubra-ai/llama.cpp:full-cuda--b1-04dc552
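As a sketch, a minimal Dockerfile extending this image might look like the following. The model filename and destination path are illustrative assumptions, not part of this package; only the image reference comes from this page.

```dockerfile
# Build on the CUDA-enabled llama.cpp image published by rubra-ai.
FROM ghcr.io/rubra-ai/llama.cpp:full-cuda--b1-04dc552

# Copy a local GGUF model into the image.
# (The model path below is a hypothetical example.)
COPY ./models/my-model.gguf /models/my-model.gguf
```

For reproducible builds, the tag can also be pinned to the linux/amd64 digest listed below, e.g. `FROM ghcr.io/rubra-ai/llama.cpp:full-cuda--b1-04dc552@sha256:a0c1fec3bf05820d4ae37f13dfc8704ca063acc20e2909025fc6daba0257c344`, so the base image cannot change under the same tag.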
linux/amd64:
$ docker pull ghcr.io/rubra-ai/llama.cpp:full-cuda--b1-04dc552@sha256:a0c1fec3bf05820d4ae37f13dfc8704ca063acc20e2909025fc6daba0257c344
unknown/unknown:
$ docker pull ghcr.io/rubra-ai/llama.cpp:full-cuda--b1-04dc552@sha256:146decd4b530ba2bee6a57f464dccbb00c4a593699a014758cf8ce399ecf3a8f
Details
- Package: llama.cpp
- Owner: rubra-ai
- Source repository: rubra-ai/tools.cpp
- Last published: 14 days ago
Download activity
- Total downloads: 0
- Last 30 days: 0
- Last week: 0
- Today: 0