- llama.cpp from https://github.com/ggerganov/llama.cpp with CUDA enabled (found under `/opt/llama.cpp`)
- Python bindings from https://github.com/abetlen/llama-cpp-python (found under `/opt/llama-cpp-python`)
Warning
Starting with version 0.1.79, the model format has changed from GGML to GGUF. Existing GGML models can be converted using the `convert-llama-ggmlv3-to-gguf.py` script in llama.cpp (or you can often find the GGUF conversions on HuggingFace Hub).
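A conversion run might look like the following sketch (the filenames under the mounted `/data` cache are placeholders, and the script's exact location and flags can vary between llama.cpp revisions):
./run.sh --workdir=/opt/llama.cpp $(./autotag llama_cpp) /bin/bash -c \
  'python3 convert-llama-ggmlv3-to-gguf.py \
     --input /data/models/llama-2-7b.ggmlv3.q4_K_S.bin \
     --output /data/models/llama-2-7b.Q4_K_S.gguf'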
There are two branches of this container for backwards compatibility:
- `llama_cpp:gguf` (the default, which tracks upstream master)
- `llama_cpp:ggml` (which still supports the GGML model format)
There are a couple patches applied to the legacy GGML fork:
- fixed `__fp16` typedef in llama.h on ARM64 (use `half` with NVCC)
- parsing of BOS/EOS tokens (see ggerganov/llama.cpp#1931)
You can use llama.cpp's built-in `main` tool to run GGUF models (from HuggingFace Hub or elsewhere):
./run.sh --workdir=/usr/local/bin $(./autotag llama_cpp) /bin/bash -c \
'./main --model $(huggingface-downloader TheBloke/Llama-2-7B-GGUF/llama-2-7b.Q4_K_S.gguf) \
--prompt "Once upon a time," \
--n-predict 128 --ctx-size 192 --batch-size 192 \
--n-gpu-layers 999 --threads $(nproc)'
> the `--model` argument expects a .gguf filename (typically the `Q4_K_S` quantization is used)
> if you're trying to load Llama-2-70B, add the `--gqa 8` flag
To use the Python API and `benchmark.py` instead:
./run.sh --workdir=/usr/local/bin $(./autotag llama_cpp) /bin/bash -c \
'python3 benchmark.py --model $(huggingface-downloader TheBloke/Llama-2-7B-GGUF/llama-2-7b.Q4_K_S.gguf) \
--prompt "Once upon a time," \
--n-predict 128 --ctx-size 192 --batch-size 192 \
--n-gpu-layers 999 --threads $(nproc)'
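Beyond `benchmark.py`, the bindings can also be called directly from Python. Below is a minimal sketch using llama-cpp-python's `Llama` class; the model and generation settings are illustrative, not tuned:
./run.sh $(./autotag llama_cpp) /bin/bash -c 'python3 - <<EOF
from llama_cpp import Llama  # Python bindings installed under /opt/llama-cpp-python

# load a GGUF model with all layers offloaded to the GPU
llm = Llama(
    model_path="$(huggingface-downloader TheBloke/Llama-2-7B-GGUF/llama-2-7b.Q4_K_S.gguf)",
    n_gpu_layers=999,
    n_ctx=192,
)

# run a short completion and print the generated text
output = llm("Once upon a time,", max_tokens=128)
print(output["choices"][0]["text"])
EOF
'
The keyword arguments mirror the CLI flags used above (`n_gpu_layers` for `--n-gpu-layers`, `n_ctx` for `--ctx-size`), so settings carry over directly.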
To use a more contemporary model, such as Llama-3.2-3B, specify e.g. `unsloth/Llama-3.2-3B-Instruct-GGUF/Llama-3.2-3B-Instruct-Q4_K_M.gguf`.
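For example, a sketch reusing the same flags as the Llama-2 `benchmark.py` example above:
./run.sh --workdir=/usr/local/bin $(./autotag llama_cpp) /bin/bash -c \
 'python3 benchmark.py --model $(huggingface-downloader unsloth/Llama-3.2-3B-Instruct-GGUF/Llama-3.2-3B-Instruct-Q4_K_M.gguf) \
    --prompt "Once upon a time," \
    --n-predict 128 --ctx-size 192 --batch-size 192 \
    --n-gpu-layers 999 --threads $(nproc)'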
| Model | Quantization | Memory (MB) |
|---|---|---|
| `TheBloke/Llama-2-7B-GGUF` | `llama-2-7b.Q4_K_S.gguf` | 5,268 |
| `TheBloke/Llama-2-13B-GGUF` | `llama-2-13b.Q4_K_S.gguf` | 8,609 |
| `TheBloke/LLaMA-30b-GGUF` | `llama-30b.Q4_K_S.gguf` | 19,045 |
| `TheBloke/Llama-2-70B-GGUF` | `llama-2-70b.Q4_K_S.gguf` | 37,655 |
CONTAINERS
| llama_cpp:0.3.1 | |
|---|---|
| Aliases | `llama_cpp` |
| Requires | L4T ['>=34.1.0'] |
| Dependencies | build-essential pip_cache:cu122 cuda:12.2 cudnn python cmake numpy huggingface_hub |
| Dependants | l4t-text-generation langchain langchain:samples text-generation-webui:1.7 text-generation-webui:6a7cd01 text-generation-webui:main |
| Dockerfile | Dockerfile |
CONTAINER IMAGES
| Repository/Tag | Date | Arch | Size |
|---|---|---|---|
| dustynv/llama_cpp:ggml-r35.2.1 | 2023-12-05 | arm64 | 5.2GB |
| dustynv/llama_cpp:ggml-r35.3.1 | 2023-12-06 | arm64 | 5.2GB |
| dustynv/llama_cpp:ggml-r35.4.1 | 2023-12-19 | arm64 | 5.2GB |
| dustynv/llama_cpp:ggml-r36.2.0 | 2023-12-19 | arm64 | 5.1GB |
| dustynv/llama_cpp:gguf-r35.2.1 | 2023-12-15 | arm64 | 5.1GB |
| dustynv/llama_cpp:gguf-r35.3.1 | 2023-12-19 | arm64 | 5.2GB |
| dustynv/llama_cpp:gguf-r35.4.1 | 2023-12-15 | arm64 | 5.1GB |
| dustynv/llama_cpp:gguf-r36.2.0 | 2023-12-19 | arm64 | 5.1GB |
| dustynv/llama_cpp:r35.2.1 | 2023-08-29 | arm64 | 5.2GB |
| dustynv/llama_cpp:r35.3.1 | 2023-08-15 | arm64 | 5.2GB |
| dustynv/llama_cpp:r35.4.1 | 2024-09-12 | arm64 | 6.0GB |
| dustynv/llama_cpp:r36.2.0 | 2024-09-12 | arm64 | 5.6GB |
| dustynv/llama_cpp:r36.4.0 | 2024-09-30 | arm64 | 4.5GB |
Container images are compatible with other minor versions of JetPack/L4T:
• L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
• L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
RUN CONTAINER
To start the container, you can use `jetson-containers run` and `autotag`, or manually put together a `docker run` command:
# automatically pull or build a compatible container image
jetson-containers run $(autotag llama_cpp)
# or explicitly specify one of the container images above
jetson-containers run dustynv/llama_cpp:r36.4.0
# or if using 'docker run' (specify image and mounts/etc)
sudo docker run --runtime nvidia -it --rm --network=host dustynv/llama_cpp:r36.4.0
`jetson-containers run` forwards arguments to `docker run` with some defaults added (like `--runtime nvidia`, mounting a `/data` cache, and detecting devices).
`autotag` finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
To mount your own directories into the container, use the `-v` or `--volume` flags:
jetson-containers run -v /path/on/host:/path/in/container $(autotag llama_cpp)
To launch the container running a command, as opposed to an interactive shell:
jetson-containers run $(autotag llama_cpp) my_app --abc xyz
You can pass any options to it that you would to `docker run`, and it'll print out the full command that it constructs before executing it.
BUILD CONTAINER
If you use `autotag` as shown above, it'll ask to build the container for you if needed. To manually build it, first do the system setup, then run:
jetson-containers build llama_cpp
The dependencies from above will be built into the container, and it'll be tested during the build. Run it with `--help` for build options.
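To combine llama_cpp with other packages into a single image, a pattern like the following can be used (the package list and image name here are illustrative):
# chain packages into one container build (the resulting image name is arbitrary)
jetson-containers build --name=my_llm_stack python llama_cpp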