- jetson-copilot (temporary name for an Ollama- and LlamaIndex-based, Streamlit-enabled container)
jetson-containers run $(autotag jetson-copilot)
This will start the ollama server and drop you into a bash terminal inside the container.
First, create a directory on the host side to store Jetson-related documents. The data directory is mounted into the container.
cd jetson-containers
mkdir -p ./data/documents/jetson
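With the directory in place, copy or download the documents you want indexed into it. A minimal sketch (the file name and URL below are placeholders, not part of the official setup):
# copy a local file you already have (placeholder file name)
cp ~/Downloads/jetson-orin-developer-guide.pdf ./data/documents/jetson/
# or download a document directly into the folder (placeholder URL)
wget -P ./data/documents/jetson/ https://example.com/jetson-notes.pdf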
Once in the container:
streamlit run /opt/jetson-copilot/app.py
Or you can start the container with additional arguments:
jetson-containers run $(autotag jetson-copilot) bash -c '/start_ollama && streamlit run app.py'
This will start the ollama server and the Streamlit app for "Jetson Copilot", an AI assistant that answers questions based on the documents provided in the /data/documents/jetson directory.
It should show something like this:
You can now view your Streamlit app in your browser.
Network URL: http://10.110.50.241:8501
External URL: http://216.228.112.22:8501
From your browser, open the above Network URL (http://10.110.50.241:8501).
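If the page does not load, one quick check is whether the Ollama server inside the container is up; a minimal sketch, assuming Ollama's default port 11434:
# from another terminal on the Jetson
curl http://localhost:11434
# a running server replies with "Ollama is running"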
CONTAINERS
| jetson-copilot | |
|---|---|
| Aliases | jetrag |
| Requires | L4T ['>=34.1.0'] |
| Dependencies | build-essential cuda:12.2 cudnn python numpy cmake onnx pytorch:2.2 ollama torchvision huggingface_hub rust transformers |
| Dockerfile | Dockerfile |
| Images | dustynv/jetson-copilot:r35.4.1 (2024-07-03, 6.3GB)<br>dustynv/jetson-copilot:r36.2.0 (2024-07-03, 6.3GB)<br>dustynv/jetson-copilot:r36.3.0 (2024-07-03, 6.3GB) |
CONTAINER IMAGES
| Repository/Tag | Date | Arch | Size |
|---|---|---|---|
| dustynv/jetson-copilot:r35.4.1 | 2024-07-03 | arm64 | 6.3GB |
| dustynv/jetson-copilot:r36.2.0 | 2024-07-03 | arm64 | 6.3GB |
| dustynv/jetson-copilot:r36.3.0 | 2024-07-03 | arm64 | 6.3GB |
Container images are compatible with other minor versions of JetPack/L4T:
• L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
• L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
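Before picking an image tag, you can confirm which L4T release your device is running; a minimal sketch, assuming the standard JetPack release file is present on your device:
# print the L4T release installed on the device
cat /etc/nv_tegra_release
# e.g. "# R36 (release), REVISION: 3.0, ..." corresponds to the r36.3.0 images above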
RUN CONTAINER
To start the container, you can use jetson-containers run and autotag, or manually put together a docker run command:
# automatically pull or build a compatible container image
jetson-containers run $(autotag jetson-copilot)
# or explicitly specify one of the container images above
jetson-containers run dustynv/jetson-copilot:r36.3.0
# or if using 'docker run' (specify image and mounts/etc)
sudo docker run --runtime nvidia -it --rm --network=host dustynv/jetson-copilot:r36.3.0
jetson-containers run forwards arguments to docker run with some defaults added (like --runtime nvidia, mounting a /data cache, and detecting devices)
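As a rough sketch of what those defaults expand to (the host data path below is a placeholder, and the exact command jetson-containers prints may differ):
# approximately equivalent manual invocation (simplified)
sudo docker run --runtime nvidia -it --rm --network=host \
    --volume /path/to/jetson-containers/data:/data \
    dustynv/jetson-copilot:r36.3.0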
autotag finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
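For example, you can capture the image name that autotag resolves and reuse it (a small sketch; the tag printed depends on your L4T version):
IMAGE=$(autotag jetson-copilot)
echo $IMAGE    # e.g. dustynv/jetson-copilot:r36.3.0
jetson-containers run $IMAGE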
To mount your own directories into the container, use the -v or --volume flags:
jetson-containers run -v /path/on/host:/path/in/container $(autotag jetson-copilot)
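For instance, if your Jetson documents live somewhere else on the host, you could mount that folder over the location the app indexes (the host path here is just an example):
jetson-containers run -v ~/my_jetson_docs:/data/documents/jetson $(autotag jetson-copilot)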
To launch the container running a command, as opposed to an interactive shell:
jetson-containers run $(autotag jetson-copilot) my_app --abc xyz
You can pass any options to it that you would to docker run, and it'll print out the full command that it constructs before executing it.
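Any standard docker run option should be passed straight through; for example (purely illustrative):
# increase shared memory, just as you would with plain docker run
jetson-containers run --shm-size=8g $(autotag jetson-copilot)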
BUILD CONTAINER
If you use autotag as shown above, it'll ask to build the container for you if needed. To manually build it, first do the system setup, then run:
jetson-containers build jetson-copilot
The dependencies from above will be built into the container, and it'll be tested during the build. Run it with --help for build options.
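For example (the --name flag here is an assumption; verify the exact option set with --help):
# list available build options
jetson-containers build --help
# build the container under a custom image name (flag assumed)
jetson-containers build --name=my/jetson-copilot jetson-copilot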