
Commit df7663e

Update tch instructions (tracel-ai#2844)

* Update tch instructions
* Add windows note

1 parent 8ddd5c5 commit df7663e

File tree: 8 files changed, +32 -70 lines


crates/burn-tch/README.md (25 additions, 63 deletions)
````diff
@@ -17,7 +17,7 @@ The backend supports CPU (multithreaded), [CUDA](https://pytorch.org/docs/stable
 [`tch-rs`](https://github.com/LaurentMazare/tch-rs) requires the C++ PyTorch library (LibTorch) to
 be available on your system.
 
-By default, the CPU distribution is installed for LibTorch v2.2.0 as required by `tch-rs`.
+By default, the CPU distribution is installed for LibTorch v2.6.0 as required by `tch-rs`.
 
 <details>
 <summary><strong>CUDA</strong></summary>
````
````diff
@@ -26,20 +26,25 @@ To install the latest compatible CUDA distribution, set the `TORCH_CUDA_VERSION`
 variable before the `tch-rs` dependency is retrieved with `cargo`.
 
 ```shell
-export TORCH_CUDA_VERSION=cu121
+export TORCH_CUDA_VERSION=cu124
 ```
 
 On Windows:
 
 ```powershell
-$Env:TORCH_CUDA_VERSION = "cu121"
+$Env:TORCH_CUDA_VERSION = "cu124"
 ```
 
+> Note: `tch` doesn't expose the downloaded libtorch directory on Windows when using the automatic
+> download feature, so the `torch_cuda.dll` cannot be detected properly during build. In this case,
+> you can set the `LIBTORCH` environment variable to point to the `libtorch/` folder in `torch-sys`
+> `OUT_DIR` (or move the downloaded lib to a different folder and point to it).
+
 For example, running the validation sample for the first time could be done with the following
 commands:
 
 ```shell
-export TORCH_CUDA_VERSION=cu121
+export TORCH_CUDA_VERSION=cu124
 cargo run --bin cuda --release
 ```
 
````
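The Windows note added above points `LIBTORCH` at the `libtorch/` folder inside the `torch-sys` build `OUT_DIR`. A minimal sketch of locating that folder, written in POSIX shell against a simulated build layout (the `torch-sys-*` path below is an assumed example, not a guaranteed location):

```shell
# Sketch only: simulate the assumed torch-sys build layout in a temp dir,
# then locate the unpacked libtorch folder and point LIBTORCH at it.
cd "$(mktemp -d)"
mkdir -p target/debug/build/torch-sys-0123abcd/out/libtorch/lib  # simulated OUT_DIR
LIBTORCH_DIR=$(find target -type d -path '*torch-sys-*/out/libtorch' | head -n 1)
export LIBTORCH="$PWD/$LIBTORCH_DIR"
echo "$LIBTORCH"
```

On Windows the same idea applies with `$Env:LIBTORCH` in PowerShell, as in the snippets above.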
````diff
@@ -88,7 +93,7 @@ platform.
 First, download the LibTorch CPU distribution.
 
 ```shell
-wget -O libtorch.zip https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.2.0%2Bcpu.zip
+wget -O libtorch.zip https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.6.0%2Bcpu.zip
 unzip libtorch.zip
 ```
 
````
````diff
@@ -108,7 +113,7 @@ export LD_LIBRARY_PATH=/absolute/path/to/libtorch/lib:$LD_LIBRARY_PATH
 First, download the LibTorch CPU distribution.
 
 ```shell
-wget -O libtorch.zip https://download.pytorch.org/libtorch/cpu/libtorch-macos-x86_64-2.2.0.zip
+wget -O libtorch.zip https://download.pytorch.org/libtorch/cpu/libtorch-macos-x86_64-2.6.0.zip
 unzip libtorch.zip
 ```
 
````
````diff
@@ -128,7 +133,7 @@ export DYLD_LIBRARY_PATH=/absolute/path/to/libtorch/lib:$DYLD_LIBRARY_PATH
 First, download the LibTorch CPU distribution.
 
 ```powershell
-wget https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-2.2.0%2Bcpu.zip -OutFile libtorch.zip
+wget https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-2.6.0%2Bcpu.zip -OutFile libtorch.zip
 Expand-Archive libtorch.zip
 ```
 
````
````diff
@@ -144,62 +149,17 @@ $Env:Path += ";/absolute/path/to/libtorch/"
 
 #### CUDA
 
-LibTorch 2.2.0 currently includes binary distributions with CUDA 11.8 or 12.1 runtimes. The manual
-installation instructions are detailed below.
-
-**CUDA 11.8**
-
-<details open>
-<summary><strong>🐧 Linux</strong></summary>
-
-First, download the LibTorch CUDA 11.8 distribution.
-
-```shell
-wget -O libtorch.zip https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.2.0%2Bcu118.zip
-unzip libtorch.zip
-```
-
-Then, point to that installation using the `LIBTORCH` and `LD_LIBRARY_PATH` environment variables
-before building `burn-tch` or a crate which depends on it.
-
-```shell
-export LIBTORCH=/absolute/path/to/libtorch/
-export LD_LIBRARY_PATH=/absolute/path/to/libtorch/lib:$LD_LIBRARY_PATH
-```
-
-**Note:** make sure your CUDA installation is in your `PATH` and `LD_LIBRARY_PATH`.
-
-</details><br>
-
-<details>
-<summary><strong>🪟 Windows</strong></summary>
-
-First, download the LibTorch CUDA 11.8 distribution.
-
-```powershell
-wget https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.2.0%2Bcu118.zip -OutFile libtorch.zip
-Expand-Archive libtorch.zip
-```
-
-Then, set the `LIBTORCH` environment variable and append the library to your path as with the
-PowerShell commands below before building `burn-tch` or a crate which depends on it.
-
-```powershell
-$Env:LIBTORCH = "/absolute/path/to/libtorch/"
-$Env:Path += ";/absolute/path/to/libtorch/"
-```
-
-</details><br>
-
-**CUDA 12.1**
+LibTorch 2.6.0 currently includes binary distributions with CUDA 11.8, 12.4 or 12.6 runtimes. The
+manual installation instructions are detailed below for CUDA 12.6, but can be applied to the other
+CUDA versions by replacing `cu126` with the corresponding version string (e.g., `cu118` or `cu124`).
 
 <details open>
 <summary><strong>🐧 Linux</strong></summary>
 
-First, download the LibTorch CUDA 12.1 distribution.
+First, download the LibTorch CUDA 12.6 distribution.
 
 ```shell
-wget -O libtorch.zip https://download.pytorch.org/libtorch/cu121/libtorch-cxx11-abi-shared-with-deps-2.2.0%2Bcu121.zip
+wget -O libtorch.zip https://download.pytorch.org/libtorch/cu126/libtorch-cxx11-abi-shared-with-deps-2.6.0%2Bcu126.zip
 unzip libtorch.zip
 ```
 
````
````diff
@@ -218,10 +178,10 @@ export LD_LIBRARY_PATH=/absolute/path/to/libtorch/lib:$LD_LIBRARY_PATH
 <details>
 <summary><strong>🪟 Windows</strong></summary>
 
-First, download the LibTorch CUDA 12.1 distribution.
+First, download the LibTorch CUDA 12.6 distribution.
 
 ```powershell
-wget https://download.pytorch.org/libtorch/cu121/libtorch-win-shared-with-deps-2.2.0%2Bcu121.zip -OutFile libtorch.zip
+wget https://download.pytorch.org/libtorch/cu126/libtorch-win-shared-with-deps-2.6.0%2Bcu126.zip -OutFile libtorch.zip
 Expand-Archive libtorch.zip
 ```
 
````
````diff
@@ -243,13 +203,13 @@ is to use a PyTorch installation. This requires a Python installation.
 _Note: MPS acceleration is available on MacOS 12.3+._
 
 ```shell
-pip install torch==2.2.0 numpy==1.26.4 setuptools
+pip install torch==2.6.0 numpy==1.26.4 setuptools
 export LIBTORCH_USE_PYTORCH=1
 export DYLD_LIBRARY_PATH=/path/to/pytorch/lib:$DYLD_LIBRARY_PATH
 ```
 
-**Note:** if `venv` is used, it should be activated during coding and building,
-or the compiler may not work properly.
+**Note:** if `venv` is used, it should be activated during coding and building, or the compiler may
+not work properly.
 
 ## Example Usage
 
````
````diff
@@ -263,7 +223,8 @@ For a more complete example using the `tch` backend, take a look at the
 
 Try `.cargo/config.toml` ([cargo book](https://doc.rust-lang.org/cargo/reference/config.html#env)).
 
-Instead of setting the environments in your shell, you can manually add them to your `.cargo/config.toml`:
+Instead of setting the environments in your shell, you can manually add them to your
+`.cargo/config.toml`:
 
 ```toml
 [env]
````
````diff
@@ -281,4 +242,5 @@ LD_LIBRARY_PATH = "/absolute/path/to/libtorch/lib:$LD_LIBRARY_PATH"
 LIBTORCH = "/absolute/path/to/libtorch/libtorch"
 EOF
 ```
+
 This will automatically include the old `LD_LIBRARY_PATH` value in the new one.
````
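As a runnable sketch of the heredoc approach shown in the hunk above (the `/absolute/path/to/libtorch` paths are placeholders copied from the README; the temp dir stands in for a project root):

```shell
# Sketch: append the [env] table to .cargo/config.toml via a heredoc.
# Because the heredoc delimiter is unquoted, $LD_LIBRARY_PATH expands at
# write time, baking the shell's current value into the file.
cd "$(mktemp -d)"   # demo in a temp dir; use your project root instead
mkdir -p .cargo
cat >> .cargo/config.toml << EOF
[env]
LD_LIBRARY_PATH = "/absolute/path/to/libtorch/lib:$LD_LIBRARY_PATH"
LIBTORCH = "/absolute/path/to/libtorch/libtorch"
EOF
cat .cargo/config.toml
```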

examples/custom-image-dataset/README.md (1 addition, 1 deletion)

````diff
@@ -54,7 +54,7 @@ achieves 70-80% accuracy on the test set after just 30 epochs.
 Run it with the Torch GPU backend:
 
 ```sh
-export TORCH_CUDA_VERSION=cu121
+export TORCH_CUDA_VERSION=cu124
 cargo run --example custom-image-dataset --release --features tch-gpu
 ```
 
````

examples/mnist/README.md (1 addition, 1 deletion)

````diff
@@ -17,7 +17,7 @@ cargo run --example mnist --release --features ndarray # CPU NdAr
 cargo run --example mnist --release --features ndarray-blas-openblas # CPU NdArray Backend - f32 - blas with openblas
 cargo run --example mnist --release --features ndarray-blas-netlib # CPU NdArray Backend - f32 - blas with netlib
 echo "Using tch backend"
-export TORCH_CUDA_VERSION=cu121 # Set the cuda version
+export TORCH_CUDA_VERSION=cu124 # Set the cuda version
 cargo run --example mnist --release --features tch-gpu # GPU Tch Backend - f32
 cargo run --example mnist --release --features tch-cpu # CPU Tch Backend - f32
 echo "Using wgpu backend"
````

examples/modern-lstm/README.md (1 addition, 1 deletion)

````diff
@@ -27,7 +27,7 @@ cargo run --example lstm-train --release --features cuda
 cargo run --example lstm-train --release --features wgpu
 
 # Tch GPU backend
-export TORCH_CUDA_VERSION=cu121 # Set the cuda version
+export TORCH_CUDA_VERSION=cu124 # Set the cuda version
 cargo run --example lstm-train --release --features tch-gpu
 
 # Tch CPU backend
````

examples/simple-regression/README.md (1 addition, 1 deletion)

````diff
@@ -26,7 +26,7 @@ cargo run --example regression --release --features ndarray # CPU
 cargo run --example regression --release --features ndarray-blas-openblas # CPU NdArray Backend - f32 - blas with openblas
 cargo run --example regression --release --features ndarray-blas-netlib # CPU NdArray Backend - f32 - blas with netlib
 echo "Using tch backend"
-export TORCH_CUDA_VERSION=cu121 # Set the cuda version
+export TORCH_CUDA_VERSION=cu124 # Set the cuda version
 cargo run --example regression --release --features tch-gpu # GPU Tch Backend - f32
 cargo run --example regression --release --features tch-cpu # CPU Tch Backend - f32
 echo "Using wgpu backend"
````

examples/text-classification/README.md (1 addition, 1 deletion)

````diff
@@ -29,7 +29,7 @@ cd burn
 # Use the --release flag to really speed up training.
 # Use the f16 feature if your CUDA device supports FP16 (half precision) operations. May not work well on every device.
 
-export TORCH_CUDA_VERSION=cu121 # Set the cuda version (CUDA users)
+export TORCH_CUDA_VERSION=cu124 # Set the cuda version (CUDA users)
 
 # AG News
 cargo run --example ag-news-train --release --features tch-gpu # Train on the ag news dataset
````

examples/text-generation/README.md (1 addition, 1 deletion)

````diff
@@ -14,7 +14,7 @@ git clone https://github.com/tracel-ai/burn.git
 cd burn
 
 # Use the --release flag to really speed up training.
-export TORCH_CUDA_VERSION=cu121
+export TORCH_CUDA_VERSION=cu124
 cargo run --example text-generation --release
 ```
 
````

examples/wgan/README.md (1 addition, 1 deletion)

````diff
@@ -18,7 +18,7 @@ cargo run --example wgan-mnist --release --features cuda
 cargo run --example wgan-mnist --release --features wgpu
 
 # Tch GPU backend
-export TORCH_CUDA_VERSION=cu121 # Set the cuda version
+export TORCH_CUDA_VERSION=cu124 # Set the cuda version
 cargo run --example wgan-mnist --release --features tch-gpu
 
 # Tch CPU backend
````
