diff --git a/Cargo.toml b/Cargo.toml
index c1a70bb..173eff4 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -23,8 +23,8 @@ exclude = [ 'examples/cudarc' ]
[package]
name = "ort"
-description = "A safe Rust wrapper for ONNX Runtime 1.17 - Optimize and Accelerate Machine Learning Inferencing"
-version = "2.0.0-rc.2"
+description = "A safe Rust wrapper for ONNX Runtime 1.18 - Optimize and accelerate machine learning inference & training"
+version = "2.0.0-rc.3"
edition = "2021"
rust-version = "1.70"
license = "MIT OR Apache-2.0"
@@ -83,7 +83,7 @@ qnn = [ "ort-sys/qnn" ]
[dependencies]
ndarray = { version = "0.15", optional = true }
thiserror = "1.0"
-ort-sys = { version = "2.0.0-rc.2", path = "ort-sys" }
+ort-sys = { version = "2.0.0-rc.3", path = "ort-sys" }
libloading = { version = "0.8", optional = true }
ureq = { version = "2.1", optional = true, default-features = false, features = [ "tls" ] }
diff --git a/README.md b/README.md
index 128f921..e3bdcc7 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@
-`ort` is an (unofficial) [ONNX Runtime](https://onnxruntime.ai/) 1.18 wrapper for Rust based on the now inactive [`onnxruntime-rs`](https://github.com/nbigaouette/onnxruntime-rs). ONNX Runtime accelerates ML inference on both CPU & GPU.
+`ort` is an (unofficial) [ONNX Runtime](https://onnxruntime.ai/) 1.18 wrapper for Rust based on the now inactive [`onnxruntime-rs`](https://github.com/nbigaouette/onnxruntime-rs). ONNX Runtime accelerates ML inference and training on both CPU & GPU.
## 📖 Documentation
- [Guide](https://ort.pyke.io/)
diff --git a/docs/pages/_meta.json b/docs/pages/_meta.json
index 14840b8..5fe05b2 100644
--- a/docs/pages/_meta.json
+++ b/docs/pages/_meta.json
@@ -10,7 +10,7 @@
},
"link-api": {
"title": "API Reference ↗",
- "href": "https://docs.rs/ort/2.0.0-rc.2/ort"
+ "href": "https://docs.rs/ort/2.0.0-rc.3/ort"
},
"link-crates": {
"title": "Crates.io ↗",
diff --git a/docs/pages/index.mdx b/docs/pages/index.mdx
index 97ab9f5..d6add90 100644
--- a/docs/pages/index.mdx
+++ b/docs/pages/index.mdx
@@ -11,7 +11,7 @@ import { Callout, Card, Cards, Steps } from 'nextra/components';
- These docs are for the latest alpha version of `ort`, `2.0.0-rc.2`. This version is production-ready (just not API stable) and we recommend new & existing projects use it.
+ These docs are for the latest release candidate of `ort`, `2.0.0-rc.3`. This version is production-ready (just not API stable) and we recommend new & existing projects use it.
`ort` makes it easy to deploy your machine learning models to production via [ONNX Runtime](https://onnxruntime.ai/), a hardware-accelerated inference engine. With `ort` + ONNX Runtime, you can run almost any ML model (including ResNet, YOLOv8, BERT, LLaMA) on almost any hardware, often far faster than PyTorch, and with the added bonus of Rust's efficiency.
@@ -37,7 +37,7 @@ Converting a neural network to a graph representation like ONNX opens the door t
If you have a [supported platform](/setup/platforms) (and you probably do), installing `ort` couldn't be any simpler! Just add it to your Cargo dependencies:
```toml
[dependencies]
-ort = "2.0.0-rc.2"
+ort = "2.0.0-rc.3"
```
### Convert your model
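Since execution providers are opt-in Cargo features, a GPU-enabled setup only changes the dependency line. A sketch, assuming the `cuda` feature flag (gated the same way as the `qnn` feature visible in the Cargo.toml hunk above):

```toml
[dependencies]
# Enable the CUDA execution provider alongside the default features.
ort = { version = "2.0.0-rc.3", features = ["cuda"] }
```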
diff --git a/docs/pages/migrating/v2.mdx b/docs/pages/migrating/v2.mdx
index a3c1afd..c18776b 100644
--- a/docs/pages/migrating/v2.mdx
+++ b/docs/pages/migrating/v2.mdx
@@ -173,7 +173,7 @@ let l = outputs["latents"].try_extract_tensor::<f32>()?;
```
## Execution providers
-Execution provider structs with public fields have been replaced with builder pattern structs. See the [API reference](https://docs.rs/ort/2.0.0-rc.2/ort/index.html?search=ExecutionProvider) and the [execution providers reference](/perf/execution-providers) for more information.
+Execution provider structs with public fields have been replaced with builder pattern structs. See the [API reference](https://docs.rs/ort/2.0.0-rc.3/ort/index.html?search=ExecutionProvider) and the [execution providers reference](/perf/execution-providers) for more information.
```diff
-// v1.x
diff --git a/docs/pages/perf/execution-providers.mdx b/docs/pages/perf/execution-providers.mdx
index b084463..fbc5975 100644
--- a/docs/pages/perf/execution-providers.mdx
+++ b/docs/pages/perf/execution-providers.mdx
@@ -83,7 +83,7 @@ fn main() -> anyhow::Result<()> {
```
## Configuring EPs
-EPs have configuration options to control behavior or increase performance. Each `XXXExecutionProvider` struct returns a builder with configuration methods. See the [API reference](https://docs.rs/ort/2.0.0-rc.2/ort/index.html?search=ExecutionProvider) for the EP structs for more information on which options are supported and what they do.
+EPs have configuration options to control behavior or increase performance. Each `XXXExecutionProvider` struct returns a builder with configuration methods. See the [API reference](https://docs.rs/ort/2.0.0-rc.3/ort/index.html?search=ExecutionProvider) for the EP structs for more information on which options are supported and what they do.
```rust
use ort::{CoreMLExecutionProvider, Session};
diff --git a/docs/pages/setup/cargo-features.mdx b/docs/pages/setup/cargo-features.mdx
index 9fc9241..9e9f72e 100644
--- a/docs/pages/setup/cargo-features.mdx
+++ b/docs/pages/setup/cargo-features.mdx
@@ -9,8 +9,8 @@ title: Cargo features
- ✅ **`half`**: Enables support for float16 & bfloat16 tensors via the [`half`](https://crates.io/crates/half) crate. ONNX models that are converted to 16-bit precision will typically convert to/from 32-bit floats at the input/output, so you will likely never actually need to interact with a 16-bit tensor on the Rust side. Though, `half` isn't a heavy enough crate to worry about it affecting compile times.
- ✅ **`copy-dylibs`**: In case dynamic libraries are used (like with the CUDA execution provider), creates a symlink to them in the relevant places in the `target` folder to make [compile-time dynamic linking](/setup/linking#compile-time-dynamic-linking) work.
- ⚒️ **`load-dynamic`**: Enables [runtime dynamic linking](/setup/linking#runtime-loading-with-load-dynamic), which alleviates many of the troubles with compile-time dynamic linking and offers greater flexibility.
-- ⚒️ **`fetch-models`**: Enables the [`SessionBuilder::commit_from_url`](https://docs.rs/ort/2.0.0-rc.2/ort/struct.SessionBuilder.html#method.commit_from_url) method, allowing you to quickly download & run a model from a URL. This should only be used for quick testing.
-- ⚒️ **`operator-libraries`**: Allows for sessions to load custom operators from dynamic C++ libraries via [`SessionBuilder::with_operator_library`](https://docs.rs/ort/2.0.0-rc.2/ort/struct.SessionBuilder.html#method.with_operator_library). If possible, we recommend [writing your custom ops in Rust instead](/perf/custom-operators).
+- ⚒️ **`fetch-models`**: Enables the [`SessionBuilder::commit_from_url`](https://docs.rs/ort/2.0.0-rc.3/ort/struct.SessionBuilder.html#method.commit_from_url) method, allowing you to quickly download & run a model from a URL. This should only be used for quick testing.
+- ⚒️ **`operator-libraries`**: Allows for sessions to load custom operators from dynamic C++ libraries via [`SessionBuilder::with_operator_library`](https://docs.rs/ort/2.0.0-rc.3/ort/struct.SessionBuilder.html#method.with_operator_library). If possible, we recommend [writing your custom ops in Rust instead](/perf/custom-operators).
## Execution providers
Each [execution provider](/perf/execution-providers) is also gated behind a Cargo feature.
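The flags described on this page combine freely in the dependency declaration. A minimal sketch using the `load-dynamic` feature named above (feature combination shown for illustration; check the crate's feature list for your version):

```toml
[dependencies]
# Resolve onnxruntime at runtime rather than linking at compile time.
ort = { version = "2.0.0-rc.3", features = ["load-dynamic"] }
```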
diff --git a/docs/pages/setup/platforms.mdx b/docs/pages/setup/platforms.mdx
index f83d131..6fbec09 100644
--- a/docs/pages/setup/platforms.mdx
+++ b/docs/pages/setup/platforms.mdx
@@ -5,7 +5,7 @@ description: ONNX Runtime, and by extension `ort`, supports a wide variety of pl
import { Callout } from 'nextra/components';
-Here are the supported platforms and binary availability status, as of v2.0.0-rc.2.
+Here are the supported platforms and binary availability status, as of v2.0.0-rc.3.
* 🟢 - Supported. Dynamic & static binaries provided by pyke.
* 🔷 - Supported. Static binaries provided by pyke.
diff --git a/ort-sys/Cargo.toml b/ort-sys/Cargo.toml
index 015a465..e07f561 100644
--- a/ort-sys/Cargo.toml
+++ b/ort-sys/Cargo.toml
@@ -1,7 +1,7 @@
[package]
name = "ort-sys"
-description = "Unsafe Rust bindings for ONNX Runtime 1.17 - Optimize and Accelerate Machine Learning Inferencing"
-version = "2.0.0-rc.2"
+description = "Unsafe Rust bindings for ONNX Runtime 1.18 - Optimize and accelerate machine learning inference & training"
+version = "2.0.0-rc.3"
edition = "2021"
rust-version = "1.70"
license = "MIT OR Apache-2.0"