2.0.0-rc.3

decahedron1 committed Jul 6, 2024
1 parent 0a43482 commit 3dec017
Showing 9 changed files with 14 additions and 14 deletions.
Cargo.toml (3 additions, 3 deletions)
@@ -23,8 +23,8 @@ exclude = [ 'examples/cudarc' ]

[package]
name = "ort"
description = "A safe Rust wrapper for ONNX Runtime 1.17 - Optimize and Accelerate Machine Learning Inferencing"
version = "2.0.0-rc.2"
description = "A safe Rust wrapper for ONNX Runtime 1.18 - Optimize and accelerate machine learning inference & training"
version = "2.0.0-rc.3"
edition = "2021"
rust-version = "1.70"
license = "MIT OR Apache-2.0"
@@ -83,7 +83,7 @@ qnn = [ "ort-sys/qnn" ]
[dependencies]
ndarray = { version = "0.15", optional = true }
thiserror = "1.0"
-ort-sys = { version = "2.0.0-rc.2", path = "ort-sys" }
+ort-sys = { version = "2.0.0-rc.3", path = "ort-sys" }
libloading = { version = "0.8", optional = true }

ureq = { version = "2.1", optional = true, default-features = false, features = [ "tls" ] }
README.md (1 addition, 1 deletion)
@@ -6,7 +6,7 @@
<a href="https://crates.io/crates/ort" target="_blank"><img alt="Crates.io" src="https://img.shields.io/crates/v/ort?style=for-the-badge&label=ort&logo=rust"></a> <img alt="ONNX Runtime" src="https://img.shields.io/badge/onnxruntime-v1.18.1-blue?style=for-the-badge&logo=cplusplus">
</div>

-`ort` is an (unofficial) [ONNX Runtime](https://onnxruntime.ai/) 1.18 wrapper for Rust based on the now inactive [`onnxruntime-rs`](https://github.com/nbigaouette/onnxruntime-rs). ONNX Runtime accelerates ML inference on both CPU & GPU.
+`ort` is an (unofficial) [ONNX Runtime](https://onnxruntime.ai/) 1.18 wrapper for Rust based on the now inactive [`onnxruntime-rs`](https://github.com/nbigaouette/onnxruntime-rs). ONNX Runtime accelerates ML inference and training on both CPU & GPU.

## 📖 Documentation
- [Guide](https://ort.pyke.io/)
docs/pages/_meta.json (1 addition, 1 deletion)
@@ -10,7 +10,7 @@
},
"link-api": {
"title": "API Reference ↗",
"href": "https://docs.rs/ort/2.0.0-rc.2/ort"
"href": "https://docs.rs/ort/2.0.0-rc.3/ort"
},
"link-crates": {
"title": "Crates.io ↗",
docs/pages/index.mdx (2 additions, 2 deletions)
@@ -11,7 +11,7 @@ import { Callout, Card, Cards, Steps } from 'nextra/components';
</div>

<Callout type='warning'>
-These docs are for the latest alpha version of `ort`, `2.0.0-rc.2`. This version is production-ready (just not API stable) and we recommend new & existing projects use it.
+These docs are for the latest alpha version of `ort`, `2.0.0-rc.3`. This version is production-ready (just not API stable) and we recommend new & existing projects use it.
</Callout>

`ort` makes it easy to deploy your machine learning models to production via [ONNX Runtime](https://onnxruntime.ai/), a hardware-accelerated inference engine. With `ort` + ONNX Runtime, you can run almost any ML model (including ResNet, YOLOv8, BERT, LLaMA) on almost any hardware, often far faster than PyTorch, and with the added bonus of Rust's efficiency.
@@ -37,7 +37,7 @@ Converting a neural network to a graph representation like ONNX opens the door t
If you have a [supported platform](/setup/platforms) (and you probably do), installing `ort` couldn't be any simpler! Just add it to your Cargo dependencies:
```toml
[dependencies]
ort = "2.0.0-rc.2"
ort = "2.0.0-rc.3"
```
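
Once the dependency is in place, a minimal inference sketch looks like the following. This is illustrative only, not part of this commit: the model path, input shape, and output name are assumptions, and `anyhow` plus the default `ndarray` feature are presumed available.

```rust
use ort::{GraphOptimizationLevel, Session};

fn main() -> anyhow::Result<()> {
	// "model.onnx" is a placeholder path, not a model shipped with `ort`.
	let model = Session::builder()?
		.with_optimization_level(GraphOptimizationLevel::Level3)?
		.commit_from_file("model.onnx")?;

	// An assumed image-style input; real models dictate their own shapes.
	let input = ndarray::Array4::<f32>::zeros((1, 3, 224, 224));
	let outputs = model.run(ort::inputs![input]?)?;

	// "output" is an assumed output name; check your model's metadata.
	let prediction = outputs["output"].try_extract_tensor::<f32>()?;
	println!("output shape: {:?}", prediction.shape());
	Ok(())
}
```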

### Convert your model
docs/pages/migrating/v2.mdx (1 addition, 1 deletion)
@@ -173,7 +173,7 @@ let l = outputs["latents"].try_extract_tensor::<f32>()?;
```

## Execution providers
-Execution provider structs with public fields have been replaced with builder pattern structs. See the [API reference](https://docs.rs/ort/2.0.0-rc.2/ort/index.html?search=ExecutionProvider) and the [execution providers reference](/perf/execution-providers) for more information.
+Execution provider structs with public fields have been replaced with builder pattern structs. See the [API reference](https://docs.rs/ort/2.0.0-rc.3/ort/index.html?search=ExecutionProvider) and the [execution providers reference](/perf/execution-providers) for more information.

```diff
-// v1.x
```
docs/pages/perf/execution-providers.mdx (1 addition, 1 deletion)
@@ -83,7 +83,7 @@ fn main() -> anyhow::Result<()> {
```

## Configuring EPs
-EPs have configuration options to control behavior or increase performance. Each `XXXExecutionProvider` struct returns a builder with configuration methods. See the [API reference](https://docs.rs/ort/2.0.0-rc.2/ort/index.html?search=ExecutionProvider) for the EP structs for more information on which options are supported and what they do.
+EPs have configuration options to control behavior or increase performance. Each `XXXExecutionProvider` struct returns a builder with configuration methods. See the [API reference](https://docs.rs/ort/2.0.0-rc.3/ort/index.html?search=ExecutionProvider) for the EP structs for more information on which options are supported and what they do.

```rust
use ort::{CoreMLExecutionProvider, Session};
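
// The commit view cuts this snippet off after the `use` line. What follows
// is a hedged continuation sketching the builder pattern described above,
// not the elided original; `with_subgraphs` is an assumed CoreML builder
// option, and "model.onnx" is a placeholder path.
fn main() -> anyhow::Result<()> {
	let session = Session::builder()?
		.with_execution_providers([CoreMLExecutionProvider::default()
			.with_subgraphs()
			.build()])?
		.commit_from_file("model.onnx")?;
	let _ = session;
	Ok(())
}
```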
docs/pages/setup/cargo-features.mdx (2 additions, 2 deletions)
@@ -9,8 +9,8 @@ title: Cargo features
- **`half`**: Enables support for float16 & bfloat16 tensors via the [`half`](https://crates.io/crates/half) crate. ONNX models that are converted to 16-bit precision will typically convert to/from 32-bit floats at the input/output, so you will likely never actually need to interact with a 16-bit tensor on the Rust side. Though, `half` isn't a heavy enough crate to worry about it affecting compile times.
- **`copy-dylibs`**: In case dynamic libraries are used (like with the CUDA execution provider), creates a symlink to them in the relevant places in the `target` folder to make [compile-time dynamic linking](/setup/linking#compile-time-dynamic-linking) work.
- ⚒️ **`load-dynamic`**: Enables [runtime dynamic linking](/setup/linking#runtime-loading-with-load-dynamic), which alleviates many of the troubles with compile-time dynamic linking and offers greater flexibility.
-- ⚒️ **`fetch-models`**: Enables the [`SessionBuilder::commit_from_url`](https://docs.rs/ort/2.0.0-rc.2/ort/struct.SessionBuilder.html#method.commit_from_url) method, allowing you to quickly download & run a model from a URL. This should only be used for quick testing.
-- ⚒️ **`operator-libraries`**: Allows for sessions to load custom operators from dynamic C++ libraries via [`SessionBuilder::with_operator_library`](https://docs.rs/ort/2.0.0-rc.2/ort/struct.SessionBuilder.html#method.with_operator_library). If possible, we recommend [writing your custom ops in Rust instead](/perf/custom-operators).
+- ⚒️ **`fetch-models`**: Enables the [`SessionBuilder::commit_from_url`](https://docs.rs/ort/2.0.0-rc.3/ort/struct.SessionBuilder.html#method.commit_from_url) method, allowing you to quickly download & run a model from a URL. This should only be used for quick testing.
+- ⚒️ **`operator-libraries`**: Allows for sessions to load custom operators from dynamic C++ libraries via [`SessionBuilder::with_operator_library`](https://docs.rs/ort/2.0.0-rc.3/ort/struct.SessionBuilder.html#method.with_operator_library). If possible, we recommend [writing your custom ops in Rust instead](/perf/custom-operators).
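
As a concrete illustration of the `fetch-models` feature from the list above, a hedged sketch (the URL is a placeholder, and `anyhow` is assumed):

```rust
use ort::Session;

fn main() -> anyhow::Result<()> {
	// With `fetch-models` enabled, `commit_from_url` downloads the model
	// before committing the session. Placeholder URL below.
	let _session = Session::builder()?
		.commit_from_url("https://example.com/model.onnx")?;
	Ok(())
}
```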

## Execution providers
Each [execution provider](/perf/execution-providers) is also gated behind a Cargo feature.
docs/pages/setup/platforms.mdx (1 addition, 1 deletion)
@@ -5,7 +5,7 @@ description: ONNX Runtime, and by extension `ort`, supports a wide variety of pl

import { Callout } from 'nextra/components';

-Here are the supported platforms and binary availability status, as of v2.0.0-rc.2.
+Here are the supported platforms and binary availability status, as of v2.0.0-rc.3.

* 🟢 - Supported. Dynamic & static binaries provided by pyke.
* 🔷 - Supported. Static binaries provided by pyke.
ort-sys/Cargo.toml (2 additions, 2 deletions)
@@ -1,7 +1,7 @@
[package]
name = "ort-sys"
description = "Unsafe Rust bindings for ONNX Runtime 1.17 - Optimize and Accelerate Machine Learning Inferencing"
version = "2.0.0-rc.2"
description = "Unsafe Rust bindings for ONNX Runtime 1.18 - Optimize and Accelerate Machine Learning Inferencing"
version = "2.0.0-rc.3"
edition = "2021"
rust-version = "1.70"
license = "MIT OR Apache-2.0"
