
Releases: microsoft/onnxruntime

ONNX Runtime v1.3.0

19 May 03:15
eb5da13

Key Updates

General

  • ONNX 1.7 support
    • Opset 12
    • Function expansion support, enabling several new ONNX 1.7 ops such as NegativeLogLikelihoodLoss, GreaterOrEqual, LessOrEqual, and Celu to run without a dedicated kernel implementation.
  • [Preview] ONNX Runtime Training
    • ONNX Runtime Training is a new capability, released in preview, to accelerate training of transformer models. See the sample here to use this feature in your training experiments.
  • Improved threadpool support for better resource utilization
    • Improved threadpool abstractions that switch between OpenMP and Eigen threadpools based on build settings. All operators have been updated to use these new abstractions.
    • The improved Eigen-based threadpool now allows ops to provide a cost estimate (among other hints, such as thread affinity) for operations.
    • Simpler configuration of thread count: if built with OpenMP, use the OpenMP environment variables; otherwise, use the ORT APIs to configure the number of threads (see the sketch after this list).
    • Support for sessions to share a global threadpool. See this for more information.
  • Performance improvements
    • ~10% average measured latency improvement across key representative models (including ONNX Model Zoo models, MLPerf, and production models shipped in Microsoft products)
    • Further latency improvements for Transformer models on CPU and GPU - benchmark script
    • Improved batch inferencing latency for scikit-learn models for large batch sizes
      • Significant improvements in the implementations of the following ONNX operators: TreeEnsembleRegressor, TreeEnsembleClassifier, LinearRegressor, LinearClassifier, SVMRegressor, SVMClassifier, TopK
    • C# API optimizations - PR3171
  • Telemetry enabled for Windows (more details on telemetry collection)
  • Improved error reporting when a kernel cannot be found due to missing type implementation
  • Minor fixes based on static code analysis
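
For non-OpenMP builds, thread counts are configured through the ORT APIs. A minimal Python sketch (model path and thread counts are illustrative):

```python
import onnxruntime as ort

# Configure threading via SessionOptions (applies to non-OpenMP builds;
# OpenMP builds should use the OpenMP environment variables instead).
opts = ort.SessionOptions()
opts.intra_op_num_threads = 4  # threads used to parallelize a single operator
opts.inter_op_num_threads = 2  # threads used to run independent operators concurrently

sess = ort.InferenceSession("model.onnx", sess_options=opts)
```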

Dependency updates

Please note that this version of onnxruntime depends on the Visual C++ 2019 runtime; previous versions depended on Visual C++ 2017. Please also refer to https://github.com/microsoft/onnxruntime/tree/rel-1.3.0#system-requirements for the full set of system requirements.

APIs and Packages

  • [General Availability] Windows Machine Learning APIs - package published on Nuget - Microsoft.AI.MachineLearning
    • Performance improvements
    • Opset updates
  • [General Availability] ONNX Runtime with DirectML package published on Nuget - Microsoft.ML.OnnxRuntime.DirectML
  • [General Availability] Java API - Maven package coming soon.
  • [Preview] JavaScript (Node.js) API now available to build from the master branch.
  • ARM64 Linux CPU Python package now available on PyPI. Note: this requires building ONNX for ARM64.
  • Nightly dev builds from master (Nuget feed, TestPyPI - CPU, GPU)
  • API Updates
    • I/O binding support for Python API - reduces execution time significantly by allowing users to set up inputs/outputs on the GPU prior to model execution (see the sketch after this list).
    • API to specify free dimensions based on both denotations and symbolic names.
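
A minimal sketch of the new Python I/O binding, assuming a CUDA-enabled build; the model path and tensor names ("input", "output") are illustrative, and some helper methods may only be available in this or later releases:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
binding = sess.io_binding()

# Copy the input to the device once, up front, instead of per Run() call
x = np.zeros((1, 3, 224, 224), dtype=np.float32)
binding.bind_cpu_input("input", x)

# Keep the output on the GPU; fetch it only when needed
binding.bind_output("output", "cuda")

sess.run_with_iobinding(binding)
result = binding.copy_outputs_to_cpu()[0]
```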

Execution Providers

  • OpenVINO v2.0 EP
  • DirectML EP updates
    • Updated graph interface to abstract GPU-dependent graph optimization
    • ONNX opset 10 and 11 support
    • Initial support for 8-bit and quantized operators
    • Performance optimizations
  • [Preview] Rockchip NPU EP
  • [Preview] Xilinx FPGA Vitis-AI EP
  • Capability to build execution providers as DLLs - supported for the DNNL EP, with work in progress for other EPs (a provider-selection sketch follows this list).
    • If enabled in the build, the provider will be available as a shared library. Previously, EPs had to be statically linked with the core code.
    • There is no runtime cost to including an EP that isn't loaded; applications can now dynamically decide when to load it based on the model.
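
Once an EP is available (statically linked or built as a shared library), selecting it from Python is a matter of provider priority. A sketch, assuming a build that includes the DNNL EP:

```python
import onnxruntime as ort

# Lists the EPs this build can use
print(ort.get_available_providers())

# Providers are tried in priority order; the names are build-dependent
sess = ort.InferenceSession(
    "model.onnx",
    providers=["DnnlExecutionProvider", "CPUExecutionProvider"],
)
```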

Contributions

We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Adam Pocock, pranavm-nvidia, Andrew Kane, Takeshi Watanabe, Jianhao Zhang, Colin Jermain, Andrews548, Jan Scholz, Pranav Prakash, suryasidd, and S. Manohar Karlapalem.

The ONNX Runtime Training code was originally developed internally at Microsoft, before being ported to Github. We’d like to recognize the original contributors: Aishwarya Bhandare, Ashwin Kumar, Cheng Tang, Du Li, Edward Chen, Ethan Tao, Fanny Nina Paravecino, Ganesan Ramalingam, Harshitha Parnandi Venkata, Jesse Benson, Jorgen Thelin, Ke Deng, Liqun Fu, Li-Wen Chang, Peng Wang, Sergii Dymchenko, Sherlock Huang, Stuart Schaefer, Tao Qin, Thiago Crepaldi, Tianju Xu, Weichun Wang, Wei Zuo, Wei-Sheng Chin, Weixing Zhang, Xiaowan Dong, Xueyun Zhu, Zeeshan Siddiqui, and Zixuan Jiang.

Known Issues

  1. The source doesn't compile on Ubuntu 14.04. See #4048
  2. Crash when setting IntraOpNumThreads using the C/C++/C# API. A fix is available in the master branch.
    Workaround: Setting IntraOpNumThreads has no effect when using an ORT build with OpenMP enabled, so the call is not required and can safely be removed. Use the OpenMP environment variables to set the threading parameters for OpenMP-enabled builds (the recommended approach); see the sketch below.
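
A minimal Python sketch of the workaround (thread count illustrative); the OpenMP variables must be set before onnxruntime is imported so the OpenMP runtime picks them up:

```python
import os

# Set OpenMP threading parameters before importing onnxruntime
os.environ["OMP_NUM_THREADS"] = "4"        # illustrative value
os.environ["OMP_WAIT_POLICY"] = "PASSIVE"  # optional: reduce busy-waiting

import onnxruntime as ort

# No IntraOpNumThreads/SessionOptions threading call needed on OpenMP builds
sess = ort.InferenceSession("model.onnx")
```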

ONNX Runtime v1.2.0

10 Mar 22:22
dacb42f

Key Updates

Execution Providers

  • [Preview] Availability of Windows Machine Learning (WinML) APIs in Windows builds of ONNX Runtime, with DirectML for GPU acceleration
    • Windows ML is a WinRT API designed specifically for Windows developers that already ships as an inbox component in newer Windows versions
    • Compatible with Windows 8.1 for CPU and Windows 10 1709 for GPU
    • Available as source code in the GitHub repo and as pre-built Nuget packages (windows.ai.machinelearning.dll)
    • For additional documentation and samples on getting started, visit the Windows ML API Reference documentation
  • TensorRT Execution Provider upgraded to TRT 7
  • CUDA updated to 10.1
    • Linux build requires CUDA Runtime 10.1.243, cublas10-10.2.1.243, and CUDNN 7.6.5.32. Note: cublas 10.1.x will not work
    • Windows build requires CUDA Runtime 10.1.243, CUDNN 7.6.5.32
    • onnxruntime now depends on the curand library, which is part of the CUDA SDK. If you already have the SDK fully installed, this won't be an issue.

Builds and Packages

  • Nuget package structure updated. There is now a separate managed assembly (Microsoft.ML.OnnxRuntime.Managed) shared between the CPU and GPU Nuget packages; the "native" Nuget depends on the "managed" Nuget to bring it into relevant projects automatically. PR 3104. Note that this should be transparent for customers installing the Nuget packages. ORT package details are here.
  • Build system: support getting dependencies from vcpkg (a C++ package manager for Windows, Linux, and MacOS)
  • Capability to generate an onnxruntime Android Archive (AAR) file from source, which can be imported directly in Android Studio

API Updates

  • SessionOptions:
    • default value of max_num_graph_transformation_steps increased to 10
    • default value of graph optimization level is changed to ORT_ENABLE_ALL (99)
  • OrtEnv can be created/destroyed multiple times
  • Java API
    • Gradle is now required to build onnxruntime
    • Available on Android
  • C API Additions (a Python-side sketch of the equivalent metadata/profiling surface follows this list):
    • GetDenotationFromTypeInfo
    • CastTypeInfoToMapTypeInfo
    • CastTypeInfoToSequenceTypeInfo
    • GetMapKeyType
    • GetMapValueType
    • GetSequenceElementType
    • ReleaseMapTypeInfo
    • ReleaseSequenceTypeInfo
    • SessionEndProfiling
    • SessionGetModelMetadata
    • ModelMetadataGetProducerName
    • ModelMetadataGetGraphName
    • ModelMetadataGetDomain
    • ModelMetadataGetDescription
    • ModelMetadataLookupCustomMetadataMap
    • ModelMetadataGetVersion
    • ReleaseModelMetadata
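
The same metadata and profiling surface is reachable from the Python API; a hedged sketch (model path illustrative):

```python
import onnxruntime as ort

opts = ort.SessionOptions()
opts.enable_profiling = True  # needed for end_profiling() below

sess = ort.InferenceSession("model.onnx", sess_options=opts)

# Python counterparts of the ModelMetadata* C APIs
meta = sess.get_modelmeta()
print(meta.producer_name, meta.graph_name, meta.domain)
print(meta.description, meta.version)
print(meta.custom_metadata_map)  # dict, cf. ModelMetadataLookupCustomMetadataMap

# Counterpart of SessionEndProfiling: returns the profile file name
profile_path = sess.end_profiling()
```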

Operators

  • This release introduces a change to the forward-compatibility pattern ONNX Runtime previously followed. The change guarantees correctness of model prediction and removes behavioral ambiguity due to missing opset information. This release adds a model opset number and IR version check: ONNX Runtime will not load models whose ONNX opset version is higher than the opset supported by that release (see the version matrix). If higher opset versions are needed, consider using custom operators via ORT's custom schema/kernel registry mechanism. A sketch for inspecting a model's opset follows this list.
  • Int8 type support for Where Op
  • Updates to Contrib ops:
    • Changes: ReorderInput in kMSNchwcDomain, SkipLayerNormalization
    • New: QLinearAdd, QLinearMul, QLinearReduceMean, MulInteger, QLinearAveragePool
  • Added featurizer operators as an expansion of Contrib operators - these are not part of the official build and are experimental
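
With the stricter opset/IR check, it can be useful to inspect a model's declared versions before loading it. A small sketch using the onnx Python package (model path illustrative):

```python
import onnx

model = onnx.load("model.onnx")
print("IR version:", model.ir_version)
for opset in model.opset_import:
    # An empty domain string denotes the default ONNX operator set
    print("domain:", opset.domain or "ai.onnx", "opset:", opset.version)
```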

Contributions

We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Eric Cousineau (Toyota Research Institute), Adam Pocock (Oracle), tinchi, Changyoung Koh, Andrews548, Jianhao Zhang, nicklas-mohr-jas, James Yuzawa, William Tambellini, Maher Jendoubi, Mina Asham, Saquib Nadeem Hashmi, Sanster, and Takeshi Watanabe.

ONNX Runtime v1.1.2

21 Feb 17:41
a8efa42

This is a minor patch release on 1.1.1.

This fixes a minor issue where some logging in execution_frame.cc could not be controlled by SessionLogVerbosityLevel in SessionOptions. PR #3043

ONNX Runtime v1.1.1

24 Jan 01:25

This is a minor patch release on 1.1.0.

Summary

  • Updated the default optimization level to apply all optimizations by default, for the best performance on popular models
  • Operator updates and other bug fixes

All fixes

  • update default optimization level + fix gemm_activation fusion #2791
  • Fix C# handling of unicode strings #2697
  • Initialize max of softmax with lowest of float #2786
  • Implement a more stable softmax #2715
  • add uint8 support to where op #2792
  • Fix memory leak in samples and test #2778
  • Fix memory leak in TRT #2815
  • Fix nightly build version number issue #2771

ONNX Runtime v1.1.0

19 Dec 22:07
c33dab3

Key Updates

  • Performance improvements to reduce BERT model inference latency on both GPU and CPU. Updates include:
    • Additional fused CPU kernels as well as related transformers for key operators such as Attention, EmbedLayerNormalization, SkipLayerNormalization, FastGelu
    • Further optimization such as parallelizing Gelu and LayerNorm, enabling legacy stream mode, improving performance of elementwise operators, and fusing add bias into SkipLayerNormalization and FastGelu
    • Extended CUDA support for opset 11
  • Performance improvements for Faster R-CNN and Mask R-CNN with new and updated implementations of opset 11 CUDA kernels, including Resize, Expand, Scatter, and Pad
  • TensorRT Execution Provider updates, including support for inputs with dynamic shapes
  • MKL-DNN (renamed DNNL) updated to v1.1
  • [Preview] NN API Execution Provider for Android - see more
  • [Preview] Java API for ONNX Runtime - see more
  • Tool for Python API: Automatically maps a dataframe to the inputs of an ONNX graph based on schema information in the pandas frame
  • Custom ops can be loaded from shared libraries: custom ops can now be packaged in shared libraries and distributed for use in multiple applications without modification (see the sketch below).
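
A sketch of loading such a shared library from Python (library and model paths illustrative):

```python
import onnxruntime as ort

opts = ort.SessionOptions()
# Register the shared library that packages the custom op schemas/kernels
opts.register_custom_ops_library("./libcustom_ops.so")

sess = ort.InferenceSession("model_with_custom_ops.onnx", sess_options=opts)
```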

Contributions

We'd like to thank our community members across various teams at Microsoft and other companies for all the valuable contributions.

We'd like to extend special recognition to these individuals for their contributions in this release: Jianhao Zhang (JD AI), Adam Pocock (Oracle), nihui (Tencent), and Nick Groszewski. From the Intel teams, we'd like to thank Patrick Foley, Akhila Vidiyala, Ilya Lavrenov, Manohar Karlapalem, Surya Siddharth Pemmaraju, Sreekanth Yalachigere, Michal Karzynski, Thomas V Trimeloni, Tomasz Dolbniak, Amy Zhuang, Scott Cyphers, Alexander Slepko and other team members on their valuable work to support the Intel Execution Providers for ONNX Runtime.

ONNX Runtime v1.0.0

30 Oct 06:27

Key Updates

General

  • ONNX 1.6 compatibility - operator support for all opset 11 ops on CPU, including Sequence ops.
  • Free dimension override: adds the ability to override free dimensions in a model's inputs. Free dimensions are tensor dimensions that aren't statically known at model authoring time and must be provided at runtime; they are most often used for the batch size of a model's inputs, allowing customizable batch sizes at runtime. Pinning them enables certain optimizations since the shape is then known a priori (see the sketch after this list).
  • Performance improvements to further accelerate model inferencing latency on CPU and GPU. Notable updates include:
    • Additional CUDA operators added to support Object Detection and BERT models. Note: CUDA operator coverage is still limited and performance will vary significantly depending on the model and operator usage.
    • Improved parallelism for operators that use GEMM and MatMul
    • New implementation of 64-bit MatMul on x86_64 CPUs
    • Added the ability to set the number of threads used for intra- and inter-operator parallelism, allowing optimal configuration for both sequential and concurrent inferencing scenarios
    • Gelu fusion optimizer
  • Threading updates:
    • Eigen ThreadPool is now the default (previously there were two thread pool implementations, TaskThreadPool and Eigen ThreadPool)
    • Ability to disable multithreading by setting the thread pool size to 1 and onnxruntime_USE_OPENMP to OFF.
    • MLAS now uses the number of thread pool threads plus one as the parallelism level (e.g. if you have 4 CPUs, set the thread pool size to 3 so that there is only one thread per CPU).
  • CPU Python package is manylinux1 compliant. The GPU Python package is manylinux2010 and compatible with CUDA 10.0/cuDNN 7.6
  • Support for CentOS 6 and 7 for Python, C, and C++. Most of the code is now C++11 compliant (previously required C++14). C# .NET Core compatibility coming soon.
  • Package for ArchLinux
  • Telemetry - component level logging through Trace Logging for Windows builds. Data collection is limited and used strictly to identify areas for improvement. You can read more about the data collected and how to manage these settings here.
  • Bug fixes to address various issues filed on Github and other channels
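
A sketch of the free dimension override described above, using the Python SessionOptions surface from later releases (the denotation and dimension names are illustrative):

```python
import onnxruntime as ort

opts = ort.SessionOptions()
# Pin a free dimension so shapes are known a priori, enabling more optimization
opts.add_free_dimension_override_by_denotation("DATA_BATCH", 1)  # by denotation
opts.add_free_dimension_override_by_name("batch_size", 1)        # by symbolic name

sess = ort.InferenceSession("model.onnx", sess_options=opts)
```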

API updates

  • Updates to the C API for clarity of usage. The 1.0 version of the API is now stable and will maintain backwards compatibility; versioning is supported to accommodate future updates.
  • C APIs are ABI compatible and follow Semantic Versioning. Programs linked against the current version of the ONNX Runtime library will continue to work with subsequent releases without updating any client code or re-linking.
  • New session option available for serializing optimized ONNX models (see the sketch after this list)
  • Enabled some new capabilities through the Python and C# APIs for feature parity, including registration of execution providers in Python and setting additional run options in C#.
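
A sketch of the new serialization option (file paths illustrative):

```python
import onnxruntime as ort

opts = ort.SessionOptions()
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
# Write the post-optimization graph to disk for inspection or faster reload
opts.optimized_model_filepath = "model.optimized.onnx"

sess = ort.InferenceSession("model.onnx", sess_options=opts)
```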

Execution Providers (EP)

Updates

  • General Availability of the OpenVINO™ EP for Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, and the Intel® Vision Accelerator Design with Intel® Movidius™ Myriad™ VPU, powered by OpenVINO™.
  • MKL-DNN EP updated from 0.18.1 to 1.0.2 for an average of 5-10% (up to 50%) performance improvement on ONNX Model Zoo model latency
  • nGraph EP updated from 0.18 to 0.26, with support of new operators for quantization and performance improvements on LSTM ops (without peephole) and Pad op
  • TensorRT EP updated to the latest TensorRT 6.0 libraries
  • Android DNNLibrary version update

New EP support

  • [Preview] NUPHAR (Neural-network Unified Preprocessing Heterogeneous ARchitecture) is a TVM and LLVM based EP offering model acceleration by compiling nodes in subgraphs into optimized functions via JIT
  • [Preview] DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows, providing GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers
  • [Preview] Support for Intel® Vision Accelerator Design with Intel® Arria™ 10 FPGA powered by OpenVINO™.
  • [Preview] ARM Compute Library (ACL) Execution Provider targets ARM CPUs and GPUs for optimized execution of ONNX operators using the low-level libraries.

Build updates

  • Three new cmake options: onnxruntime_USE_GEMMLOWP, onnxruntime_USE_AUTOML, onnxruntime_USE_DML
  • Removed two cmake options: onnxruntime_USE_MLAS and onnxruntime_USE_EIGEN_THREADPOOL. These are always ON now.
  • The minimum supported gcc version is 4.8.2

Tooling

  • Availability of ONNX Go Live tool, which automates the process of shipping ONNX models by combining model conversion, correctness tests, and performance tuning into a single pipeline as a series of Docker images.
  • Updates to the quantization tool
    • Supports selective quantization for some nodes instead of all possible nodes
    • Bias quantization for Conv nodes
    • Node fusion for dynamic quantization
  • onnxruntime_perf_test usage updates:
    • new option "-y" for controlling inter_op_num_threads
    • max optimization level is now 99, and 3 is now an invalid value. In most cases, this tool should be run with "-o 99"

Other Dependency Updates

  • Replaced gsl with gsl-lite to be compatible with C++11
  • Added NVIDIA cub
  • Added Wil for DML execution provider
  • Pybind11 updated from 2.2.4 to 2.4.0 to fix a compatibility issue with Baidu PaddlePaddle and some other Python modules that also depend on Pybind11
  • TVM updated to a newer version

ONNX Runtime v0.5.1

12 Oct 23:52

Bug Fixes

  • Fix in C# API marshalling for InferenceSession.Run()
  • Some fixes in OnnxRuntime server

Only NuGet packages are released for this patch, since only C# API users are impacted.

ONNX Runtime v0.5.0

01 Aug 19:48
1f8019b
  • Execution Provider updates
    • MKL-DNN provider (subgraph based execution) for improved performance
    • Intel OpenVINO EP now available for Public Preview - build instructions
    • Update to CUDA 10 for inferencing with NVIDIA GPUs
    • The base CPU EP has faster convolution performance using the NCHWc blocked layout. This layout optimization can be enabled by setting the graph optimization level to 3 in the session options.
  • C++ API for inferencing (wrapper on C API)
  • ONNX Runtime Server (Beta) for inferencing with HTTP and GRPC endpoints
  • Python Operator (Beta) to support custom Python code in a single node of an ONNX graph to make it easier for experimentation of custom operators
  • Support for the Keras-based Mask R-CNN model. The model relies on some custom operators pending addition to ONNX; in the meantime, it can be converted using this script for inferencing with ONNX Runtime 0.5. Other object detection models can be found in the ONNX Model Zoo.
  • Minor updates to the C API
    • For consistency, all C APIs now return an ORT status code
  • Code coverage for this release is 83%

ONNX Runtime v0.4.0

02 May 02:25
bf859a9

Key Updates

  • New execution providers for improved performance on specialized hardware
    • Intel nGraph
    • NVIDIA TensorRT
  • ONNX 1.5 compatibility
    • Opset 10 operator support
    • Supports newly added ONNX model zoo object detection models (YOLO v3, SSD)
    • Quantization operators
  • Updates to C API for Custom Operators
    • Allocation of outputs during compute
    • C++ wrapper to greatly simplify implementation
    • Supports custom op DLLs when ONNX Runtime is compiled statically
  • Graph optimizations with Constant Folding for improved performance
  • Official binary packages
    • Nuget package creation pipeline updated with security-focused tasks
      • CredScan
      • SDLNative Rules for PreFast
      • BinSim
    • Additional binaries built with MKL-ML published in Nuget
    • Size reduction in Windows (700KB+), Linux (65%) and Mac (45%) binaries

ONNX Runtime v0.3.1

09 Apr 00:39

This is a patch release for 0.3.0.

Updates include

  • Binary size reduction through usage of protobuf-lite and operator fixes
  • Build option to disable contrib ops (ops not in ONNX standard)
  • Build option to statically link MSVC
  • Minor bug fixes