Releases: neo-ai/neo-ai-dlr
v1.10.0
Release 1.10.0 can be used to run models compiled using the release-1.10.0 branch of neo-ai/TVM.
This release adds a TF2 frontend parser; support for TF2 models was the major driving factor behind the release.
It also fixes a TRT memory leak, a TRT performance issue, a bug that doubled weights after TRT compilation, and a RelayVM failure on Windows.
This release upgrades TreeLite to version 1.2.0.
It also adds support for Keras dense 3D inputs and nested models.
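For context on the dense 3D input case: Keras applies a Dense layer only to the last axis of its input, so a 3-D input of shape (batch, steps, features) yields (batch, steps, units). A minimal pure-Python sketch of that last-axis behavior (illustrative only; a real Keras Dense layer also adds a bias and an activation):

```python
# Sketch: how a Dense layer treats a 3-D input.
# The weight matrix multiplies only the last (features) axis,
# leaving the batch and step axes intact.

def dense_3d(x, weights):
    """Apply a (features x units) weight matrix to the last axis of a
    3-D nested list shaped (batch, steps, features)."""
    units = len(weights[0])
    return [
        [
            [sum(f * weights[i][u] for i, f in enumerate(step))
             for u in range(units)]
            for step in sample
        ]
        for sample in x
    ]

# Input: batch=2, steps=3, features=4 (all ones, illustrative)
x = [[[1.0] * 4 for _ in range(3)] for _ in range(2)]
# Weight matrix: 4 features -> 5 units (all ones, illustrative)
w = [[1.0] * 5 for _ in range(4)]

y = dense_3d(x, w)
# Output shape is (2, 3, 5): only the last axis was transformed.
print(len(y), len(y[0]), len(y[0][0]))  # → 2 3 5
```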
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.
v1.9.0
Release 1.9.0 can be used to run models compiled using the release-1.9.0 branch of neo-ai/TVM.
This release fixes a bug in the CUDA NMS implementation affecting MXNet models and fixes RelayVM on 32-bit platforms.
It also adds support for TF2.x.
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.
v1.8.0
Release 1.8.0 can be used to run models compiled using the release-1.8.0 branch of neo-ai/TVM.
This release improves performance for PyTorch and TensorFlow object detection models on GPU devices.
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.
v1.7.0
Release 1.7.0 can be used to run models compiled using the release-1.7.0 branch of neo-ai/TVM.
This release supports PyTorch object detection models on GPU.
Loading multiple DLR models at once is now available.
It also supports YOLOv5 in PyTorch.
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.
v1.6.0
Release 1.6.0 can be used to run models compiled using the release-1.6.0 branch of neo-ai/TVM.
It enables specifying individual TVM artifacts in the CreateDLRModel API and adds APIs that accept DLTensors for SetInput/GetOutput. It also provides previously missing DLR C++ APIs for GraphRuntime and VMRuntime. Version 1.6.0 can skip loading TVM artifacts from disk, allowing the graph, params, and relay_exec data to be passed in directly.
This release supports PyTorch object detection models on CPU. Additional TensorFlow object detection models are supported on GPU, such as ssd_mobilenet, mask_rcnn_resnet, and faster_rcnn_resnet.
It also supports NonMaxSuppressionV5 (a.k.a. tf.image.non_max_suppression_with_scores), which returns scores in addition to indices and size.
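What distinguishes NonMaxSuppressionV5 from earlier NMS variants is that it hands back the surviving boxes' scores alongside their indices. A minimal pure-Python sketch of that behavior (illustrative only; the TF op also supports soft-NMS score decay via a sigma parameter, omitted here):

```python
# Sketch of greedy non-max suppression that, like NonMaxSuppressionV5,
# returns the selected boxes' scores as well as their indices.

def iou(a, b):
    """Intersection-over-union of two [y1, x1, y2, x2] boxes."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms_with_scores(boxes, scores, max_out, iou_thr, score_thr):
    """Greedily keep the highest-scoring boxes, dropping any box whose
    IoU with an already-kept box exceeds iou_thr."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    sel_idx, sel_scores = [], []
    for i in order:
        if scores[i] < score_thr or len(sel_idx) >= max_out:
            break
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in sel_idx):
            sel_idx.append(i)
            sel_scores.append(scores[i])
    return sel_idx, sel_scores

boxes = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, 2, 1, 3]]
scores = [0.9, 0.8, 0.7]
idx, s = nms_with_scores(boxes, scores, max_out=10, iou_thr=0.5, score_thr=0.0)
# Box 1 overlaps box 0 heavily and is suppressed; both indices and
# scores of the survivors come back.
print(idx, s)  # → [0, 2] [0.9, 0.7]
```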
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.
v1.5.0
Release 1.5.0 can be used to run models compiled using the release-1.5.0 branch of neo-ai/TVM.
This update enables support for TensorFlow object detection models on GPU. Because part of such a model runs on CPU while the rest runs on GPU, the model might have to be loaded with dev_type='cpu'. The DeviceType field in the metadata file now tells you which device type is required; it can be queried using GetDLRDeviceType.
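In practice, you can inspect the compiled model's metadata before loading to decide which device type to request. A sketch, assuming the metadata is a JSON file exposing the DeviceType field mentioned above (the file name and surrounding layout here are assumptions, not the documented format):

```python
import json
import os
import tempfile

def required_dev_type(metadata_path, default="gpu"):
    """Return the DeviceType a compiled model expects, falling back to
    `default` when the field is absent. The flat JSON layout assumed
    here is illustrative; only the DeviceType field name comes from
    the release notes."""
    with open(metadata_path) as f:
        meta = json.load(f)
    return meta.get("DeviceType", default)

# Demo with a stand-in metadata file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"DeviceType": "cpu"}, f)
    path = f.name

dev_type = required_dev_type(path)
os.remove(path)
print(dev_type)  # → cpu
```

A model whose metadata reports cpu would then be loaded with dev_type='cpu', matching the note above.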
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.
v1.4.0
Release 1.4.0 can be used to run models compiled using the release-1.4.0 branch of neo-ai/TVM.
This update brings support for TensorFlow object detection models for CPU, and some PyTorch object detection models.
v1.3.0
Release 1.3.0 can be used to run models compiled using the release-1.3.0 branch of neo-ai/TVM.
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.
v1.2.0
Release 1.2.0 can be used to run models compiled using the release-1.2.0 branch of neo-ai/TVM.
Pre-built wheels can be installed via pip install link-to-wheel. If you don't see your platform in the table, see Installing DLR for instructions to build from source.