
Commit

Update tripy version to 0.0.4 (#397)
parthchadha authored Nov 20, 2024
1 parent 89e2090 commit 4f8fd90
Showing 3 changed files with 26 additions and 3 deletions.
25 changes: 24 additions & 1 deletion tripy/docs/packages.html
@@ -9,6 +9,9 @@

 <body>
 <h1>Package Index</h1>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.4/tripy-0.0.4-py3-none-any.whl">tripy-0.0.4-py3-none-any.whl</a><br>
+
 <a
 href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/tripy-v0.0.3/tripy-0.0.3-py3-none-any.whl">tripy-0.0.3-py3-none-any.whl</a><br>
 
@@ -102,6 +105,26 @@ <h1>Package Index</h1>
 href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp312-cp312-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp312-cp312-linux_x86_64.whl</a><br>
 <a
 href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.36/mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp39-cp39-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.36+cuda12.trt102-cp39-cp39-linux_x86_64.whl</a><br>
-</body>
+
+
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp310-cp310-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp310-cp310-linux_x86_64.whl</a><br>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp311-cp311-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp311-cp311-linux_x86_64.whl</a><br>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp312-cp312-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp312-cp312-linux_x86_64.whl</a><br>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp39-cp39-linux_x86_64.whl">mlir_tensorrt_compiler-0.1.37+cuda12.trt102-cp39-cp39-linux_x86_64.whl</a><br>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp310-cp310-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp310-cp310-linux_x86_64.whl</a><br>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp311-cp311-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp311-cp311-linux_x86_64.whl</a><br>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp312-cp312-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp312-cp312-linux_x86_64.whl</a><br>
+<a
+href="https://github.com/NVIDIA/TensorRT-Incubator/releases/download/mlir-tensorrt-v0.1.37/mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp39-cp39-linux_x86_64.whl">mlir_tensorrt_runtime-0.1.37+cuda12.trt102-cp39-cp39-linux_x86_64.whl</a><br>
+
+
+</body>
 
 </html>
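The packages.html page above is a plain HTML index of wheel links that pip can consume via --find-links, so publishing this change is what makes the 0.0.4 wheel installable. A minimal install sketch follows; the GitHub Pages URL is an assumption about where tripy/docs/packages.html ends up being hosted, not something stated in this commit.

```python
# Sketch: install the newly listed tripy 0.0.4 wheel by pointing pip at the
# package index page. INDEX_URL is an assumed hosting location for
# tripy/docs/packages.html; adjust it to wherever the page is actually served.
import subprocess
import sys

INDEX_URL = "https://nvidia.github.io/TensorRT-Incubator/packages.html"  # assumed

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "tripy==0.0.4",
    "--find-links", INDEX_URL,  # pip parses the <a href="...whl"> links on the page
])
```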
2 changes: 1 addition & 1 deletion tripy/pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "tripy"
-version = "0.0.3"
+version = "0.0.4"
 authors = [{name = "NVIDIA", email="[email protected]"}]
 description = "Tripy: A Python Programming Model For TensorRT"
 readme = "README.md"
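As a quick sanity check on the bump above (a sketch, not part of the commit), the version field can be read back from pyproject.toml with the standard-library tomllib parser.

```python
# Sketch: confirm the [project] version in tripy/pyproject.toml is now 0.0.4.
# Assumes the script runs from the repository root and Python 3.11+ for tomllib.
import tomllib

with open("tripy/pyproject.toml", "rb") as f:
    version = tomllib.load(f)["project"]["version"]

assert version == "0.0.4", f"unexpected version: {version}"
```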
2 changes: 1 addition & 1 deletion tripy/tripy/__init__.py
@@ -15,7 +15,7 @@
 # limitations under the License.
 #
 
-__version__ = "0.0.3"
+__version__ = "0.0.4"
 
 # Import TensorRT to make sure all dependent libraries are loaded first.
 import tensorrt
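Because __init__.py imports tensorrt before anything else so that dependent libraries are loaded first, a short post-install check (a sketch, assuming the 0.0.4 wheel is installed) can confirm both the reported version and that the TensorRT import side effect happened.

```python
# Sketch: verify the installed package reports the bumped version and that
# importing tripy has already loaded tensorrt, as tripy/__init__.py intends.
import sys

import tripy

print(tripy.__version__)          # expected: "0.0.4"
print("tensorrt" in sys.modules)  # expected: True, loaded by tripy's __init__
```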
