This project aims to optimize the inference performance of various monocular depth estimation models using NVIDIA's TensorRT. It provides a pipeline to convert pre-trained PyTorch models into ONNX format and then into TensorRT engines, allowing for a comparative analysis of inference speeds.
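The speed comparison can be framed as a simple timing harness. The sketch below is a minimal, hedged illustration of how mean latency and FPS might be measured for either backend; `run_inference` is a placeholder for one forward pass of a PyTorch model or a TensorRT engine, not part of this repository.

```python
import time
from typing import Callable

def benchmark(run_inference: Callable[[], None], warmup: int = 10, iters: int = 100) -> dict:
    """Time an inference callable and report mean latency and FPS.

    `run_inference` is a stand-in for one forward pass of either the
    PyTorch model or the TensorRT engine; this harness only does the timing.
    """
    for _ in range(warmup):  # warm-up runs are excluded from timing
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    elapsed = time.perf_counter() - start
    return {"mean_ms": elapsed / iters * 1e3, "fps": iters / elapsed}

# Example with a dummy workload standing in for a model forward pass:
stats = benchmark(lambda: sum(i * i for i in range(1000)), warmup=2, iters=20)
print(f"{stats['mean_ms']:.3f} ms / iter, {stats['fps']:.1f} FPS")
```

The same harness can be run once against the PyTorch model and once against the TensorRT engine to produce comparable FPS numbers.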
Key Features:
- Introduction to various monocular depth estimation models and a TensorRT conversion pipeline.
- Performance comparison (FPS, inference time) between the original PyTorch models and the TensorRT-optimized models.
- Generation of 3D depth information and point clouds from 2D images.
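Generating a point cloud from a predicted depth map amounts to back-projecting each pixel through a pinhole camera model. The sketch below assumes known intrinsics (`fx`, `fy`, `cx`, `cy`), which in practice would come from the camera or be estimated by the model.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat surface 2 m in front of a tiny 4x4 "camera"
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```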
- Hardware: NVIDIA RTX3060 (notebook)
- OS: Windows Subsystem for Linux (WSL)
- Linux Distribution: Ubuntu 22.04.5 LTS
- CUDA Version: 12.8
# Create and activate a Conda virtual environment
conda create -n trte python=3.11 --yes
conda activate trte
# Install the required libraries
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip install tensorrt-cu12
pip install onnx
pip install opencv-python
pip install matplotlib

Each model directory contains a README.md file with detailed instructions.
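After installation, the environment can be sanity-checked without fully importing the heavy libraries. This is a small optional helper, not part of the repository:

```python
import importlib.util

def check_environment(modules=("torch", "tensorrt", "onnx", "cv2", "matplotlib")):
    """Report which required libraries are installed (find_spec avoids importing them)."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

for name, ok in check_environment().items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```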
| Model Name | Link to TensorRT Conversion | Main Outputs |
|---|---|---|
| Depth Anything V2 | TensorRT Conversion | Depth |
| Distill Any Depth | TensorRT Conversion | Depth |
| Depth Anything AC | TensorRT Conversion | Depth |
| Depth Pro | TensorRT Conversion | Depth |
| Uni Depth V2 | TensorRT Conversion | Depth |
| Metric3D V2 | TensorRT Conversion | Depth |
| UniK3D | TensorRT Conversion | Depth |
| MoGe-2 | TensorRT Conversion | Depth |
| VGGT | TensorRT Conversion | Depth |
| StreamVGGT | TensorRT Conversion | Depth |
| Depth Anything V3 | TensorRT Conversion | Depth |
- Unified Inference Script: Create a single inference script that accepts the model name as an argument to improve user experience.
- Summarize Performance Analysis: Add a table to the main README.md that summarizes the performance of all models (including input resolution, precision, and hardware details) for easy comparison.
- Docker Support: Add a Dockerfile to facilitate the environment setup and ensure reproducibility.
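The proposed unified inference script could dispatch on a model-name argument via a simple registry. This is a hedged sketch only: the model names and loader lambdas below are hypothetical placeholders, not functions from this repository.

```python
import argparse

# Hypothetical registry mapping model names to loader callables; real loaders
# would deserialize the corresponding TensorRT engine for that model.
MODEL_REGISTRY = {
    "depth-anything-v2": lambda: "loaded depth-anything-v2 engine",
    "depth-pro": lambda: "loaded depth-pro engine",
    "moge-2": lambda: "loaded moge-2 engine",
}

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Unified depth inference")
    parser.add_argument("--model", choices=sorted(MODEL_REGISTRY), required=True)
    parser.add_argument("--image", required=True, help="Path to the input image")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    engine = MODEL_REGISTRY[args.model]()  # dispatch on the model name
    print(f"{engine}; would run inference on {args.image}")

main(["--model", "depth-pro", "--image", "room.jpg"])
```

Adding a new model would then only require registering its loader, leaving the CLI unchanged.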