- Read the documentation landing page before using this repository: https://github.com/TexasInstruments/edgeai
- Specifically the section: edgeai-mpu
Edge AI model training, quantization, compilation/benchmark & Model Zoo
- Release name: 11.0
- Git branch: r11.0
- tidl_tools version: 11_00_08_00
- edgeai-tidl-tools git tag: 11_00_08_00
- Date: 2025 July 9
We are in the process of adding support for several new models. Configs for models verified in this release are in this repository and the models are available in edgeai-modelzoo. The following new models have been verified:
Model name | Model Type | Source repository |
---|---|---|
rtmdet lite version (multiple flavours) | Object Detection | edgeai-mmdetection |
fastbev (without temporal) | Multi-view 3DOD for ADAS | edgeai-mmdetection3d |
bevformer_tiny | Multi-view 3DOD for ADAS | edgeai-mmdetection3d |
Note: The 3DOD models are trained with the pandaset dataset (a multi-view, multi-modality ADAS / Autonomous Driving dataset). edgeai-mmdetection3d and edgeai-benchmark now support the pandaset dataset. See more details of this dataset in edgeai-mmdetection3d.
- Accuracy fix for object detection models in edgeai-modelmaker and edgeai-mmdetection
More details are in the Release Notes
This repository is an aggregation of multiple repositories using git subtree. Due to the large number of commits, git clone can be slow; we recommend cloning only to the required depth. For example, to clone the main branch with a shallow depth of 1:
git clone --depth 1 https://github.com/TexasInstruments/edgeai-tensorlab.git
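To clone the release branch mentioned above (r11.0) instead of the default branch, the same shallow-clone approach works with git's `--branch` option:

```bash
# Shallow clone of only the r11.0 release branch
git clone --depth 1 --branch r11.0 https://github.com/TexasInstruments/edgeai-tensorlab.git
```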
If you have done a shallow clone and later need the full history, you can convert it into a full clone by fetching the entire history of commits:
git fetch --unshallow
Or, you can incrementally deepen the history:
git fetch --depth=<new_depth>
Want to use Edge AI on TI's MPU devices, but don't know where to start? We have multiple solutions to help you develop and deploy models.
EDGE-AI-STUDIO - easy to use GUI tools
- Model Composer: Capture images, annotate them, train and compile models using GUI.
- Model Analyzer: Use our hosted Jupyter notebooks to try model compilation online.
edgeai-modelmaker - a command-line tool that supports the Bring Your Own Data (BYOD) development flow
- Use EDGE-AI-STUDIO Model Composer (the GUI tool above) to collect and annotate data to create a dataset
- Export the dataset onto your machine.
- Use edgeai-modelmaker to train a model using the dataset (a minimal invocation sketch follows this list). edgeai-modelmaker allows you to tweak more parameters than what is supported in the GUI tool.
- It is fully customizable, so you can look at how models and tasks are integrated and even add your own model or tasks.
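As a rough sketch of what the edgeai-modelmaker command-line flow looks like (the script name, target device and config file below are assumptions based on the edgeai-modelmaker documentation - check that repository for the exact, current usage):

```bash
# Hypothetical invocation: train and compile a model using a dataset exported from Model Composer.
# The target device (TDA4VM) and the config file are placeholders - see the edgeai-modelmaker README.
cd edgeai-modelmaker
./run_modelmaker.sh TDA4VM config_detection.yaml
```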
edgeai-modelzoo - for advanced users
- Navigate to edgeai-modelzoo to see example models, their documentation and performance benchmarks.
- Browse to the repositories that were used to train those models and try to train your own model using one of them.
- Use edgeai-benchmark or edgeai-tidl-tools to compile models and create compiled artifacts (see the sketch after this list).
- Run the compiled models using Edge AI SDK
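For the compilation and deployment steps above, edgeai-benchmark is typically driven through its run scripts; the script names and target device below are assumptions, so refer to the edgeai-benchmark documentation for the exact usage:

```bash
# Hypothetical flow: compile and benchmark Model Zoo models in emulation on a PC,
# then package the compiled artifacts for deployment with the Edge AI SDK on the target.
cd edgeai-benchmark
./run_benchmarks_pc.sh TDA4VM           # model compilation + accuracy/latency benchmark on PC
./run_package_artifacts_evm.sh TDA4VM   # package compiled artifacts for the target device
```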
- The subcomponents have detailed documentation; navigate into the sub-folders in the browser to see it. Here is a high level overview.
Category | ToolLink | Purpose | IS NOT |
---|---|---|---|
Model Zoo / Models collection | edgeai-modelzoo | Provides a collection of pretrained models, documentation & benchmark information | |
Model compilation & benchmarking | edgeai-benchmark | Wrapper on top of edgeai-tidl-tools for easy model compilation and speed/accuracy benchmarking - Bring your own model and compile, benchmark and generate artifacts for deployment on SDK with camera, inference and display (using edgeai-gst-apps) - Comprehends the inference pipeline including dataset loading, pre-processing and post-processing - Benchmarking of accuracy and latency with large datasets - Post training quantization - Docker for easy development environment setup | |
Model training tools | edgeai-modeloptimization | Model optimization tools for improved model training and for training TIDL-friendly models - Model surgery: modifies models with minimal loss in accuracy and makes them suitable for TI devices (replaces unsupported operators) - QAT: Quantization Aware Training to improve accuracy with fixed point quantization - Model pruning/sparsity: induces sparsity during training (only applicable for specific devices; in development) | Does not support Tensorflow |
Model training code | edgeai-torchvision, edgeai-mmdetection, edgeai-mmdetection3d, edgeai-hf-transformers, edgeai-mmpose, edgeai-tensorvision | Training repositories for various tasks - Provides extensions of popular training repositories (such as mmdetection, torchvision) with lite versions of models | Does not support Tensorflow |
End-to-end Model development - Datasets, Training & Compilation | edgeai-modelmaker | Beginner friendly, command line, integrated environment for training & compilation - Bring your own data, select a model, perform training and generate artifacts for deployment on SDK - Backend tool for Model Composer (early availability of features compared to Model Composer) | Does not support Bring Your Own Model workflow |
Example datasets, used in edgeai-modelmaker | edgeai-datasets | Example datasets | |
Category | ToolLink | Purpose | IS NOT |
---|---|---|---|
Model training code | edgeai-yolox | Being deprecated - use edgeai-mmpose for keypoint detection and edgeai-mmdetection for object detection | |
Technical documentation can be found in each repository. We also have a collection of technical reports & tutorials that give a high level overview of various topics - see Edge AI Tech Reports.
This umbrella repository uses and modifies several source repositories. The following table can be used to navigate to the original source repositories and see their contents & contributors.
Sub-repository/Sub-directory | Original source repository |
---|---|
edgeai-hf-transformers | https://github.com/huggingface/transformers |
edgeai-mmdeploy | https://github.com/open-mmlab/mmdeploy |
edgeai-mmdetection | https://github.com/open-mmlab/mmdetection |
edgeai-mmdetection3d | https://github.com/open-mmlab/mmdetection3d |
edgeai-mmpose | https://github.com/open-mmlab/mmpose |
edgeai-torchvision | https://github.com/pytorch/vision |
edgeai-yolox | https://github.com/Megvii-BaseDetection/YOLOX |
edgeai-benchmark | NA |
edgeai-modelzoo | NA |
edgeai-modelmaker | NA |
edgeai-modeloptimization | NA |
edgeai-tensorvision | NA |