
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.



Introduction

MMTracking is an open-source video perception toolbox based on PyTorch. It is part of the OpenMMLab project.

The master branch works with PyTorch 1.6+.

Major features

  • The First Unified Video Perception Platform

    We are the first open-source toolbox that unifies versatile video perception tasks, including video object detection, multiple object tracking, single object tracking, and video instance segmentation.

  • Modular Design

    We decompose the video perception framework into different components, so one can easily construct a customized method by combining different modules.

  • Simple, Fast and Strong

    Simple: MMTracking interoperates with other OpenMMLab projects. It is built upon MMDetection, so any detector can be adopted simply by modifying the config.

    Fast: All operations run on GPUs. The training and inference speeds are faster than or comparable to other implementations.

    Strong: We reproduce state-of-the-art models and some of them even outperform the official implementations.
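The modular design described above can be sketched as an MMTracking-style config: each component is a `dict` that can be swapped independently. The fragment below is illustrative only (field names are abbreviated and not copied from any actual config file in the repo):

```python
# Illustrative MMTracking-style config sketch (not a runnable config):
# a tracking method is assembled from exchangeable components.
model = dict(
    type='DeepSORT',                   # the video perception method
    detector=dict(type='FasterRCNN'),  # any MMDetection detector config
    motion=dict(type='KalmanFilter'),  # motion model component
    reid=dict(type='BaseReID'),        # appearance embedding component
    tracker=dict(                      # association logic and thresholds
        type='SortTracker',
        obj_score_thr=0.5,
        match_iou_thr=0.5))
```

Swapping in a different detector means editing only the `detector=dict(...)` block; the motion model, re-ID network, and tracker stay untouched.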

What's New

Released pretrained models for Mask2Former, PrDiMP, and StrongSORT++.

v1.0.0rc1 was released on 10/10/2022. Please refer to changelog.md for details and release history.

Get Started

Please refer to get_started.md for installation instructions.

Please refer to inference.md for the basic usage of MMTracking. If you want to train and test your own model, please see dataset_prepare.md and train_test.md.

A Colab tutorial is also provided. You may preview the notebook here or directly run it on Colab.

There are also usage tutorials, such as learning about configs, visualization, and analysis tools.

Benchmark and model zoo

Results and models are available in the model zoo.

Video Object Detection

Supported Methods

Supported Datasets

Multi-Object Tracking

Supported Methods

Supported Datasets
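To give a flavor of what a multi-object tracking component does, here is a self-contained toy sketch of the greedy IoU association step used in SORT-style methods. It is illustrative only, not MMTracking's implementation:

```python
# Toy greedy IoU association, the core matching step in SORT-style
# multi-object trackers. Illustrative only -- not MMTracking code.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_thr=0.3):
    """Greedily match track boxes to detection boxes by descending IoU.

    Returns a list of (track_index, detection_index) pairs.
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_thr or ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 21, 31, 31), (1, 1, 11, 11)]
print(associate(tracks, dets))  # [(1, 0), (0, 1)]
```

Real trackers replace the greedy loop with Hungarian matching and fold in motion prediction and appearance cues, but the matching idea is the same.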

Video Instance Segmentation

Supported Methods

Supported Datasets

Single Object Tracking

Supported Methods

Supported Datasets

Contributing

We appreciate all contributions to improve MMTracking. Please refer to CONTRIBUTING.md for the contributing guidelines and this discussion for the development roadmap.

Acknowledgement

MMTracking is an open-source project that welcomes any contribution and feedback. We hope that the toolbox and benchmark serve the growing research community by providing a flexible as well as standardized toolkit to reimplement existing methods and develop new video perception methods.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmtrack2020,
    title={{MMTracking: OpenMMLab} video perception toolbox and benchmark},
    author={MMTracking Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmtracking}},
    year={2020}
}

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMEngine: OpenMMLab foundational library for training deep learning models.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM installs OpenMMLab packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning Toolbox and Benchmark.
  • MMRazor: OpenMMLab Model Compression Toolbox and Benchmark.
  • MMFewShot: OpenMMLab FewShot Learning Toolbox and Benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMGeneration: OpenMMLab Generative Model toolbox and benchmark.
  • MMDeploy: OpenMMLab deep learning model deployment toolset.
