MMFlow v1.0.0rc0 Released
We are excited to announce the release of MMFlow 1.0.0rc0. MMFlow 1.0.0rc0 is a part of the OpenMMLab 2.0 projects. Built upon the new training engine, MMFlow 1.x unifies the interfaces of datasets, models, evaluation, and visualization, and delivers faster training and testing speed.
Highlights
- New engines: MMFlow 1.x is based on MMEngine, which provides a general and powerful runner that allows more flexible customization and significantly simplifies the entry points of high-level interfaces. A minimal usage sketch follows this list.
- Unified interfaces: As a part of the OpenMMLab 2.0 projects, MMFlow 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All OpenMMLab 2.0 projects share the same design in these interfaces and logic, which makes it easier to build multi-task/multi-modality algorithms.
- Faster speed: We optimize the training and inference speed for common models.
- More documentation and tutorials: We add a bunch of documentation and tutorials to help users get started more smoothly. Read them here.
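As a minimal sketch of the new entry point, training can be driven end to end by a single MMEngine Runner built from a config file. The config path below is a hypothetical placeholder, not a guaranteed MMFlow file name:

```python
# Minimal sketch of launching training with the MMEngine Runner.
# The config path is a hypothetical placeholder; use a real MMFlow 1.x config.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('configs/pwcnet/pwcnet_8xb1_slong_flyingchairs-384x448.py')
cfg.work_dir = './work_dirs/pwcnet_demo'

# The Runner builds the dataloaders, model, evaluator, and visualizer from the
# config, then runs the training loop.
runner = Runner.from_cfg(cfg)
runner.train()
```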
Breaking Changes
We briefly list the major breaking changes here.
We will update the migration guide to provide complete details and migration instructions.
Training and testing
- MMFlow 1.x runs on PyTorch>=1.6. We have deprecated support for PyTorch 1.5 to embrace mixed precision training and other new features introduced since PyTorch 1.6. Some models can still run on PyTorch 1.5, but the full functionality of MMFlow 1.x is not guaranteed.
- MMFlow 1.x uses the Runner in MMEngine rather than the one in MMCV. The new Runner implements and unifies the building logic of datasets, models, evaluation, and visualization. Therefore, MMFlow 1.x no longer maintains the building logic of those modules in `mmflow.train.apis` and `tools/train.py`; that code has been migrated into MMEngine. Please refer to the migration guide of Runner in MMEngine for more details.
- The Runner in MMEngine also supports testing and validation. The testing scripts have been simplified as well and follow a similar logic to the training scripts when building the runner.
- The execution points of hooks in the new Runner have been enriched to allow more flexible customization. Please refer to the migration guide of Hook in MMEngine for more details.
- Learning rate and momentum scheduling has been migrated from `Hook` to `Parameter Scheduler` in MMEngine, as sketched below. Please refer to the migration guide of Parameter Scheduler in MMEngine for more details.
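As an illustration of the new scheduling style, a learning-rate schedule that was previously expressed through LR/momentum hooks now lives in the config as a list of parameter schedulers. `LinearLR` and `MultiStepLR` are standard MMEngine schedulers; the iteration counts, milestones, and factors below are illustrative placeholders, not MMFlow defaults:

```python
# Hedged sketch: parameter schedulers replace the old LR/momentum hooks.
# LinearLR and MultiStepLR are standard MMEngine schedulers; the numbers
# here are illustrative placeholders, not MMFlow defaults.
param_scheduler = [
    # linear warm-up over the first 1000 iterations
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=1000),
    # step decay for the rest of training
    dict(
        type='MultiStepLR',
        by_epoch=False,
        begin=1000,
        milestones=[400000, 600000],
        gamma=0.5),
]
```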
Configs
- The Runner in MMEngine uses a different config structure to ease the understanding of the components in the runner. Users can read the config example of MMFlow or refer to the migration guide in MMEngine for migration details. A sketch of the new structure follows this list.
- The file names of configs and models have also been refactored to follow the new rules unified across OpenMMLab 2.0 projects. Please refer to the user guide of config for more details.
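The snippet below is a rough sketch of the MMEngine-style config layout shared across OpenMMLab 2.0 projects. The field names (`train_dataloader`, `optim_wrapper`, `train_cfg`, ...) follow MMEngine conventions, while the dataset/metric types and all numbers are illustrative placeholders rather than MMFlow defaults:

```python
# Hedged sketch of the MMEngine-style config layout; concrete types and
# numbers are illustrative placeholders, not MMFlow defaults.

# dataloader settings replace the old monolithic `data` dict
train_dataloader = dict(
    batch_size=1,
    num_workers=2,
    sampler=dict(type='InfiniteSampler', shuffle=True),
    dataset=dict(type='FlyingChairs', data_root='data/FlyingChairs_release'),
)

# the optimizer is now wrapped by an optimizer wrapper
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='Adam', lr=1e-4, weight_decay=4e-4),
)

# training/validation loops replace the old `runner` and `workflow` fields
train_cfg = dict(type='IterBasedTrainLoop', max_iters=100000, val_interval=5000)
val_cfg = dict(type='ValLoop')

# evaluation is configured through evaluators/metrics
val_evaluator = dict(type='EndPointError')
```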
Components
- Dataset
- Data Transforms
- Model
- Evaluation
- Visualization
Improvements
- The training speed of models trained with some common strategies has been improved, including those using synchronized batch normalization and mixed precision training.
- Mixed precision training is supported for all models, as sketched below. However, some models may get NaN results due to numerical issues. We will update the documentation and list their results (accuracy or failure) under mixed precision training.
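As a hedged sketch, mixed precision training in the MMEngine-based setup can be enabled by switching the optimizer wrapper to `AmpOptimWrapper` in the config; the optimizer type and hyperparameters below are placeholders:

```python
# Hedged sketch: enable automatic mixed precision via MMEngine's AmpOptimWrapper.
# The optimizer type and hyperparameters are illustrative placeholders.
optim_wrapper = dict(
    type='AmpOptimWrapper',  # swap in for the plain OptimWrapper
    optimizer=dict(type='Adam', lr=1e-4, weight_decay=4e-4),
)
```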
Ongoing changes
- Inference interfaces: a unified inference interface will be supported in the future to ease the use of released models.
- Interfaces of useful tools that can be used in notebooks: more useful tools implemented in the `tools` directory will get Python interfaces so that they can be used in notebooks and downstream libraries.
- Documentation: we will add more design docs, tutorials, and migration guidance so that the community can dive deep into our new design, participate in future development, and smoothly migrate downstream libraries to MMFlow 1.x.