# Changelog (MMPreTrain)

## v1.0.0rc8(22/05/2023)

### Highlights

- Support multiple multi-modal algorithms and inferencers. You can explore these features with the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)! A minimal inferencer sketch is shown after this list.
- Add EVA-02, DINOv2, ViT-SAM and GLIP backbones.
- Register torchvision transforms into MMPretrain, so you can now easily integrate torchvision's data augmentations in MMPretrain (see the pipeline sketch after this list).

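As a quick taste of the new multi-modal inferencers, here is a minimal sketch of captioning an image. The `blip-base_3rdparty_caption` model name and the demo image path are illustrative assumptions; use `list_models()` to see which models your installation actually provides.

```python
# A minimal sketch of the new multi-modal inferencer API.
# The model name below is illustrative; `list_models()` shows what is
# actually available in your installation.
from mmpretrain import ImageCaptionInferencer, list_models

print(list_models(pattern='caption'))  # discover available caption models

inferencer = ImageCaptionInferencer('blip-base_3rdparty_caption')
result = inferencer('demo/cat-dog.png')[0]
print(result['pred_caption'])  # the predicted caption for the input image
```
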
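Similarly, here is a rough config sketch of the torchvision integration: pipeline transforms can be referenced through the new `torchvision/` type prefix. The surrounding conversion steps (`NumpyToPIL` / `PILToNumpy`) and the exact parameters are assumptions, so double-check the data transform docs of your version.

```python
# Sketch of a training pipeline that mixes MMPretrain and torchvision
# transforms via the newly registered `torchvision/` type prefix.
# torchvision transforms expect PIL images, hence the assumed
# NumpyToPIL / PILToNumpy conversion steps around them.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='NumpyToPIL', to_rgb=True),
    dict(type='torchvision/RandomResizedCrop', size=224),
    dict(type='torchvision/RandomHorizontalFlip', p=0.5),
    dict(type='PILToNumpy', to_bgr=True),
    dict(type='PackInputs'),
]
```
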
### New Features

- Support Chinese CLIP. ([#1576](https://github.com/open-mmlab/mmpretrain/pull/1576))
- Add ScienceQA metrics. ([#1577](https://github.com/open-mmlab/mmpretrain/pull/1577))
- Support multiple multi-modal algorithms and inferencers. ([#1561](https://github.com/open-mmlab/mmpretrain/pull/1561))
- Add the EVA-02 backbone. ([#1450](https://github.com/open-mmlab/mmpretrain/pull/1450))
- Support the DINOv2 backbone. ([#1522](https://github.com/open-mmlab/mmpretrain/pull/1522))
- Support some downstream classification datasets. ([#1467](https://github.com/open-mmlab/mmpretrain/pull/1467))
- Support GLIP. ([#1308](https://github.com/open-mmlab/mmpretrain/pull/1308))
- Register torchvision transforms into MMPretrain. ([#1265](https://github.com/open-mmlab/mmpretrain/pull/1265))
- Add the ViT of SAM. ([#1476](https://github.com/open-mmlab/mmpretrain/pull/1476))

### Improvements

- [Refactor] Support freezing channel reduction and add a layer decay function. ([#1490](https://github.com/open-mmlab/mmpretrain/pull/1490))
- [Refactor] Support resizing `pos_embed` while loading checkpoints and format the output. ([#1488](https://github.com/open-mmlab/mmpretrain/pull/1488))

### Bug Fixes

- Fix ScienceQA. ([#1581](https://github.com/open-mmlab/mmpretrain/pull/1581))
- Fix the config of BEiT. ([#1528](https://github.com/open-mmlab/mmpretrain/pull/1528))
- Fix incorrect stage freezing in the RIFormer model. ([#1573](https://github.com/open-mmlab/mmpretrain/pull/1573))
- Fix DDP bugs caused by `out_type`. ([#1570](https://github.com/open-mmlab/mmpretrain/pull/1570))
- Fix a potential bug in the multi-task head loss. ([#1530](https://github.com/open-mmlab/mmpretrain/pull/1530))
- Support BCE loss without batch augmentations. ([#1525](https://github.com/open-mmlab/mmpretrain/pull/1525))
- Fix a CLIP generator init bug. ([#1518](https://github.com/open-mmlab/mmpretrain/pull/1518))
- Fix a bug in binary cross-entropy loss. ([#1499](https://github.com/open-mmlab/mmpretrain/pull/1499))

### Docs Update

- Update the PoolFormer citation to the CVPR version. ([#1505](https://github.com/open-mmlab/mmpretrain/pull/1505))
- Refine the inference documentation. ([#1489](https://github.com/open-mmlab/mmpretrain/pull/1489))
- Add a doc on the usage of the confusion matrix. ([#1513](https://github.com/open-mmlab/mmpretrain/pull/1513))
- Update the MMagic link. ([#1517](https://github.com/open-mmlab/mmpretrain/pull/1517))
- Fix the example_project README. ([#1575](https://github.com/open-mmlab/mmpretrain/pull/1575))
- Add an NPU support page. ([#1481](https://github.com/open-mmlab/mmpretrain/pull/1481))
- Remove an outdated description from the train config docs. ([#1473](https://github.com/open-mmlab/mmpretrain/pull/1473))
- Fix a typo in the MultiLabelDataset docstring. ([#1483](https://github.com/open-mmlab/mmpretrain/pull/1483))

## v1.0.0rc7(07/04/2023)

### Highlights
