Commit 4dd8a86

Bump version to v1.0.0rc8 (#1583)
* Bump version to v1.0.0rc8

* Apply suggestions from code review

Co-authored-by: Yixiao Fang <[email protected]>

* Update README.md

---------

Co-authored-by: Yixiao Fang <[email protected]>
1 parent be389eb commit 4dd8a86

File tree

10 files changed: +94, -9 lines

README.md

Lines changed: 18 additions & 0 deletions
@@ -86,6 +86,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351
 
 ## What's new
 
+🌟 v1.0.0rc8 was released in 22/05/2023
+
+- Support multiple **multi-modal** algorithms and inferencers. You can explore these features by the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
+- Add EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
+- Register torchvision transforms into MMPretrain, you can now easily integrate torchvision's data augmentations in MMPretrain. See [the doc](https://mmpretrain.readthedocs.io/en/latest/api/data_process.html#torchvision-transforms)
+
 🌟 v1.0.0rc7 was released in 07/04/2023
 
 - Integrated Self-supervised learning algorithms from **MMSelfSup**, such as **MAE**, **BEiT**, etc.

@@ -160,6 +166,9 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
 <td>
 <b>Self-supervised Learning</b>
 </td>
+<td>
+<b>Multi-Modality Algorithms</b>
+</td>
 <td>
 <b>Others</b>
 </td>

@@ -239,6 +248,15 @@ Results and models are available in the [model zoo](https://mmpretrain.readthedo
 <li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
 </ul>
 </td>
+<td>
+<ul>
+<li><a href="configs/blip">BLIP (arxiv'2022)</a></li>
+<li><a href="configs/blip2">BLIP-2 (arxiv'2023)</a></li>
+<li><a href="configs/ofa">OFA (CoRR'2022)</a></li>
+<li><a href="configs/flamingo">Flamingo (NeurIPS'2022)</a></li>
+<li><a href="configs/chinese_clip">Chinese CLIP (arxiv'2022)</a></li>
+</ul>
+</td>
 <td>
 Image Retrieval Task:
 <ul>
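The multi-modal support announced above is exposed through high-level inferencers, which the gradio demo wraps. A minimal usage sketch, assuming mmpretrain's `inference_model` helper and a BLIP captioning model alias (both names are assumptions and may differ from the released API):

```python
# Sketch of calling one of the new multi-modal models; the helper name and
# the model alias are assumptions, check the mmpretrain docs for the exact
# API shipped in v1.0.0rc8.
from mmpretrain import inference_model

result = inference_model('blip-base_3rdparty_caption', 'demo/cat-dog.png')
print(result)  # expected to contain the predicted caption
```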

README_zh-CN.md

Lines changed: 18 additions & 0 deletions
@@ -84,6 +84,12 @@ https://github.com/open-mmlab/mmpretrain/assets/26739999/e4dcd3a2-f895-4d1b-a351
 
 ## Changelog
 
+🌟 v1.0.0rc8 was released on 2023/5/22
+
+- Support multiple multi-modal algorithms and inferencers. You can explore these features through the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
+- Add the EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
+- Register torchvision transforms into MMPretrain; you can now easily integrate torchvision's data augmentations into MMPretrain.
+
 🌟 v1.0.0rc7 was released on 2023/4/7
 
 - Integrated self-supervised learning algorithms from MMSelfSup, such as `MAE`, `BEiT`

@@ -157,6 +163,9 @@ mim install -e ".[multimodal]"
 <td>
 <b>Self-supervised Learning</b>
 </td>
+<td>
+<b>Multi-Modality Algorithms</b>
+</td>
 <td>
 <b>Others</b>
 </td>

@@ -235,6 +244,15 @@ mim install -e ".[multimodal]"
 <li><a href="configs/mixmim">MixMIM (arXiv'2022)</a></li>
 </ul>
 </td>
+<td>
+<ul>
+<li><a href="configs/blip">BLIP (arxiv'2022)</a></li>
+<li><a href="configs/blip2">BLIP-2 (arxiv'2023)</a></li>
+<li><a href="configs/ofa">OFA (CoRR'2022)</a></li>
+<li><a href="configs/flamingo">Flamingo (NeurIPS'2022)</a></li>
+<li><a href="configs/chinese_clip">Chinese CLIP (arxiv'2022)</a></li>
+</ul>
+</td>
 <td>
 Image Retrieval Task:
 <ul>

docker/serve/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ ARG CUDA="11.3"
 ARG CUDNN="8"
 FROM pytorch/torchserve:latest-gpu
 
-ARG MMPRE="1.0.0rc5"
+ARG MMPRE="1.0.0rc8"
 
 ENV PYTHONUNBUFFERED TRUE

docs/en/get_started.md

Lines changed: 2 additions & 2 deletions
@@ -63,7 +63,7 @@ pip install -U openmim && mim install -e .
 Just install with mim.
 
 ```shell
-pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
+pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
 ```
 
 ```{note}

@@ -80,7 +80,7 @@ can add `[multimodal]` during the installation. For example:
 mim install -e ".[multimodal]"
 
 # Install as a Python package
-mim install "mmpretrain[multimodal]>=1.0.0rc7"
+mim install "mmpretrain[multimodal]>=1.0.0rc8"
 ```
 
 ## Verify the installation
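After either install path above, the bumped package can be confirmed from Python; a minimal check that relies only on the version attribute touched by this commit:

```python
# Confirm the installed mmpretrain matches the release this commit tags.
import mmpretrain

print(mmpretrain.__version__)  # expect '1.0.0rc8' or newer
```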

docs/en/notes/changelog.md

Lines changed: 47 additions & 0 deletions
@@ -1,5 +1,52 @@
 # Changelog (MMPreTrain)
 
+## v1.0.0rc8(22/05/2023)
+
+### Highlights
+
+- Support multiple multi-modal algorithms and inferencers. You can explore these features by the [gradio demo](https://github.com/open-mmlab/mmpretrain/tree/main/projects/gradio_demo)!
+- Add EVA-02, Dino-V2, ViT-SAM and GLIP backbones.
+- Register torchvision transforms into MMPretrain, you can now easily integrate torchvision's data augmentations in MMPretrain.
+
+### New Features
+
+- Support Chinese CLIP. ([#1576](https://github.com/open-mmlab/mmpretrain/pull/1576))
+- Add ScienceQA Metrics ([#1577](https://github.com/open-mmlab/mmpretrain/pull/1577))
+- Support multiple multi-modal algorithms and inferencers. ([#1561](https://github.com/open-mmlab/mmpretrain/pull/1561))
+- add eva02 backbone ([#1450](https://github.com/open-mmlab/mmpretrain/pull/1450))
+- Support dinov2 backbone ([#1522](https://github.com/open-mmlab/mmpretrain/pull/1522))
+- Support some downstream classification datasets. ([#1467](https://github.com/open-mmlab/mmpretrain/pull/1467))
+- Support GLIP ([#1308](https://github.com/open-mmlab/mmpretrain/pull/1308))
+- Register torchvision transforms into mmpretrain ([#1265](https://github.com/open-mmlab/mmpretrain/pull/1265))
+- Add ViT of SAM ([#1476](https://github.com/open-mmlab/mmpretrain/pull/1476))
+
+### Improvements
+
+- [Refactor] Support to freeze channel reduction and add layer decay function ([#1490](https://github.com/open-mmlab/mmpretrain/pull/1490))
+- [Refactor] Support resizing pos_embed while loading ckpt and format output ([#1488](https://github.com/open-mmlab/mmpretrain/pull/1488))
+
+### Bug Fixes
+
+- Fix scienceqa ([#1581](https://github.com/open-mmlab/mmpretrain/pull/1581))
+- Fix config of beit ([#1528](https://github.com/open-mmlab/mmpretrain/pull/1528))
+- Incorrect stage freeze on RIFormer Model ([#1573](https://github.com/open-mmlab/mmpretrain/pull/1573))
+- Fix ddp bugs caused by `out_type`. ([#1570](https://github.com/open-mmlab/mmpretrain/pull/1570))
+- Fix multi-task-head loss potential bug ([#1530](https://github.com/open-mmlab/mmpretrain/pull/1530))
+- Support bce loss without batch augmentations ([#1525](https://github.com/open-mmlab/mmpretrain/pull/1525))
+- Fix clip generator init bug ([#1518](https://github.com/open-mmlab/mmpretrain/pull/1518))
+- Fix the bug in binary cross entropy loss ([#1499](https://github.com/open-mmlab/mmpretrain/pull/1499))
+
+### Docs Update
+
+- Update PoolFormer citation to CVPR version ([#1505](https://github.com/open-mmlab/mmpretrain/pull/1505))
+- Refine Inference Doc ([#1489](https://github.com/open-mmlab/mmpretrain/pull/1489))
+- Add doc for usage of confusion matrix ([#1513](https://github.com/open-mmlab/mmpretrain/pull/1513))
+- Update MMagic link ([#1517](https://github.com/open-mmlab/mmpretrain/pull/1517))
+- Fix example_project README ([#1575](https://github.com/open-mmlab/mmpretrain/pull/1575))
+- Add NPU support page ([#1481](https://github.com/open-mmlab/mmpretrain/pull/1481))
+- train cfg: Removed old description ([#1473](https://github.com/open-mmlab/mmpretrain/pull/1473))
+- Fix typo in MultiLabelDataset docstring ([#1483](https://github.com/open-mmlab/mmpretrain/pull/1483))
+
 ## v1.0.0rc7(07/04/2023)
 
 ### Highlights
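The "Register torchvision transforms" entry means torchvision augmentations can now be referenced directly from a data-pipeline config. A minimal sketch, assuming the `torchvision/` type prefix described in the linked data_process documentation; the specific transforms and arguments here are illustrative only:

```python
# Hypothetical training pipeline mixing MMPretrain and torchvision transforms.
# The 'torchvision/' prefix is the registration convention added in this
# release; the chosen transforms and their arguments are just an example.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='torchvision/RandomResizedCrop', size=176),
    dict(type='torchvision/RandomHorizontalFlip', p=0.5),
    dict(type='PackInputs'),
]
```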

docs/en/notes/faq.md

Lines changed: 2 additions & 1 deletion
@@ -16,7 +16,8 @@ and make sure you fill in all required information in the template.
 
 | MMPretrain version | MMEngine version  |   MMCV version   |
 | :----------------: | :---------------: | :--------------: |
-|  1.0.0rc7 (main)   | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
+|  1.0.0rc8 (main)   | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+|      1.0.0rc7      | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
 
 ```{note}
 Since the `dev` branch is under frequent development, the MMEngine and MMCV

docs/zh_CN/get_started.md

Lines changed: 2 additions & 2 deletions
@@ -67,7 +67,7 @@ pip install -U openmim && mim install -e .
 Just install it directly with mim.
 
 ```shell
-pip install -U openmim && mim install "mmpretrain>=1.0.0rc7"
+pip install -U openmim && mim install "mmpretrain>=1.0.0rc8"
 ```
 
 ```{note}

@@ -83,7 +83,7 @@ The multi-modal models in MMPretrain require extra dependencies; to install these dependencies
 mim install -e ".[multimodal]"
 
 # Install as a Python package
-mim install "mmpretrain[multimodal]>=1.0.0rc7"
+mim install "mmpretrain[multimodal]>=1.0.0rc8"
 ```
 
 ## Verify the installation

docs/zh_CN/notes/faq.md

Lines changed: 2 additions & 1 deletion
@@ -13,7 +13,8 @@
 
 | MMPretrain version | MMEngine version  |   MMCV version   |
 | :----------------: | :---------------: | :--------------: |
-|  1.0.0rc7 (main)   | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
+|  1.0.0rc8 (main)   | mmengine >= 0.7.1 | mmcv >= 2.0.0rc4 |
+|      1.0.0rc7      | mmengine >= 0.5.0 | mmcv >= 2.0.0rc4 |
 
 ```{note}
 Since the `dev` branch is under frequent development, the MMEngine and MMCV version requirements may be inaccurate. If you are using

mmpretrain/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@
 mmcv_maximum_version = '2.1.0'
 mmcv_version = digit_version(mmcv.__version__)
 
-mmengine_minimum_version = '0.5.0'
+mmengine_minimum_version = '0.7.1'
 mmengine_maximum_version = '1.0.0'
 mmengine_version = digit_version(mmengine.__version__)
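For context, a pin like the one above is usually enforced with an import-time assertion of roughly this shape; a sketch only, the actual check and error message in `mmpretrain/__init__.py` may differ:

```python
# Sketch of the import-time compatibility check around the new minimum pin.
import mmengine
from mmengine.utils import digit_version

mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)

assert (digit_version(mmengine_minimum_version) <= mmengine_version
        < digit_version(mmengine_maximum_version)), (
    f'MMEngine=={mmengine.__version__} is used but incompatible. '
    f'Please install mmengine>={mmengine_minimum_version}, '
    f'<{mmengine_maximum_version}.')
```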

mmpretrain/version.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Copyright (c) OpenMMLab. All rights reserved
 
-__version__ = '1.0.0rc7'
+__version__ = '1.0.0rc8'
 
 
 def parse_version_info(version_str):
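`parse_version_info` (unchanged by this commit) is the usual OpenMMLab helper that turns the version string into a comparable tuple; a behavioural sketch, not the exact code in version.py:

```python
# Illustrative version parsing: '1.0.0rc8' -> (1, 0, 0, 'rc8'). The real
# implementation in mmpretrain/version.py may differ in detail.
def parse_version_info(version_str):
    version_info = []
    for part in version_str.split('.'):
        if part.isdigit():
            version_info.append(int(part))
        elif 'rc' in part:
            major, rc = part.split('rc')
            version_info.append(int(major))
            version_info.append(f'rc{rc}')
    return tuple(version_info)

print(parse_version_info('1.0.0rc8'))  # (1, 0, 0, 'rc8')
```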
