From f540e17f05b63f0568de1c45d12317f29808ddf7 Mon Sep 17 00:00:00 2001
From: qiaolm <84311688+er-muyue@users.noreply.github.com>
Date: Wed, 7 Feb 2024 10:27:35 +0800
Subject: [PATCH] Update README.md

---
 README.md | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 8b27184..7158246 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ MobileVLM: Vision Language Model for Mobile Devices
 * **MobileVLM V2: Faster and Stronger Baseline for Vision Language Model**
-  [![arXiv](https://img.shields.io/badge/Arxiv-Comming-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2312.16886)
+  [![arXiv](https://img.shields.io/badge/Arxiv-2402.03766-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2402.03766)
   [![BibTex](https://img.shields.io/badge/📜-MobileVLM%20V2%20BibTex-Green.svg)](https://github.com/Meituan-AutoML/MobileVLM?tab=readme-ov-file#%EF%B8%8F-reference)
@@ -51,9 +51,9 @@ MobileVLM: Vision Language Model for Mobile Devices
 * ⏳ MobileLLaMA Pre-training code.
 * ⏳ MobileVLM V2 training data and code are being sorted out.
-* **`Feb. 6th, 2024`**: 🔥🔥🔥 **MobileVLM V2** is out! The evaluation code of MobileVLM V2 is available now! Our MobileVLM V2 weights are publicly avaliable on the HuggingFace website. Enjoy [them](https://huggingface.co/mtgv/) !
-* **`Feb. 6th, 2024`**: The SFT code and dataset of MobileLLaMA are released now! You can train your own chat model.
-* **`Jan. 23th, 2024`**: 🚀🚀🚀 **MobileVLM** is officially supported by [`llama.cpp`](https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/MobileVLM-README.md) now ! Have a try !
+* **`Feb. 06th, 2024`**: 🔥🔥🔥 **MobileVLM V2** is out! Paper [here](https://arxiv.org/abs/2402.03766)! The evaluation code of MobileVLM V2 is available now! Our MobileVLM V2 weights are publicly available on the HuggingFace website. Enjoy [them](https://huggingface.co/mtgv/) !
+* **`Feb. 06th, 2024`**: The SFT code and dataset of MobileLLaMA are released now! You can train your own chat model.
+* **`Jan. 23rd, 2024`**: 🚀🚀🚀 **MobileVLM** is officially supported by [`llama.cpp`](https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/MobileVLM-README.md) now ! Have a try !
 * **`Jan. 15th, 2024`**: Customized `llama.cpp` for **MobileVLM** and its [deployment instruction](#deployment-on-mobile-devices) on mobile devices.
 * **`Jan. 11st, 2024`**: The training and evaluation codes of MobileVLM are available now! Follow these step-by-step instructions below to easily train your own mobileVLM in **5 hours** ⚡️ !
 * **`Dec. 31st, 2023`**: Our MobileVLM weights are uploaded on the HuggingFace website. We also provide inference examples for the MobileLLaMA/MobileVLM model so that anyone can enjoy [them](https://huggingface.co/mtgv/) early.
@@ -305,6 +305,14 @@ If you find MobileVLM or MobileLLaMA useful in your research or applications, pl
   journal={arXiv preprint arXiv:2312.16886},
   year={2023}
 }
+
+@article{chu2024mobilevlm,
+  title={MobileVLM V2: Faster and Stronger Baseline for Vision Language Model},
+  author={Chu, Xiangxiang and Qiao, Limeng and Zhang, Xinyu and Xu, Shuang and Wei, Fei and Yang, Yang and Sun, Xiaofei and Hu, Yiming and Lin, Xinyang and Zhang, Bo and Shen, Chunhua},
+  journal={arXiv preprint arXiv:2402.03766},
+  year={2024}
+}
+
 ```
 </details>