Update README.md
er-muyue authored Feb 7, 2024
1 parent 3ba7b5a commit f540e17
Showing 1 changed file with 12 additions and 4 deletions.
README.md (12 additions, 4 deletions)
@@ -11,7 +11,7 @@ MobileVLM: Vision Language Model for Mobile Devices
 </h5>
 
 * **MobileVLM V2: Faster and Stronger Baseline for Vision Language Model** <br>
-[![arXiv](https://img.shields.io/badge/Arxiv-Comming-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2312.16886)
+[![arXiv](https://img.shields.io/badge/Arxiv-2402.03766-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2402.03766)
 [![BibTex](https://img.shields.io/badge/📜-MobileVLM%20V2%20BibTex-Green.svg)](https://github.com/Meituan-AutoML/MobileVLM?tab=readme-ov-file#%EF%B8%8F-reference)
 
 <details>
@@ -51,9 +51,9 @@ MobileVLM: Vision Language Model for Mobile Devices
 
 * ⏳ MobileLLaMA Pre-training code.
 * ⏳ MobileVLM V2 training data and code are being sorted out.
-* **`Feb. 6th, 2024`**: 🔥🔥🔥 **MobileVLM V2** is out! The evaluation code of MobileVLM V2 is available now! Our MobileVLM V2 weights are publicly avaliable on the HuggingFace website. Enjoy [them](https://huggingface.co/mtgv/) !
-* **`Feb. 6th, 2024`**: The SFT code and dataset of MobileLLaMA are released now! You can train your own chat model.
-* **`Jan. 23th, 2024`**: 🚀🚀🚀 **MobileVLM** is officially supported by [`llama.cpp`](https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/MobileVLM-README.md) now ! Have a try !
+* **`Feb. 6th, 2024`**: 🔥🔥🔥 **MobileVLM V2** is out! Paper [here](https://arxiv.org/abs/2402.03766)! The evaluation code of MobileVLM V2 is available now, and our MobileVLM V2 weights are publicly available on the HuggingFace website. Enjoy [them](https://huggingface.co/mtgv/)!
+* **`Feb. 6th, 2024`**: The SFT code and dataset of MobileLLaMA are released! You can now train your own chat model.
+* **`Jan. 23rd, 2024`**: 🚀🚀🚀 **MobileVLM** is officially supported by [`llama.cpp`](https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/MobileVLM-README.md)! Have a try!
 * **`Jan. 15th, 2024`**: Customized `llama.cpp` for **MobileVLM** and its [deployment instructions](#deployment-on-mobile-devices) on mobile devices.
 * **`Jan. 11th, 2024`**: The training and evaluation code of MobileVLM is available now! Follow the step-by-step instructions below to train your own MobileVLM in **5 hours** ⚡️!
 * **`Dec. 31st, 2023`**: Our MobileVLM weights are uploaded to the HuggingFace website. We also provide inference examples for the MobileLLaMA/MobileVLM models so that anyone can enjoy [them](https://huggingface.co/mtgv/) early.
@@ -305,6 +305,14 @@ If you find MobileVLM or MobileLLaMA useful in your research or applications, pl
   journal={arXiv preprint arXiv:2312.16886},
   year={2023}
 }
+@article{chu2024mobilevlm,
+  title={MobileVLM V2: Faster and Stronger Baseline for Vision Language Model},
+  author={Chu, Xiangxiang and Qiao, Limeng and Zhang, Xinyu and Xu, Shuang and Wei, Fei and Yang, Yang and Sun, Xiaofei and Hu, Yiming and Lin, Xinyang and Zhang, Bo and Shen, Chunhua},
+  journal={arXiv preprint arXiv:2402.03766},
+  year={2024}
+}
 ```


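For reference, the weights mentioned in the news items above live on the HuggingFace Hub under the `mtgv` organization. Below is a minimal sketch of loading the MobileLLaMA chat model with the `transformers` library; the repo id is an assumption based on the organization page (https://huggingface.co/mtgv/), so check the Hub for the exact name. Note that the MobileVLM vision-language models require the repository's own inference code, not this plain text-only path.

```python
# A minimal sketch, assuming the MobileLLaMA chat checkpoint is published
# under the hypothetical repo id "mtgv/MobileLLaMA-1.4B-Chat".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mtgv/MobileLLaMA-1.4B-Chat"  # hypothetical id; verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Text-only generation with the MobileLLaMA language model.
prompt = "What is a vision language model?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```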
