MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices

We present MobileVLM, a competent multimodal vision language model (MMVLM) targeted to run on mobile devices. It is an amalgamation of mobile-oriented architectural designs and techniques, comprising a set of language models at the scale of 1.4B and 2.7B parameters trained from scratch, a multimodal vision model pre-trained in the CLIP fashion, and cross-modality interaction via an efficient projector. We evaluate MobileVLM on several typical VLM benchmarks, where our models perform on par with a few much larger models. More importantly, we measure the inference speed on both a Qualcomm Snapdragon 888 CPU and an NVIDIA Jetson Orin GPU, obtaining state-of-the-art speeds of 21.5 tokens and 65.3 tokens per second, respectively.

MobileVLM Architecture

Figure 1. The MobileVLM architecture (right) uses MobileLLaMA as its language model. It takes the image $\mathbf{X}_v$ and the language instruction $\mathbf{X}_q$ as inputs and produces the language response $\mathbf{Y}_a$ as output. LDP refers to a lightweight downsample projector (left).
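For intuition, the following is a rough, illustrative PyTorch sketch of what a lightweight downsample projector does: an MLP maps the vision encoder's patch features into the LLM embedding space, and a stride-2 depthwise convolution then shrinks the visual token grid by 2x per axis (e.g., 576 tokens down to 144). This is only an approximation of the idea, not the exact LDP defined in the paper, and the dimensions below (1024 for the vision features, 2048 for the LLM) are placeholder assumptions.

import torch
import torch.nn as nn

class DownsampleProjectorSketch(nn.Module):
    """Illustrative downsample projector; NOT the exact LDP from the paper."""

    def __init__(self, vision_dim=1024, llm_dim=2048):
        super().__init__()
        # Pointwise projection from vision-encoder width to LLM width.
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        # Depthwise 3x3 convolution with stride 2 halves height and width,
        # so a 24x24 grid of 576 visual tokens becomes 12x12 = 144 tokens.
        self.downsample = nn.Conv2d(
            llm_dim, llm_dim, kernel_size=3, stride=2, padding=1, groups=llm_dim
        )

    def forward(self, x):  # x: (batch, num_tokens, vision_dim)
        b, n, _ = x.shape
        h = w = int(n ** 0.5)                        # assume a square token grid
        x = self.mlp(x)                              # (b, n, llm_dim)
        x = x.transpose(1, 2).reshape(b, -1, h, w)   # (b, llm_dim, h, w)
        x = self.downsample(x)                       # (b, llm_dim, h/2, w/2)
        return x.flatten(2).transpose(1, 2)          # (b, n/4, llm_dim)

tokens = torch.randn(1, 576, 1024)                   # e.g. 576 patch tokens
print(DownsampleProjectorSketch()(tokens).shape)     # torch.Size([1, 144, 2048])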

Weights Release and Usage

We release the MobileLLaMA weights in PyTorch format so that they can be conveniently used with the Hugging Face transformers library. Our checkpoint weights are licensed permissively under the Apache 2.0 license.

Install

  • Clone this repository and navigate to the MobileVLM folder
git clone https://github.com/Meituan-AutoML/MobileVLM.git
cd MobileVLM

MobileLLaMA weights

PyTorch weights for Hugging Face transformers are hosted on the Hugging Face Hub (for example, mtgv/MobileLLaMA-1.4B-Base, used in the example below).

Example of MobileLLaMA model inference:

import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# MobileLLaMA base checkpoint on the Hugging Face Hub
model_path = 'mtgv/MobileLLaMA-1.4B-Base'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
# Load in half precision; device_map='auto' places the model on available devices.
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))

For more advanced usage, please follow the transformers LLaMA documentation.

Evaluating MobileLLaMA with LM-Eval-Harness

The model can be evaluated with lm-eval-harness.
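As a hedged starting point, the snippet below shows one way this might look using the harness's Python entry point. It assumes a recent lm-evaluation-harness release (installable via pip install lm-eval) that exposes lm_eval.simple_evaluate; the task list and batch size are arbitrary examples, not the benchmark suite used in the paper.

import lm_eval

# Sketch: evaluate the released base checkpoint on two example tasks.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mtgv/MobileLLaMA-1.4B-Base,dtype=float16",
    tasks=["arc_easy", "hellaswag"],
    batch_size=8,
)
print(results["results"])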

MobileVLM weights

Reference

If you find MobileVLM or MobileLLaMA useful in your research or applications, please cite them using the following BibTeX:

@misc{chu2023mobilevlm,
      title={MobileVLM : A Fast, Reproducible and Strong Vision Language Assistant for Mobile Devices}, 
      author={Xiangxiang Chu and Limeng Qiao and Xinyang Lin and Shuang Xu and Yang Yang and Yiming Hu and Fei Wei and Xinyu Zhang and Bo Zhang and Xiaolin Wei and Chunhua Shen},
      year={2023},
      eprint={2312.16886},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
