If you make use of our work, please cite our repo:
@misc{cocchi2024llavamore,
title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}},
author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
url={https://github.com/aimagelab/LLaVA-MORE},
year={2024}
}
- [2024/08/16] 📌 Improved LLaVA-MORE 8B models with more advanced image backbones.
- [2024/08/01] 🔥 First release of our LLaVA-MORE 8B, based on LLaMA 3.1.
- [2024/08/01] 🔎 If you are interested in this area of research, check out our survey on the revolution of Multimodal LLMs, recently published in ACL (Findings).
- [2024/08/01] 📚 Check out the latest research from AImageLab.
LLaVA-MORE enhances the well-known LLaVA architecture by integrating, for the first time, LLaMA 3.1 as the language model. We are publicly releasing the stage-one and stage-two checkpoints for the first model with 8B parameters.
To further support the research community in enhancing Multimodal LLM performance, we are also releasing the training code and scripts for distributed training.
Remember to star the repository to stay updated on future releases 🤗!
In this section, we present the performance of our model compared to other versions of LLaVA across different multimodal datasets.
Model Name | Text-VQA* | Science-QA | AI2D | SEED-vid | SEED-all | SEED-img | MMMU | MMBench-Cn | MMBench-En | POPE | GQA | MME-P | MME-C |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LLaVA-v1.5-7B | 58.2 | 69.0 | 56.4 | 42.0 | 61.6 | 66.8 | 34.2 | 56.5 | 65.3 | 85.6 | 62.4 | 1474.3 | 314.6 |
LLaVA-v1.5-LLaMA3-8B | 57.6 | 74.2 | 60.7 | 42.0 | 64.3 | 70.1 | 37.3 | 65.4 | 70.3 | 85.4 | 63.5 | 1544.4 | 330.3 |
LLaVA-MORE-8B | 58.4 | 76.3 | 61.8 | 42.4 | 64.1 | 69.8 | 39.4 | 68.2 | 72.4 | 85.1 | 63.6 | 1531.5 | 353.3 |
LLaVA-MORE-8B-S2 | 60.9 | 76.7 | 62.2 | 42.3 | 64.2 | 69.9 | 38.7 | 65.8 | 71.1 | 86.5 | 64.5 | 1563.8 | 293.2 |
LLaVA-MORE-8B-siglip | 62.1 | 77.5 | 63.6 | 46.1 | 65.8 | 71.0 | 39.8 | 68.2 | 73.1 | 86.1 | 64.6 | 1531.0 | 315.4 |
LLaVA-MORE-8B-S2-siglip | 63.5 | 77.1 | 62.7 | 44.7 | 65.5 | 71.0 | 40.0 | 68.0 | 71.8 | 86.0 | 64.9 | 1541.4 | 336.4 |
* The TextVQA results are computed with OCR tokens in the input prompt.
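For reference, our checkpoints can be scored with the lmms-eval library mentioned in the acknowledgements below. The command is only a sketch, assuming the standard lmms-eval command-line interface; the number of processes, task name, and checkpoint identifier are illustrative and may need to be adapted to your setup.
accelerate launch --num_processes=4 -m lmms_eval --model llava --model_args pretrained=aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning --tasks gqa --batch_size 1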
In the table below, you can find links to our 🤗 Hugging Face models.
Model Name | 🤗 Hugging Face | Summary |
---|---|---|
LLaVA_MORE-llama_3_1-8B-pretrain | Hugging Face Model | Pretrained on LCS-558K, with LLaMA 3.1 8B Instruct as the LLM backbone |
LLaVA_MORE-llama_3_1-8B-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K, with LLaMA 3.1 8B Instruct as the LLM backbone |
LLaVA_MORE-llama_3_1-8B-S2-pretrain | Hugging Face Model | Pretrained on LCS-558K, with LLaMA 3.1 8B Instruct as the LLM backbone and the S2 multi-resolution scheme |
LLaVA_MORE-llama_3_1-8B-S2-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K, with LLaMA 3.1 8B Instruct as the LLM backbone and the S2 multi-resolution scheme |
LLaVA_MORE-llama_3_1-8B-siglip-pretrain | Hugging Face Model | Pretrained on LCS-558K, with LLaMA 3.1 8B Instruct as the LLM backbone and SigLIP as the visual backbone |
LLaVA_MORE-llama_3_1-8B-siglip-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K, with LLaMA 3.1 8B Instruct as the LLM backbone and SigLIP as the visual backbone |
LLaVA_MORE-llama_3_1-8B-S2-siglip-pretrain | Hugging Face Model | Pretrained on LCS-558K, with LLaMA 3.1 8B Instruct as the LLM backbone, SigLIP as the visual backbone, and the S2 scheme |
LLaVA_MORE-llama_3_1-8B-S2-siglip-finetuning | Hugging Face Model | Finetuned on LLaVA-Instruct-665K, with LLaMA 3.1 8B Instruct as the LLM backbone, SigLIP as the visual backbone, and the S2 scheme |
To create the conda environment named more, use the following instructions. This environment provides all the packages needed to run the code in this repo.
conda create -n more python==3.8.16
conda activate more
pip install -r requirements.txt
Note that the requirements are heavily inspired by the original LLaVA repo.
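After installation, an optional sanity check can confirm that PyTorch sees your GPUs. The command below is only an illustrative check and is not part of the official setup.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"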
To help the community train complex systems in distributed scenarios, we are publicly releasing not only the source code but also the bash scripts needed to train LLaVA-MORE on HPC facilities with a SLURM scheduler.
To further extend the reproducibility of our approach, we are also releasing the wandb logs of the training runs.
Pretraining
sbatch scripts/more/11_pretrain_llama_31_acc_st_1.sh
Finetuning
sbatch scripts/more/12_finetuning_llama_31_acc_st_1.sh
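Since the finetuning stage builds on the pretraining output, the two jobs can also be chained with a standard SLURM dependency so that finetuning starts only after pretraining completes successfully. This is just a convenience sketch using the standard --parsable and --dependency SLURM options with the released scripts:
PRETRAIN_JOB=$(sbatch --parsable scripts/more/11_pretrain_llama_31_acc_st_1.sh)
sbatch --dependency=afterok:${PRETRAIN_JOB} scripts/more/12_finetuning_llama_31_acc_st_1.sh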
As mentioned before, LLaVA-MORE introduces the use of LLaMA 3.1 within the LLaVA architecture for the first time. However, this repository goes beyond that single enhancement.
We have also incorporated the ability to use different visual backbones, such as SigLIP, and various methods for managing image resolutions (S2).
Considering that, you can view this repo as an effort to expand the study of Multimodal LLMs in multiple directions and as a starting point for adding new features that improve the connection between images and language.
You can find more references in the scripts/more folder.
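A quick way to see which visual backbone and resolution scheme each released script uses is to grep them directly. This assumes the scripts follow the original LLaVA training arguments, where the image encoder is selected through the --vision_tower flag; changing that value is the natural entry point for experimenting with other encoders.
grep -R "vision_tower" scripts/more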
You can try our LLaVA-MORE on the image-to-text task by running the following script.
python -u llava/eval/run_llava.py
If you run into out-of-memory problems, consider loading the model weights in 8-bit (load_in_8bit=True).
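For reference, a full invocation might look like the following sketch, assuming the script keeps the original LLaVA command-line interface (--model-path, --image-file, and --query are the upstream argument names); the checkpoint identifier follows the naming in the table above and the image path is a placeholder.
python -u llava/eval/run_llava.py --model-path aimagelab/LLaVA_MORE-llama_3_1-8B-finetuning --image-file path/to/image.jpg --query "Describe this image in detail."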
We thank the LLaVA team for open-sourcing a modular codebase to extend and train different models within the LLaVA family. We are also happy users of the lmms-eval library, which has significantly reduced the evaluation time of our checkpoints across different datasets.
We also thank CINECA for providing the high-performance computing resources used to train LLaVA-MORE. This work is supported by the PNRR-M4C2 project FAIR - Future Artificial Intelligence Research and by the PNRR project ITSERR - Italian Strengthening of ESFRI RI Resilience.
In case you face any issues or have any questions, please feel free to create an issue. Additionally, we welcome you to open a pull request to integrate new features and contribute to our project.