Hyper-Pretrained Transformers (HPT) is a novel multimodal LLM framework from HyperGAI for training vision-language models that can understand both textual and visual inputs. HPT achieves highly competitive results against state-of-the-art models on a variety of multimodal LLM benchmarks. This repository contains the open-source inference code for reproducing the evaluation results of HPT Air on different benchmarks. The model weights are released in our Hugging Face repository.
For more details and exciting examples of HPT, please read our technical blog post.
- Overview of Model Architecture
- Quick Start
- Benchmark Evaluations
- Pretrained Models Used
- Disclaimer and Responsible Use
- Contact Us
- License
- Acknowledgements
pip install -r requirements.txt
pip install -e .
You can download the model weights from Hugging Face into your [Local Path] and set the global_model_path to your [Local Path] in the model config file:
git lfs install
git clone https://huggingface.co/HyperGAI/HPT [Local Path]
Alternatively, you can set global_model_path directly to the Hugging Face repo id ('HyperGAI/HPT').
You can also set other strategies in the config file that are different from our default settings.
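For reference, a minimal sketch of what such a setting might look like, assuming a Python-style config file (the exact file format and key names in the shipped configs may differ, so check the config files in this repo for the real structure):

```python
# Sketch of a config entry -- format and surrounding keys are assumptions.
global_model_path = 'HyperGAI/HPT'   # Hugging Face repo id, or a local path such as '/path/to/HPT'
```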
After setting up the config file, launch the model demo for a quick trial:
python demo/demo.py --image_path [Image] --text [Text] --model [Config]
Example:
python demo/demo.py --image_path demo/einstein.jpg --text 'Question: What is unusual about this image?\nAnswer:' --model hpt-air-demo
You can experiment with different prompts here to improve the quality of the answers.
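For example, a more open-ended prompt (the wording below is only an illustration):
python demo/demo.py --image_path demo/einstein.jpg --text 'Question: Describe this image in detail and explain what is unusual about it.\nAnswer:' --model hpt-air-demo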
Launch the model for benchmark evaluation:
torchrun --nproc-per-node=8 run.py --data [Dataset] --model [Config]
Example:
torchrun --nproc-per-node=8 run.py --data MMMU_DEV_VAL --model hpt-air-mmmu
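If fewer GPUs are available, adjust --nproc-per-node accordingly; for example, on a single GPU:
torchrun --nproc-per-node=1 run.py --data MMMU_DEV_VAL --model hpt-air-mmmu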
[1] Unless otherwise noted, all listed results are on the test set. You may need to submit the result file to the benchmark's evaluation server to obtain the final score.
- Pretrained LLM: Yi-6B-Chat
- Pretrained Visual Encoder: clip-vit-large-patch14-336
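Both components are publicly available on Hugging Face and can be inspected on their own with the transformers library; the sketch below only loads the two building blocks and does not reproduce how HPT combines them:

```python
# Load the two pretrained building blocks directly from Hugging Face.
# This only inspects the components; it does not reproduce HPT's own wiring.
from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPImageProcessor, CLIPVisionModel

# Pretrained LLM backbone (Yi-6B-Chat)
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B-Chat")
llm = AutoModelForCausalLM.from_pretrained("01-ai/Yi-6B-Chat")

# Pretrained visual encoder (CLIP ViT-L/14, 336px input resolution)
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
vision_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
```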
Note that HPT Air is a quick open release of our models to facilitate open, responsible AI research and community development. It does not include any moderation mechanism and provides no guarantees on its outputs. We hope to work with the community to add guardrails so that the model can be adopted in real-world applications requiring moderated outputs.
- Contact: [email protected]
- Follow us on Twitter.
- Follow us on LinkedIn.
- Visit our website to learn more about us.
This project is released under the Apache 2.0 license. Parts of this project contain code and models from other sources, which are subject to their respective licenses; you must comply with those licenses if you want to use them for commercial purposes.
The evaluation code for this demo was extended from the VLMEvalKit project. We also thank OpenAI for open-sourcing their visual encoder models and 01.AI for open-sourcing their large language models.