Hi! Thank you for contributing this dataset! Really cool stuff! I was wondering, are you planning to release the code you used to create it?
Thank you for your interest in our work! We'll release the pipeline for recaptioning the dataset soon.
Also, we have already released our recaptioning model (a LLaMA3-powered LLaVA) here: https://huggingface.co/tennant/llava-llama-3-8b-hqedit
Thank you for releasing the recaptioning model weights! Did you run it with Transformers, or with the original LLaVA repo? When I tried loading it in Transformers, the library complained about a missing preprocessor_config.json file and also reported that the model type is set to llava_llama, which it does not recognize.
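For reference, here is a minimal sketch of the load attempt that surfaces both errors (the call order is just illustrative; each call fails on its own):

```python
from transformers import AutoConfig, AutoProcessor

model_id = "tennant/llava-llama-3-8b-hqedit"

# Raises an "unrecognized model type" error: the repo's config.json
# declares model_type "llava_llama", which Transformers has no
# registered architecture mapping for.
config = AutoConfig.from_pretrained(model_id)

# Fails separately because the repo ships no preprocessor_config.json,
# so no image processor can be constructed.
processor = AutoProcessor.from_pretrained(model_id)
```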
Thanks for your interest, @pbaylies!
We used a slightly modified version of the original LLaVA repo on GPUs (we changed the conversation template to LLaMA3's) and a JAX implementation on TPUs for inference. We'll release both inference pipelines in a few days, so stay tuned!
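In the meantime, here is a rough sketch of what the GPU path looks like with the modified repo. Note that the `llama_3` template key and the example image path are illustrative: the stock LLaVA repo does not ship a LLaMA-3 conversation template, which is exactly the part we patched.

```python
import torch
from PIL import Image
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

model_path = "tennant/llava-llama-3-8b-hqedit"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, model_base=None, model_name=get_model_name_from_path(model_path)
)

# "llama_3" is the template we added to llava/conversation.py;
# the key name here is illustrative.
conv = conv_templates["llama_3"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nDescribe this image in detail.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# Preprocess the image and tokenize the prompt, splicing in the image token.
image = Image.open("example.jpg")  # placeholder input
image_tensor = process_images([image], image_processor, model.config).to(
    model.device, dtype=torch.float16
)
input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```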