[Bugfix] Fix Idefics3 fails during multi-image inference #11080
Conversation
Signed-off-by: B-201 <[email protected]>
…ics3-multi-image
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
So the shape of the pixel values can differ here? If so, can you update the documented shape of the inputs accordingly?
Signed-off-by: B-201 <[email protected]>
Thank you for pointing that out. I've made the changes.
LGTM!
Thanks for your fix
…t#11080) Signed-off-by: B-201 <[email protected]> Signed-off-by: Akshat Tripathi <[email protected]>
…t#11080) Signed-off-by: B-201 <[email protected]>
Currently, during inference with Idefics3, if the image sizes vary across prompts, the following error might occur:
The following code can reproduce this error:
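A minimal reproduction sketch, assuming the HuggingFaceM4/Idefics3-8B-Llama3 checkpoint and vLLM's offline multi-image API; the prompt template and image sizes here are illustrative, not the exact snippet from this report:

```python
from PIL import Image

from vllm import LLM, SamplingParams

# Two prompts whose images have different sizes; the varying sizes across
# prompts are what trigger the failure described above.
images_a = [Image.new("RGB", (448, 448)), Image.new("RGB", (448, 448))]
images_b = [Image.new("RGB", (1280, 960)), Image.new("RGB", (640, 480))]

llm = LLM(
    model="HuggingFaceM4/Idefics3-8B-Llama3",  # assumed checkpoint
    limit_mm_per_prompt={"image": 2},
)

# Idefics3-style prompt with two image placeholders (template is illustrative).
prompt = (
    "<|begin_of_text|>User:<image><image>Describe the two images."
    "<end_of_utterance>\nAssistant:"
)

outputs = llm.generate(
    [
        {"prompt": prompt, "multi_modal_data": {"image": images_a}},
        {"prompt": prompt, "multi_modal_data": {"image": images_b}},
    ],
    SamplingParams(max_tokens=64),
)
for output in outputs:
    print(output.outputs[0].text)
```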
This fix resolves the issue. I have verified it with:
pytest tests/models/decoder_only/vision_language/test_models.py -k "idefics3"
It passed in my local environment.