Supporting Vision models from Groq #65
Comments
@VedantR3907, the issue is due to the way litellm validates whether a given model has vision capability. litellm maintains a list of models with their properties and capabilities in a static JSON file, and the llama 3.2 models (including the vision models) have not been added to it. For now, you can try uninstalling zerox and installing from this fork (#40), and pass
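For reference, litellm exposes a supports_vision helper that reads from that same bundled capability map, so you can see the check fail directly (model names below are just examples):

```python
import litellm

# litellm decides vision capability from its bundled model capability map
# (a static JSON shipped with the package). A model missing from that map,
# such as the llama 3.2 vision previews at the time, reports False here
# even though the backend actually supports image input.
print(litellm.supports_vision(model="groq/llama-3.2-11b-vision-preview"))
print(litellm.supports_vision(model="gpt-4o"))
```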
@VedantR3907, there was an issue where all the kwargs were being passed to litellm; I have fixed that. Remove and reinstall pyzerox using the same pip command shared earlier. However, this time there is a new error: it looks like litellm hasn't added support for vision models from Groq.
@VedantR3907, with the latest litellm version, 1.50.1 (we are using a lower version in pyzerox), I can get image prompting to work with llama 3.2 vision, but the current pyzerox backend implementation uses a system prompt for the instructions, which the Groq backend doesn't support alongside image input.
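To illustrate the incompatibility, here is a rough sketch of the two request shapes when calling litellm directly; the prompt text and base64 payloads are placeholders, and the first shape only approximates what the pyzerox backend builds:

```python
import litellm

# Roughly what the current pyzerox backend sends: a separate system message
# plus an image-only user turn. Groq's llama 3.2 vision endpoint rejects this
# combination of a system prompt and image input.
failing_messages = [
    {"role": "system", "content": "Convert this page to markdown."},
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "data:image/png;base64,<...>"}},
        ],
    },
]

# What Groq accepts: the instructions travel as plain text inside the same
# user turn as the image.
working_messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Convert this page to markdown."},
            {"type": "image_url", "image_url": {"url": "data:image/png;base64,<...>"}},
        ],
    },
]

response = litellm.completion(
    model="groq/llama-3.2-11b-vision-preview",
    messages=working_messages,
)
print(response.choices[0].message.content)
```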
@pradhyumna85, I made changes in modellitellm.py and it works now. Currently I am still passing the same system prompt as text for the Groq models, the one that is used for all the other models. We could change that, because the bigger Groq models work perfectly with it, while the smaller models are not perfect but still good. I only changed the `_prepare_messages` function in modellitellm.py; a sketch of the change is below.
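The original snippet did not survive in this thread, but a minimal sketch of the kind of change described might look like the following; the method signature, the "groq/" prefix check, and the `_encode_image` helper are assumptions, not the exact code from modellitellm.py:

```python
# Sketch only: for Groq models, send the system prompt as plain text inside
# the user message content list instead of as a separate "system" message.
async def _prepare_messages(self, image_path: str) -> list:
    # Encode the page image as a base64 data URL (helper assumed to exist).
    base64_image = await self._encode_image(image_path)
    image_part = {
        "type": "image_url",
        "image_url": {"url": f"data:image/png;base64,{base64_image}"},
    }

    if self.model.startswith("groq/"):
        # Groq vision endpoints reject a separate system message when an
        # image is present, so fold the instructions into the user turn.
        user_content = [
            {"type": "text", "text": self._system_prompt},
            image_part,
        ]
        return [{"role": "user", "content": user_content}]

    # Default behaviour for other providers: keep the system prompt separate.
    return [
        {"role": "system", "content": self._system_prompt},
        {"role": "user", "content": [image_part]},
    ]
```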
Can you share what changes you made, @VedantR3907?
@MANOJ21K, see the code I shared above. I passed the system prompt for the Groq models as the user's message, within the user_content list. Copy-paste the code above and you will be able to use the system prompt written by @pradhyumna85.
@pradhyumna85, can you share the modellitellm.py and the py-zerox version?
I discovered this library through a blog post on Medium and explored it alongside other libraries I have experience with. To address this issue, I suggest being flexible about the litellm version that is used. There is already an open PR addressing this, and I haven't seen any problems with the newer versions.
I tried using vision models like llama-3.2-90b-vision-preview, llama-3.2-11b-vision-preview, and llava-v1.5-7b-4096-preview, but they all show the same error.
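For context, a minimal sketch of how such a model would be passed to zerox, following the usual async zerox() usage from the README; the file path, output directory, and API key are placeholders:

```python
import asyncio
import os

from pyzerox import zerox

os.environ["GROQ_API_KEY"] = "your-groq-api-key"  # placeholder

async def main():
    # Groq models are addressed through litellm's "groq/" prefix.
    result = await zerox(
        file_path="document.pdf",                   # placeholder input file
        model="groq/llama-3.2-90b-vision-preview",  # one of the models tried above
        output_dir="./output",
    )
    return result

print(asyncio.run(main()))
```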