
LLM video understanding #10720

Open
4 of 5 tasks
Dongnc1017 opened this issue Nov 15, 2024 · 4 comments
Labels
💪 enhancement New feature or request

Comments

@Dongnc1017

Self Checks

  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

Version 0.11.1 adds support for LLM video understanding. According to #9828, the qwen model requires users to submit a work order (support ticket) application and only supports URL-based video input. Could you tell me how to do this and where to submit a work order?
In addition, which qwen model was used to test video understanding support?

2. Additional context or comments

No response

3. Can you help us with this feature?

  • I am interested in contributing to this feature.

dosubot bot commented Nov 15, 2024

I found two similar discussions that might be relevant to your question about LLM video understanding in Dify:

  1. How to use/process video files in workflow? - This discussion is still open and might provide insights into processing video files within Dify workflows.

  2. Not support for mp4? - This is another open discussion that addresses issues related to video file support, specifically MP4 format.

Regarding your specific questions about submitting a work order application for qwen mode and the model used for testing qwen with video understanding, these discussions might not directly address them. You might need to check the official documentation or reach out to the maintainers for detailed guidance on submitting work orders and the specific models used.

To continue talking to Dosu, mention @dosu.

@dosubot dosubot bot added the 💪 enhancement New feature or request label Nov 15, 2024
@hjlarry
Contributor

hjlarry commented Nov 15, 2024

the doc is here

@Copilotes

the doc is here

What if the qwen-vl model is deployed locally with Xinference?

@hjlarry
Contributor

hjlarry commented Nov 16, 2024

What if the qwen-vl model is deployed locally with Xinference?

Not supported yet. It seems Xinference uses OpenAI's API format, and OpenAI's API does not support video input yet.
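For context, here is a minimal sketch of what a URL-based video request to a qwen-vl model served through an OpenAI-compatible endpoint might look like. The "video_url" content type and the "qwen-vl-max" model name are assumptions for illustration and are not confirmed by this thread; the only point taken from the discussion is that qwen's video input is URL-based.

```python
import json

def build_video_chat_payload(model: str, prompt: str, video_url: str) -> dict:
    """Build an OpenAI-compatible chat payload carrying a video by URL.

    The 'video_url' content part is a hypothetical extension of the
    OpenAI message format, mirroring how 'image_url' parts are shaped.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    # Per the thread above, qwen only accepts video by URL,
                    # not as an uploaded/base64 file.
                    {"type": "video_url", "video_url": {"url": video_url}},
                ],
            }
        ],
    }

payload = build_video_chat_payload(
    "qwen-vl-max",  # assumed model name, check the provider's model list
    "Describe what happens in this video.",
    "https://example.com/clip.mp4",
)
print(json.dumps(payload, indent=2))
```

This only constructs the request body; actually sending it requires an endpoint that understands the video content type, which (per the comment above) a locally deployed Xinference server does not currently provide.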

3 participants