
Batch Processing not implemented for LlavaStreamGenerator #216

Open
rahulthakur319 opened this issue Aug 12, 2024 · 0 comments
Currently, the LlavaStreamGenerator function in tinychat/stream_generators/llava_stream_gen.py processes inputs one at a time. To improve throughput, we should add batch processing so the function can handle multiple inputs simultaneously, which could yield significant speed improvements.

The quick action plan is to adjust the main generation loop to work with multiple sequences (a rough sketch is below). Before starting, I wanted to confirm: are there any model-specific considerations for batch processing with VILA models?
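As a starting point, here is a minimal, hypothetical sketch of what a batched greedy-decoding loop could look like. It assumes a HuggingFace-style causal LM interface (input_ids / attention_mask / past_key_values) rather than the actual tinychat or VILA internals; `batched_generate` and all names in it are illustrative, not existing APIs in this repo.

```python
# Hypothetical sketch of a batched greedy-decoding loop, NOT the tinychat API.
# Assumes a HuggingFace-style causal LM and tokenizer; adapt to the real
# LlavaStreamGenerator signature (image features, streaming callbacks, etc.).
import torch


@torch.inference_mode()
def batched_generate(model, tokenizer, prompts, max_new_tokens=128):
    # Left-pad so the last token of every sequence is aligned at the end,
    # which keeps incremental decoding with a KV cache simple.
    tokenizer.padding_side = "left"
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    device = next(model.parameters()).device
    enc = tokenizer(prompts, return_tensors="pt", padding=True)
    input_ids = enc["input_ids"].to(device)
    attention_mask = enc["attention_mask"].to(device)

    outputs = [[] for _ in prompts]                  # generated ids per sequence
    finished = torch.zeros(len(prompts), dtype=torch.bool)
    past_key_values = None
    next_input = input_ids

    for _ in range(max_new_tokens):
        out = model(
            input_ids=next_input,
            attention_mask=attention_mask,
            past_key_values=past_key_values,
            use_cache=True,
        )
        past_key_values = out.past_key_values
        next_tokens = out.logits[:, -1, :].argmax(dim=-1)  # greedy pick per row

        for i, tok in enumerate(next_tokens.tolist()):
            if not finished[i]:
                outputs[i].append(tok)
        finished |= next_tokens.eq(tokenizer.eos_token_id).cpu()
        if finished.all():
            break

        # Feed only the new tokens back in; the KV cache holds the context.
        next_input = next_tokens.unsqueeze(-1)
        attention_mask = torch.cat(
            [attention_mask, torch.ones_like(next_input)], dim=-1
        )

    return [tokenizer.decode(t, skip_special_tokens=True) for t in outputs]
```

For the streaming case, the per-step loop would additionally yield the newly decoded tokens for each unfinished sequence instead of only returning the final strings.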
