Filter out input ids not being used in inference. Fixes issue #1294 #1310

Open
wants to merge 2 commits into base: main
Conversation

@priyank9320 priyank9320 commented Dec 2, 2024

This pull request resolves the following error, which arises when using outlines with the Llama-3.2-11B-Vision model:

IndexError: index 128256 is out of bounds for dimension 1 with size 128256

The error occurs because the Llama vision processor adds the image token (id 128256), which is not used during inference. This token still ends up in the allowed_tokens tensor, but the logits tensor has no entry at that index, so indexing the logits with it fails. Fixes issue #1294
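The mismatch can be sketched without torch (the vocabulary size and image token id are taken from the traceback above; the other token ids are illustrative):

```python
# Minimal sketch of the bug and the fix, assuming plain Python lists
# in place of torch tensors.
VOCAB_SIZE = 128256       # logits have shape (..., 128256), valid indices 0..128255
IMAGE_TOKEN_ID = 128256   # extra special token added by the vision processor

# Hypothetical allowed-token set produced by the guide; it contains the
# image token even though the logits have no entry for it.
allowed_tokens = [128000, 128006, IMAGE_TOKEN_ID]

# Without filtering, indexing logits at IMAGE_TOKEN_ID raises IndexError.
# Dropping ids that fall outside the logits' last dimension avoids it.
filtered = [t for t in allowed_tokens if t < VOCAB_SIZE]
print(filtered)  # [128000, 128006]
```

In the actual tensor code the same idea is a boolean-mask index on allowed_tokens against the size of the logits' last dimension.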

@@ -110,6 +111,9 @@ def process_logits(
allowed_tokens = self.guide.get_next_instruction(guide_state).tokens.to(
mask.device, non_blocking=True
)
        allowed_tokens = allowed_tokens[allowed_tokens < logits.shape[-1]]
Member


Is it a safe assumption or is there a more direct way to get this information?
