Don't try to add special tokens to the matcher in XGrammar. #11060
base: main
Conversation
Signed-off-by: Jeff Cook <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
🚀
My hunch is that this felt like a bit of a hack
@Ubospica is there any specific reason why special tokens crash the engine here?
```diff
@@ -229,6 +229,7 @@ def __call__(self, input_ids: list[int],
                  scores: torch.Tensor) -> torch.Tensor:
         if self.ctx is None:
             self._ensure_ctx()
+        assert self.ctx is not None
```
Hi there, I don't think this assert is necessary.
```diff
@@ -243,6 +244,9 @@ def __call__(self, input_ids: list[int],
         else:
             for i, matcher in enumerate(self.matchers):
                 if not matcher.is_terminated():
+                    if input_ids[
```
Move this to under L250 and use `sampled_token` to avoid the additional access here.
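For illustration, a minimal sketch of what that suggestion might look like inside the matcher loop. The loop shape is taken from the diff above; `self.tokenizer_info` and its `special_token_ids` attribute are assumptions about the wiring, not confirmed code from this PR:

```python
for i, matcher in enumerate(self.matchers):
    if not matcher.is_terminated():
        # Read the sampled token once (the reviewer's suggestion),
        # rather than indexing input_ids a second time.
        sampled_token = input_ids[-1]
        # Assumed: self.tokenizer_info is an xgrammar TokenizerInfo whose
        # special_token_ids lists ids the grammar matcher cannot accept.
        if sampled_token in self.tokenizer_info.special_token_ids:
            continue
        matcher.accept_token(sampled_token)
```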
Hi @sjuxax, thanks for your contribution to vLLM! I think these special tokens should be forbidden from being generated by xgrammar. As I discussed in this previous comment, I am wondering if you could provide the specific code that caused the error, or the model, prompt, and output structure. I believe that by investigating the reasons behind these token generations, we can further resolve the issue.
Prevent XGrammar from attempting to match on special tokens. XGrammar raises an assertion on the C++ side if we send it a special token for acceptance, which crashes the whole engine. This fix uses XGrammar's `tokenizer_info` to skip over these tokens before we submit them and trigger the crash.

FIX #11044; see that issue for the original traceback.

There may be a better way to do this than just continuing over the last token while it's special, but this is sufficient to resolve the crash for me, and I'm noticing no slowdown or additional issues in outputs. Thanks!
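For readers outside the vLLM codebase, here is a small self-contained sketch of the mechanism the description refers to. It assumes xgrammar's public `TokenizerInfo.from_huggingface` constructor and its `special_token_ids` attribute; the `should_skip` helper is hypothetical and only illustrates the check, it is not code from this PR:

```python
import xgrammar as xgr
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# TokenizerInfo records, among other things, which token ids are special
# tokens; the grammar matcher only understands regular vocabulary tokens.
tokenizer_info = xgr.TokenizerInfo.from_huggingface(tokenizer)

def should_skip(token_id: int) -> bool:
    # Feeding a special token to GrammarMatcher.accept_token() is what
    # trips the C++-side assertion, so the fix skips such tokens instead.
    return token_id in tokenizer_info.special_token_ids

# Expected to be True for gpt2's <|endoftext|> if xgrammar classifies the
# eos token as special (an assumption of this sketch).
print(should_skip(tokenizer.eos_token_id))
```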