fix: handle None input_other in token usage to stop cli from crashing #265
Conversation
Add defensive handling for cases where result.usage.input_other is None by setting it to 0. This prevents potential issues when the provider returns token counts as None and ensures the app does not crash.
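A minimal sketch of the defensive handling described above; the helper name is hypothetical, and `input_other` follows the naming used in this PR's description rather than any exact kosong or SDK type:

```python
def safe_input_other(usage) -> int:
    """Return usage.input_other, treating a missing or None count as 0.

    Hypothetical helper illustrating the None -> 0 coercion described in
    this PR; the actual change applies the same check inline.
    """
    value = getattr(usage, "input_other", None)
    return 0 if value is None else value
```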
According to the OpenAI SDK definition of
This is for the Anthropic API endpoint, but your point still makes sense, as the Anthropic API also requires an integer for input tokens. However, the Kimi CLI should not crash and exit because the usage input_token is missing; it should deal with the problem gracefully. Claude Code handles this, and the Chutes Anthropic API endpoint causes no such issue there. Therefore, to make the Kimi CLI more robust, either this change and/or the pull request in the Kosong library (MoonshotAI/kosong#21) should be made.
OK. Maybe you can contribute this to https://github.com/MoonshotAI/kosong/blob/main/src/kosong/contrib/chat_provider/anthropic.py, adding a comment noting that it's specific to the Chutes Anthropic API. Thanks!
Hey @stdrc Yes, the fix needs to be there. I have looked into this, and there's a bit more to it.

The reason for the crash is that, for streamed messages, not all event types include input tokens in their usage field. In fact, input tokens are only included in the first event, "MessageStartEvent"; the following events will not include them except under specific conditions. So most MessageDeltaEvent events will have the input token field set to None in their MessageDeltaUsage. You can find more info on this here: The current code is not set up to handle this and will therefore fail not only for the Chutes Anthropic API but most likely for all other providers using the Anthropic API, given that this is how the standard is supposed to work. This PR should fix the issue above and stop the Kimi CLI from crashing.

I have also identified a second issue: the MessageStartEvent is not used to update the input token usage (this is used for context management, so it will also likely lead to an error). Furthermore, the usage update from MessageDeltaEvent replaces the usage object instead of updating it, and although the data from MessageDeltaUsage is cumulative, it does not necessarily have all fields populated, including the input tokens field. So currently the usage info comes only from the MessageDeltaUsage, and the context limit is not calculated correctly (massively underestimated, since the initial input tokens are discarded/not counted), which will probably result in an error from the provider for the input being too long. I have a fix for this (use the MessageStartEvent data to set the initial usage, then update the usage data from MessageDeltaUsage correctly). I can update the current PR above or open a new one?

I am a big fan of the work your team has done with Kimi K2 Thinking, and I hope this helps the Kimi CLI get better support/adoption by the community. Keep up the great work!
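For reference, a rough sketch of the behavior described in this comment (seed usage from the MessageStartEvent, then merge rather than replace when MessageDeltaUsage arrives). The event shapes and helper names here are assumptions based on the Anthropic streaming format, not the actual kosong code:

```python
# Illustrative only: assumes Anthropic-style streaming events, where a
# "message_start" event carries the full initial usage (including
# input_tokens) and later "message_delta" events carry a cumulative but
# partially populated MessageDeltaUsage.
from dataclasses import dataclass


@dataclass
class TokenUsage:
    input_tokens: int = 0
    output_tokens: int = 0


def update_usage(usage: TokenUsage, event) -> TokenUsage:
    event_type = getattr(event, "type", None)
    if event_type == "message_start":
        # Seed the usage from the first event, coercing missing counts to 0
        # instead of carrying None forward.
        start = event.message.usage
        usage.input_tokens = getattr(start, "input_tokens", None) or 0
        usage.output_tokens = getattr(start, "output_tokens", None) or 0
    elif event_type == "message_delta":
        # Delta usage is cumulative but may omit fields (input_tokens is
        # often None), so only overwrite fields that are actually present
        # rather than replacing the whole usage object.
        delta = event.usage
        if getattr(delta, "input_tokens", None) is not None:
            usage.input_tokens = delta.input_tokens
        if getattr(delta, "output_tokens", None) is not None:
            usage.output_tokens = delta.output_tokens
    return usage
```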
Could you please push the fix for the second issue directly to MoonshotAI/kosong#21? I guess it would be better to have them fixed together. Thanks for your time!
That should be ready for you to review now. |

Related Issue
#264
Description
Added a check: if the input_other field is None, it is replaced with 0.