Support for returning Logits and Calculating Perplexity During Model Evaluation? #1314

Closed Answered by merrymercy
hxer7963 asked this question in Q&A
Returning logprobs and computing perplexity are both well supported. Some related docs:

  • Return logprob for a generation request
    # Whether to return logprobs.
    return_logprob: Optional[Union[List[bool], bool]] = None
    # The start location of the prompt for return_logprob.
    logprob_start_len: Optional[Union[List[int], int]] = None
    # The number of top logprobs to return.
    top_logprobs_num: Optional[Union[List[int], int]] = None
  • The full OpenAI API spec around logprob is supported.
  • There are other ways to use it in the frontend language:
    https://github.com/sgl-…
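Given the per-token logprobs returned by a generation request (via return_logprob above), perplexity follows directly: it is the exponential of the negative mean token log-probability. A minimal sketch — the token_logprobs list is assumed to have already been extracted from the server's response, and the exact response field names depend on the endpoint:

```python
import math

def perplexity_from_logprobs(token_logprobs):
    """Perplexity = exp(-mean(log p(token_i))).

    token_logprobs: per-token natural-log probabilities, e.g. pulled
    from a generation response requested with return_logprob=True.
    """
    if not token_logprobs:
        raise ValueError("need at least one token logprob")
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Example: four tokens, each with probability 0.5,
# gives a perplexity of exactly 2.0.
ppl = perplexity_from_logprobs([math.log(0.5)] * 4)
print(ppl)  # → 2.0
```

Setting logprob_start_len to 0 makes the server score the prompt tokens as well, which is what you want when evaluating perplexity over a full evaluation text rather than only over generated tokens.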

Answer selected by merrymercy