
Commit

clarify med/low presets for get trust score
jwmueller authored Aug 29, 2024
1 parent 5ed28d0 commit aafb204
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions cleanlab_studio/studio/trustworthy_language_model.py
@@ -62,8 +62,8 @@ class TLM:
  Higher presets have increased runtime and cost (and may internally consume more tokens).
  Reduce your preset if you see token-limit errors.
  Details about each preset are in the documentation for [TLMOptions](#class-tlmoptions).
- Avoid using "best" or "high" presets if you primarily want to get trustworthiness scores, and are less concerned with improving LLM responses.
- These presets have higher runtime/cost and are optimized to return more accurate LLM outputs, but not necessarily more reliable trustworthiness scores.
+ Avoid using "best" or "high" presets if you primarily want trustworthiness scores (i.e. are using `tlm.get_trustworthiness_score()` rather than `tlm.prompt()`), and are less concerned with improving LLM responses.
+ These "best" and "high" presets have higher runtime/cost, and are optimized to return more accurate LLM outputs, but not more reliable trustworthiness scores than the "medium" and "low" presets.
  options (TLMOptions, optional): a typed dict of advanced configuration options.
  Available options (keys in this dict) include "model", "max_tokens", "num_candidate_responses", "num_consistency_samples", "use_self_reflection".
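The guidance in the clarified docstring can be sketched as a simple selection rule. This is a purely illustrative helper, not part of the cleanlab_studio API; the function name and signature are hypothetical:

```python
# Hypothetical helper (NOT part of the cleanlab_studio API) illustrating the
# docstring's guidance: "best"/"high" presets only pay off when you want
# improved responses from tlm.prompt(); if you mainly call
# tlm.get_trustworthiness_score(), "medium" or "low" score just as reliably
# at lower runtime/cost.
def recommended_preset(want_improved_responses: bool) -> str:
    if want_improved_responses:
        # Optimized for more accurate LLM outputs; higher runtime/cost.
        return "best"
    # Trustworthiness scores are no less reliable at this preset.
    return "medium"
```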
