Commit

Merge pull request #300 from cleanlab/jwmueller-trustpreset
clarify users of get trust score should stick with med/low presets
mturk24 committed Aug 29, 2024
2 parents 5ed28d0 + aafb204 commit 1f8c95f
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions cleanlab_studio/studio/trustworthy_language_model.py
@@ -62,8 +62,8 @@ class TLM:
Higher presets have increased runtime and cost (and may internally consume more tokens).
Reduce your preset if you see token-limit errors.
Details about each preset are in the documentation for [TLMOptions](#class-tlmoptions).
- Avoid using "best" or "high" presets if you primarily want to get trustworthiness scores, and are less concerned with improving LLM responses.
- These presets have higher runtime/cost and are optimized to return more accurate LLM outputs, but not necessarily more reliable trustworthiness scores.
+ Avoid using "best" or "high" presets if you primarily want trustworthiness scores (i.e. are using `tlm.get_trustworthiness_score()` rather than `tlm.prompt()`), and are less concerned with improving LLM responses.
+ These "best" and "high" presets have higher runtime/cost, and are optimized to return more accurate LLM outputs, but not more reliable trustworthiness scores than the "medium" and "low" presets.
options (TLMOptions, optional): a typed dict of advanced configuration options.
Available options (keys in this dict) include "model", "max_tokens", "num_candidate_responses", "num_consistency_samples", "use_self_reflection".
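A minimal sketch of how the clarified preset advice applies in practice, assuming the `cleanlab_studio` client API around the time of this commit (the `Studio` entry point and the API key placeholder are assumptions; `tlm.prompt()`, `tlm.get_trustworthiness_score()`, and the preset names come from the docstring above):

```python
from cleanlab_studio import Studio

studio = Studio("<your_api_key>")  # hypothetical placeholder credential

# Scoring existing prompt/response pairs: per the updated docstring, stick
# with "medium" or "low" -- "best"/"high" cost more without yielding more
# reliable trustworthiness scores.
tlm = studio.TLM(quality_preset="medium")
score = tlm.get_trustworthiness_score(
    "What is the capital of France?", response="Paris"
)

# Generating improved LLM responses: this is the case "best"/"high"
# are optimized for.
tlm_gen = studio.TLM(quality_preset="high")
out = tlm_gen.prompt("What is the capital of France?")
# out includes both the LLM response and its trustworthiness score
```

This block requires a live Cleanlab Studio API key and network access, so it is a usage sketch rather than a self-verifying example.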
