Hi, I've just landed on this repo and saw that you have a section that reads "How do we make sure that the output is factual and not hallucinated?". Well, I might know a couple of things about it:
As long as the temperature is not zero, an LLM will not reproduce the same hallucinated response every time, whereas factual answers tend to stay consistent across samples.
I've been able to detect hallucinations by these means:

- Ask many times, and the more varied the phrasing of the question, the better:
  - Keep the temperature above 0 so the samples actually differ.
  - Change the LLM if possible (different weights = different biases).
  - Change the prompt.
- Have an LLM evaluate the different answers. You basically tell it to answer "yes" if the answers aren't consistent with each other, and that's how you get it to detect hallucinations for you (see the sketch after this list).
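
Here is a minimal sketch of that sample-then-judge loop, using the OpenAI Python SDK as an example client. The model names, the paraphrased questions, and the exact "yes"/"no" judging convention are my own placeholders, not something defined in this repo; swap in whatever client and models you actually use.

```python
# Rough sketch of the "sample many times, then judge consistency" idea.
# Assumes the OpenAI Python SDK (v1.x); model names and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(question: str, model: str, temperature: float) -> str:
    """Ask the question once and return the raw answer text."""
    resp = client.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content.strip()

def looks_hallucinated(question: str, paraphrases: list[str],
                       sample_models: list[str],
                       judge_model: str = "gpt-4o-mini") -> bool:
    """Sample several answers (varied prompt and model, temperature > 0),
    then ask a judge model whether they are mutually consistent."""
    answers = []
    for prompt in [question, *paraphrases]:
        for model in sample_models:
            answers.append(ask(prompt, model=model, temperature=0.9))

    numbered = "\n".join(f"{i + 1}. {a}" for i, a in enumerate(answers))
    verdict = ask(
        "Here are several answers to the same underlying question:\n"
        f"{numbered}\n\n"
        "If the answers are NOT factually consistent with each other, reply exactly 'yes'. "
        "If they are consistent, reply exactly 'no'.",
        model=judge_model,
        temperature=0.0,  # judge deterministically
    )
    return verdict.lower().startswith("yes")

# Example: flag the output as suspect if the sampled answers disagree.
if looks_hallucinated(
    "Who wrote the 1987 paper introducing X?",  # placeholder question
    paraphrases=["Which author published the original 1987 paper on X?"],
    sample_models=["gpt-4o-mini", "gpt-4o"],
):
    print("Answers are inconsistent; treat the output as possibly hallucinated.")
```

The design choice here is that the judge only has to spot disagreement, which is a much easier task than verifying facts from scratch.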
I hope this works.