(I posted a very similar question in the ollama repo. This question is about whether, and how, this can be achieved with the inference engine code that ships with Code Llama.)
For a research project, I am interested in exploring the effect of different prompts. The problem is that when I change the prompt even slightly and get a different result, I cannot tell how much of the change in the output is caused by the changed prompt and how much is caused by the random and pseudo-random effects of sampling mechanisms such as top-k, top-p, and temperature.
Is it possible, in principle, to get deterministic output? And is it technically possible in practice with the code provided alongside Code Llama?
Basically, I want responses such that the same prompt always generates the same output, at any temperature. Pseudo-randomness can and should be present, but I must be able to fix the seed, so that the only changes in the output are those caused by the prompt. Is that possible with Code Llama?
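For concreteness, here is roughly the kind of invocation I mean (a minimal sketch based on the examples in the codellama repo, launched via torchrun as those examples are; the `seed` argument to `Llama.build` and the assumption that `temperature=0.0` takes a greedy argmax path are my guesses about the provided inference code, not verified behavior):

```python
# Sketch only: assumes the inference code shipped in the codellama repo.
# The seed argument and the greedy behavior at temperature=0.0 are
# assumptions I am asking about, not confirmed features.
from llama import Llama

generator = Llama.build(
    ckpt_dir="CodeLlama-7b/",  # hypothetical local checkpoint directory
    tokenizer_path="CodeLlama-7b/tokenizer.model",
    max_seq_len=512,
    max_batch_size=1,
    seed=1,  # fix the PRNG seed so any sampling is reproducible across runs
)

results = generator.text_completion(
    ["def fibonacci(n):"],
    max_gen_len=128,
    temperature=0.0,  # would this force greedy (argmax) decoding?
    top_p=0.9,        # presumably irrelevant if decoding is greedy
)
print(results[0]["generation"])
```

Would two runs of a script like this, with an identical prompt and identical settings, be guaranteed to produce byte-identical output?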