Yes it is intentional.
Some inference APIs even expose a way to set the “temperature” - higher values mean more randomized (creative-feeling) output, lower values mean less randomness. A temperature of 0 will make the model deterministic.
Even at temperature 0 the model will not be deterministic, because the output depends on the seed used as well as things like numerical noise.
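For intuition on why temperature 0 is usually treated as deterministic: temperature divides the logits before the softmax, so as it approaches 0 the distribution collapses onto the single highest-logit token, and most implementations special-case 0 as greedy argmax decoding. A minimal sketch (the logits here are toy values, not from any real model):

```python
import math

def next_token_distribution(logits, temperature):
    """Return the sampling distribution over tokens at a given temperature.

    At temperature 0 we return the argmax index instead, mirroring how
    inference servers typically special-case 0 as greedy decoding.
    """
    if temperature == 0:
        # Greedy decoding: pick the highest-logit token outright,
        # no randomness involved at all.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits by 1/temperature, then softmax (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(next_token_distribution(logits, 1.0))   # fairly spread out
print(next_token_distribution(logits, 0.1))   # sharply peaked on index 0
print(next_token_distribution(logits, 0))     # greedy: just the argmax index
```

Note the caveat from the comment above still applies: even with greedy decoding, non-deterministic floating-point reduction order on GPUs can flip the argmax when two logits are very close.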
Yeah no, that’s not how this works.
Where in the process does that seed play a role, and what do you even mean by numerical noise?
Edit: I feel like I should add that I am very interested in learning more. If you can provide me with any sources to show that GPTs are inherently random I am happy to eat my own hat.
https://github.com/ollama/ollama/blob/main/docs/api.md#request-reproducible-outputs
LLMs are prompted with a seed. If you change the seed you get a different answer.
I appreciate the constructive comment.
Unfortunately the API docs are incomplete (insert obi wan meme here). The seed value is both optional and irrelevant when setting the temperature to 0. I just tested it.
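For anyone who wants to reproduce that test: a minimal sketch against a local Ollama instance, following the request shape from the linked docs. The model name and prompt are placeholders, and it assumes Ollama is running on the default port. The two requests differ only in seed, both at temperature 0; if the seed really is irrelevant at temperature 0, the two responses should match.

```shell
# Run A: seed 1, temperature 0
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": { "seed": 1, "temperature": 0 }
}' > run_a.json

# Run B: identical request except seed 2
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false,
  "options": { "seed": 2, "temperature": 0 }
}' > run_b.json

# Compare just the generated text of the two runs
diff <(jq -r .response run_a.json) <(jq -r .response run_b.json)
```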
Addendum:
The docs say
But what they should say is
Easy mistake to make