Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens across more than 4,000 runs, we find that the answer is “substantially, and unavoidably.” Even under optimal conditions (best model, temperature chosen specifically to minimize fabrication) the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.


I’m not good at math, so someone please help me.
If a model hallucinates 1% of the time for every question in a chat window that has 100 prompts in it, what is the chance of receiving a hallucination at some point in the chat?
If I understand you correctly: 63.4% odds of having at least one hallucination.
The simple way to calculate the odds of getting at least one error is to calculate the odds of having ZERO, and then take the complement.
If the odds of a single instance being an error are 1%, you have a 99% chance of having no error on any given prompt. Repeat that 100 times and it’s 99% of 99% of 99%… and so on. In other words, 0.99^100 ≈ 0.366. That’s the probability of getting zero errors 100 times in a row. The complement is 0.634, or 63.4%.
This is the same way you calculate the odds of N coin flips all coming up heads: 0.5^N. So the chance of getting 10 heads in a row is 0.5^10 ≈ 0.0977%, or 1 in 1024.
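For anyone who wants to check the arithmetic, here’s a quick Python sketch (the 1% per-prompt rate is just the figure used in this thread, not a measured number):

    # Probability of at least one error in n independent trials,
    # each with per-trial error probability p.
    def p_at_least_one(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    print(p_at_least_one(0.01, 100))  # ~0.634, i.e. 63.4%
    print(0.5 ** 10)                  # 10 heads in a row: ~0.000977, or 1 in 1024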
Edit: This is assuming independence of all 100 prompts, which is not generally true in a single chat window, where each prompt follows the last and retains both the previous prompts and answers in its context. As the paper explains, error rate tends to increase with context length. You should generally start a new chat rather than continue in an existing one if the previous context is not highly relevant.
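To get a feel for the non-independence point, here’s a toy model where the per-prompt error rate drifts upward as the chat grows. Both constants are invented for illustration; the paper only says the rate rises with context length:

    # Toy model: per-prompt error rate grows linearly with position
    # in the chat (base and growth are made-up numbers).
    base, growth, n = 0.01, 0.0002, 100

    p_no_error = 1.0
    for i in range(n):
        p_no_error *= 1 - (base + growth * i)

    print(1 - p_no_error)  # ~0.87, vs. ~0.63 with a flat 1% rate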
Thanks, I also wonder how context collapse affects the fabrication rate.
One in 100. However, that is simply a probability, so do not expect it to hold exactly for every 100 prompts.
For example, if you rolled a 100-sided die 100 times, it’s possible (though vanishingly unlikely) to roll a one every time. In practice you’d see a mix: some sessions with no wrong answers and others with several.
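Here’s a quick simulation of that die-rolling picture, again assuming a flat, independent 1% error rate per prompt:

    import random

    # Simulate 1,000 chat sessions of 100 prompts each and count
    # the errors in each session.
    random.seed(0)
    sessions = [sum(random.random() < 0.01 for _ in range(100))
                for _ in range(1000)]

    print(sum(s == 0 for s in sessions) / 1000)  # fraction with zero errors, ~0.37
    print(max(sessions))  # the worst session will typically have 4-6 errors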
The problem is that people trust these models implicitly, because they sound convincing and authoritative, and many are not equipped to vet the information being generated (notice I didn’t say “retrieved”).