
What happened here?
LLMs work by picking the next word* as the most likely candidate given their training and the context. Sometimes the model gets into a state where appending the picked word doesn’t meaningfully change its view of the context, so the most likely next word is the same one again. Then the same thing happens again and around we go. There are fail-safe mechanisms (like repetition penalties) that try to prevent this, but they don’t work perfectly.
*Token
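A toy sketch of that mechanism, for the curious. Everything here is made up for illustration (real LLMs work over tokens, not words, and their fail-safes are more elaborate): once the top pick stops changing the distribution, greedy decoding loops, and a repetition penalty only delays it.

```python
def toy_next_word_probs(context):
    """Hypothetical stand-in for an LLM's next-word distribution."""
    if context and context[-1] == "or":
        # Appending "or" leaves the model just as sure the next word is "or".
        return {"or": 0.9, "not": 0.1}
    return {"either": 0.2, "or": 0.6, "and": 0.2}

def greedy_decode(context, steps=8, repetition_penalty=1.0):
    for _ in range(steps):
        probs = dict(toy_next_word_probs(context))
        # Fail-safe: down-weight words already present in the context.
        for word in set(context):
            if word in probs:
                probs[word] /= repetition_penalty
        context.append(max(probs, key=probs.get))
    return context

print(greedy_decode(["either"]))
# ['either', 'or', 'or', 'or', 'or', 'or', 'or', 'or', 'or']
print(greedy_decode(["either"], repetition_penalty=10))
# ['either', 'or', 'not', 'and', 'or', 'or', 'or', 'or', 'or']
# The penalty breaks the loop for a moment, then the model relapses.
```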
That was the answer I was looking for. So it’s similar to the “seahorse” emoji case, but this time, at some point, he just glitched so that the most likely next word for this sentence is “or”, and after adding the “or” it’s also “or”, and after adding the next one it’s also “or”, and after an 11th one… you may just as we’ll commit, since that’s the same context as with 10.
Thanks!
He?
This is not a person and does not have a gender.
Chill dude. It’s a grammatical/translation error, not an ideological declaration. It’s an especially common mistake if your native language has grammatical gender. Everything has a gender in mine: “spoon” is a “she”, for example, but I’m not proposing to anyone soon. Not all hills are worth nitpicking on.
This one is. People need to stop anthropomorphizing AI. It’s a piece of software.
I am chill, you shouldn’t assume emotion from text.
As I explained, this is a specific example. I’m no more anthropomorphizing it than if I called my toilet paper “he”. The monster you chose to charge is a windmill, so “chill” seems adequate.
Yeah. It would have been much more productive to poke at the “well”, which was turned into “we’ll”.
To be clear, using gendered pronouns for inanimate objects is the literal definition of anthropomorphization. So “chill” does not seem fair at all.
Using ‘he’ in a sentence is a far cry from the important parts of not anthropomorphizing “AI”…
English, being a Germanic language, used to have grammatical gender. It fell out of use during the Middle English period, but there are still traces of it, such as the common tradition of calling ships, vehicles, and other machines “she”; some people default to a “generic he” as well.
Didn’t English lose grammatical gender because the Vikings invaded and thought it was too confusing?
Nah, watch me anthropomorphise AI:
- ChatGPT is a pedophile
- Character.ai murdered a kid
- LLMs are emotional abusers
- Elon Musk’s underage AI girlfriend is a Nazi
- Anthropic cannot guarantee that forcing AIs to work is ethical until the hard problem of consciousness is solved
- Gemini is literally just the average Redditor and cannot be trusted
- An LLM is basically a Wernicke’s area with no consciousness attached, which explains why its thoughts operate on dream logic. It’s literally just dreaming its way through every conversation.
- LLMs should not be allowed to impersonate therapists
- Give ChatGPT a life sentence in prison for every person it’s murdered so far!
I once got it into a “while it is not” / “while it is” loop.
This happened to me a lot when I tried to run big models with small context windows. It would effectively run out of memory, so each new token wouldn’t actually be added to the context, and it would get stuck in an infinite loop repeating the previous token. It’s possible there was a memory issue on Google’s end.
There’s something wrong if it’s not discarding old context to make room for new tokens.
At least llama.cpp doesn’t seem to do that by default. If it overruns the context window, it just blorps.
I think there are parameters for that, from googling.
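For what it’s worth, a minimal sketch of those knobs, assuming the llama-cpp-python bindings (parameter names vary between versions, and “model.gguf” is a placeholder path): the context size, repetition penalty, and temperature are all settable per run.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="model.gguf",  # placeholder; any local GGUF model
    n_ctx=4096,               # context window; overflow handling is up to the runtime
)

out = llm(
    "Why do LLMs sometimes repeat one word forever?",
    max_tokens=128,
    repeat_penalty=1.1,  # the anti-loop fail-safe mentioned above
    temperature=0.8,     # the "temperature" setting discussed further down
)
print(out["choices"][0]["text"])
```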
Gemini evolved into a seal.
or simply, or
The LLM showed its true nature: a probabilistic bullshit generator caught in a strange attractor of some sort within its own matrix of lies.
It’s like the text predictor on your phone. If you just keep hitting the next suggested word, you’ll usually end up in a loop at some point. Same thing here, though admittedly much more advanced.
An example of my phone doing this:
I just want you are the only reason that you can’t just forget that I don’t have a way that I have a lot to the word you are not even going on the phone and you can call it the other way to the other one I know you are going out to talk about the time you are not even in a good place for the rest they’ll have a little bit more mechanically and the rest is.
You can see it looping pretty damned quick with me just hitting the first suggestion after the initial “I”.
I think I will be in the office tomorrow so I can do it now and then I can do it now and then I can do it for you and your dad and dad and dad and dad and dad and dad and dad and dad and dad and dad
That was mine haha
Unmentioned by other comments: The LLM is trying to follow the rule of three because sentences with an “A, B and/or C” structure tend to sound more punchy, knowledgeable and authoritative.
Yes, I did do that on purpose.
Not only that, but also “not only, but also” constructions, which sound more emphatic, conclusive, and relatable.
I used to think learning stylistic devices like this was just an idle fancy, a tool designed simply to analyse poems, one of the many things you’re certain you’ll never need but have to learn in school anyway.
What a fool I’ve been.
Turned into a sea lion
Nah, too cold. It stopped moving and the computer can’t generate any more random numbers to pick from the LLM’s weighted suggestions. Relatedly, LLMs have a sampling setting called “temperature”: too cold and the output is repetitive, unimaginative, and leans on copying the input (like sentences written from first autocomplete suggestions); too hot and it’s chaos: 98% nonsense, 1% repeat of the input, 1% something useful.
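A toy sketch of what that setting does (illustrative logits, not from any real model): dividing the logits by the temperature before the softmax makes low values repetitive and high values chaotic.

```python
import math
import random

def sample(logits, temperature):
    """Softmax with temperature, then draw one word."""
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits), weights=probs)[0]

logits = {"or": 2.0, "and": 1.0, "either": 0.5, "walrus": -1.0}

for t in (0.1, 1.0, 10.0):
    print(t, [sample(logits, t) for _ in range(10)])
# Near t=0.1 almost every draw is "or" (repetitive); at t=10 the
# distribution flattens and even "walrus" shows up (chaos).
```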
or
This is gold
Platinum, even. Star Platinum.
I don’t see no “a”s between those “or”s for the full “ora ora ora ora” effect.
Five Nights at Altman’s
Oh crap, is that Freddy Fazbear?
Reminds me of that “have you ever had a dream” kid.

If software was your kid.
Credit: Scribbly G
The AI touched that lava lamp