Hi, if I write a story using the ai-character-chat, after a while the "person" starts to "say" the same sentence again and again, or the chat will describe the scene again and again. Example: my character says the sentence "Whatever it takes." in every paragraph, or the scene is described as a "sanctuary" in every answer from the AI.
Is there a way to get rid of it?
Unfortunately this happens to everybody after a certain point. The current AI chat model is outdated and has very few context tokens, meaning it cannot handle a long session of back-and-forth writing/creating. It will recycle sentences, forget important details, and generally devolve into a maniac if used long enough. This is why the developer is intent on upgrading the text model. We're all hoping he upgrades to Llama 3.3, which has a vastly larger context window and would make this issue mostly irrelevant. But we'll just have to wait and see.
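To picture why this happens: the model only "sees" a fixed number of recent tokens, so once a session outgrows that window, the oldest messages silently fall away. Here's a minimal sketch of the idea in Python; build_prompt, the word-count "tokenizer", and the 2048 limit are all invented for illustration, not the site's actual code:

```python
# Toy illustration of a fixed context window, newest messages win.
def build_prompt(history, max_tokens=2048):
    kept, used = [], 0
    for message in reversed(history):   # walk from newest to oldest
        cost = len(message.split())     # crude "token" count: words
        if used + cost > max_tokens:
            break                       # everything older is dropped
        kept.append(message)
        used += cost
    return "\n".join(reversed(kept))    # back to chronological order
```

Everything that falls off the front (your character's backstory, early plot beats) is simply gone, while phrases that dominate the surviving window, like "Whatever it takes.", get echoed back more and more often.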
Did they really say they were going for Llama and not Claude or GPT-5?
Really? Claude and GPT-5 are both censored and not open-source. Duh, why would they even consider those, huh?
Llama = Facebook money.
What @Sniebl said.
Things to try:
- First, I assume you have memories enabled in the main NPC/AI's settings.
- Type /mem in the long chat that's causing you problems. Copy and paste the output into a text editor, delete or replace every reference to "Whatever it takes" etc., then paste the edited mem file back in, essentially mind-wiping the poor AI with your changes.
- Do the same with the /sum feature. You can now edit each summary in-line, which is nice. Scrubbing the repeated terms from the past few dozen summaries, if your chat has gotten that big, can actually cut down the repetition.
- Export the entire chat session as a text file. Make sure you can re-import the file before you edit it. Open it in a text editor and do find-and-replace passes, or just delete all the repeating dialogue you can find, then re-import the chat. (If you'd rather automate the scrubbing, see the script sketch after this list.)
- Create a lore.txt file with strict AI instructions. This probably won't work; the AI is currently too stupid to tell the difference between its own thoughts and what the user wants it to do or not do. But it's worth a try.
Ex.: ([AI]: Reminder: Never say the words "I'm so proud" ever again.) Likely result: Dr. Dumbshit felt something about you; it wasn't quite pride, but it felt very close to it. "I'm so filled with pride," he said, looking proudly at you across the table.
- Create a banned-word list in the AI reminder pretext box. Again, this is likely to get mixed results. The AI's training sucks and the model is old. The dev says upgrading will be hard, but I wait patiently and with open arms.
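If hand-editing a big export file sounds tedious, a few lines of Python can do the same scrub for you. This is a generic sketch, not a feature of the site; chat_export.txt and the phrase list are placeholders you'd swap for your own:

```python
# Hypothetical cleanup script for an exported chat .txt file.
# "chat_export.txt" and the phrases below are placeholders, adjust to taste.

REPEATED_PHRASES = {
    "Whatever it takes.": "",   # delete the character's pet phrase outright
    "sanctuary": "refuge",      # or swap an overused word for a synonym
}

with open("chat_export.txt", encoding="utf-8") as f:
    text = f.read()

for phrase, replacement in REPEATED_PHRASES.items():
    count = text.count(phrase)
    text = text.replace(phrase, replacement)
    print(f"{phrase!r}: replaced {count} occurrence(s)")

with open("chat_export_cleaned.txt", "w", encoding="utf-8") as f:
    f.write(text)
```

Re-import the cleaned file as usual. The printed counts double as a quick audit: if a phrase shows up dozens of times, it's worth scrubbing it from /mem and /sum too.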