

It will be back when it’s back.


It will be back when it’s back.


Ew. Why would you want to add something that encourages hate?


Lol


ALL generators on the site use the same AI plugin for text, which means all of them are using the exact same model.


The dev is NOT using any of the output from the text gen for training data. Nobody does that. Training a model on its own unfiltered output is one of the worst things you can do to it and will cause it to degrade rapidly.


As I understand it (please correct me if I’m wrong), aren’t gallery posts global? From what little I know, a named gallery (the default is “public”) can be any name, and that name becomes the identifier for that gallery. This means anyone can use that same name and access that gallery from their own generator. If they’re public like that, how would this removal power work? You would effectively have the right to remove gallery images that don’t actually belong to your generator.


Well, I did see an em-dash in there…
Anyway, I do understand the sentiment. It’s not very fun being left in the dark, especially when we’re not even given a chance to participate.


That’s a strange question from someone with a “legal background”, especially since this is readily available information.


What you are suggesting is something called an “editor phase”; some CoT (chain-of-thought, commonly known as “thinking”) models do this to an extent. It’s also something that can be done via JavaScript in the current AAC right now. To do this in AAC, you can either fork the current chat and make your own changes, or you can leverage the JavaScript feature on characters to have the AI respond twice: once to perform a standard completion, and then again, passing the entire chat back, this time including the last response along with instructions on what to “edit”. The AI makes a new response, and you replace the last response with the modified one. AI isn’t actually capable of self-analysis or of thinking about what it’s going to do as it’s responding, so to help mitigate this, you have to break things into steps. A rough sketch of the two-response approach is below.
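For anyone who wants to try that, here’s a minimal sketch of the two-pass idea. The `generateReply()` helper and the `{author, content}` message shape are hypothetical stand-ins, not the actual AAC custom-code API; wire them up to whatever completion call your character script actually has access to.

```javascript
// Hypothetical stand-in: replace this with the actual completion call
// your character's custom code exposes.
async function generateReply(messages) {
  throw new Error("wire this up to your AI completion call");
}

// Two-pass "editor phase": draft a reply, then ask the AI to revise it.
async function editedResponse(messages) {
  // Pass 1: standard completion over the existing chat.
  const draft = await generateReply(messages);

  // Pass 2: send the whole chat again, plus the draft and edit instructions.
  const revised = await generateReply([
    ...messages,
    { author: "ai", content: draft },
    {
      author: "system",
      content:
        "Revise your last response: fix contradictions with earlier messages, " +
        "trim repetition, and keep the same perspective. Reply with the " +
        "revised response only.",
    },
  ]);

  // Replace the draft with the revised version before it's shown to the user.
  return revised;
}
```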


AI is stateless, which means it’s seeing everything for the first time every time it responds. Writing instructions, characters, lore, reminders, your response, chat history, all of it is just sent as a big block of text with little header text for each section. The AI then has to parse through all of it, try to make sense of it, and then come up with a response to send back to you. What you’re asking for is just another block of text to send along with everything else. Really no different than a reminder.
This is a simple limitation of the current AI model and it will improve (probably by a fairly good amount) with the text upgrade, but it still won’t be perfect because AI doesn’t actually understand.
You’re providing the AI with information that sounds like China, which means the AI is going to look at the data it has, see stuff that sounds like China, recognize that that stuff is related to China, and so it’s going to talk about China.
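To make the “one big block of text” point concrete, here’s a rough sketch of how that kind of prompt gets assembled before every single response. The section names and layout are illustrative, not the actual format Perchance uses:

```javascript
// Illustrative only: real section names/order will differ. The point is that
// everything collapses into one flat string the model reads fresh each turn,
// so a "memory" feature would just be one more labeled section in this block.
function buildPrompt({ instructions, character, lore, reminder, history, userMessage }) {
  const recentHistory = history.slice(-20); // older turns get dropped or summarized
  return [
    "### Writing instructions\n" + instructions,
    "### Character\n" + character,
    "### Lore\n" + lore,
    "### Reminder\n" + reminder,
    "### Chat so far\n" + recentHistory.join("\n"),
    "### User\n" + userMessage,
    "### AI\n", // the model continues the text from here
  ].join("\n\n");
}
```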


No, current image gen does not support image input as reference. It’s a text-to-image only system.


That’s what I said. What point are you getting at?


When it’s back… I imagine they are working as quickly as they realistically can.


Assuming you are talking about ai-character-chat: Not built in. It creates them automatically as the context gets full. What problem are you trying to solve? There may be an alternative option.


I could see you being upset if perchance was some paid service with deadlines and shit. But it’s not. It’s free. What do you even have to be upset about? Besides your own entertainment, what do you even have invested in perchance?


You clearly don’t know, otherwise you wouldn’t be saying that Llama 3 has a bigger token count. The two are not related. Token count is directly related to how much VRAM you throw at the model (barring special exceptions like Gemini 2.5 and some special builds using RoPE scaling to extend context).
I also took a look at the dev’s post history. At no point do they mention what model they plan to use. The only references to Llama are old mentions of the “Local Llama” subreddit and a statement that the current model is a popular 70b variant. That’s it.


The token length (context window) is not directly linked to the model currently in use. There also needs to be enough VRAM, which costs more money to host. Unless the dev finds some way to reduce VRAM usage and/or finds a better deal for hosting with more VRAM, then the context isn’t going to be increased by just changing the model used. Also, where did you hear it’s going to be Llama 3 or 3.3? Neither of those are much of an upgrade.
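As a rough back-of-the-envelope illustration of why context costs VRAM (the numbers assume a Llama-style 70b with 80 layers, 8 grouped-query KV heads, head dimension 128, and an fp16 cache; they’re illustrative, not the dev’s actual setup):

```javascript
// KV-cache estimate: every token kept in context stores a key and a value
// vector in each layer, so bytes per token = 2 * layers * kvHeads * headDim * bytesEach.
const layers = 80, kvHeads = 8, headDim = 128, bytesEach = 2; // fp16
const bytesPerToken = 2 * layers * kvHeads * headDim * bytesEach; // ~320 KB

for (const contextTokens of [4096, 8192, 32768]) {
  const gb = (bytesPerToken * contextTokens) / 1024 ** 3;
  console.log(`${contextTokens} tokens of context -> ~${gb.toFixed(1)} GB of KV cache`);
}
// ~1.3 GB at 4k, ~2.5 GB at 8k, ~10 GB at 32k -- on top of ~140 GB of fp16
// weights, and scaled by however many chats are being served at once. Bigger
// context means bigger (pricier) GPUs no matter which model is loaded.
```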


Just slap a reminder in place that the Python coding is done and she should move on. Then, for the next few responses, edit any Python code out of her replies. LLMs are pattern programs, so changing the pattern will fix the issue.


When the dev finishes… I mean, what are you expecting? It’s a free service developed primarily by one person. I’m just happy they still care enough to attempt keeping things up-to-date.
Golly gee, dev. I know you’re providing a completely free service and ads barely cover the services you already provide, but go ahead and add these extra features that will raise costs further.
It’s free! What the hell? 🙄