• 1 Post
  • 31 Comments
Joined 2 months ago
Cake day: November 6th, 2025

  • Correct, that’s what I implied; otherwise, past 1 MB you’ll experience “groundhog day”, unable to escape the loop no matter what you do.

    Now… let me tell you buddy, you just scratched the tip of the iceberg with the model’s new obsessions. Just to showcase a few:

    • Knuckles turning white (a classic you quoted).
    • The ambiance smelling of ozone and petrichor (it always rains, btw).
    • It always smells or tastes of regret and bad decisions.
    • The bot or an NPC will always lean in to whisper something conspiratorially.
    • Eyes gleam with mischief very often.
    • Predatory amusement seems to be a normal mood no matter the context.
    • Some dialogue constructions are “cursed”: if you let one slide, it will repeat ad nauseam:
      • “Tell me, <text>”
      • “Though I suppose <text>”
    • Don’t even let me get started on the “resonance” or “crystallization” rabbit hole…

    You are on the money with one thing: all this is a product of the training data, and not even the data that comes pre-packed with DeepSeek (I still hold that this is the current model being used; if I’m wrong, I’ll gladly accept the failure of my prediction). This is a product of the dataset being used to re-train the model to work toward the dev’s ends. For example, the “knuckles turning white” phrase appeared with the old Llama model too, but it was a one-in-a-hundred occurrence, as that model didn’t care for that construction and rather focused on a different set of obsessions.

    This is a never-ending problem with all LLMs though: as in all languages, some constructions are more common than others, and since in both AI Chat and ACC the model is constrained by the “make a story/roleplay” context, it produces those pseudo-catchphrases incredibly often. In the past we had to deal with “Let’s not get ahead of ourselves” or “We should tread carefully” appearing no matter the situation; now “knuckles turning white” and similar are the new catchphrases in town.

    In an older post I warned about this: since DeepSeek, trying to be “smarter”, takes everything at face value, the “correct” answer for many situations tends to be one of the constructions cited, and performing extreme training will yield us a model as dumb and stubborn as Llama was, but with a new set of obsessions plus the inability to move forward, which Llama could do despite being exasperating at times. There is progress with the new model, I won’t deny it, but the threshold from where we enter “groundhog day” has been reduced from 1 MB+ to barely 250–500 kB, and I suspect it will keep shrinking if the training is done on top of the existing one, rendering the model pointless for AI Chat, AI RPG or ACC.

    Then again, I could be wrong, and a future update will allow the context window to hold up further, as with Llama, where 15 MB+ was possible and manageable without much maintenance. Some degree of obsession in any LLM is impossible to avoid; what is important is that the model doesn’t turn it into a word salad that goes nowhere. That, I think, is one of the biggest challenges the development of ai-text-plugin faces.


  • There is a better explanation for the behavior you are experiencing, and yes, it is one of, if not the, biggest hurdles the new model has yet to overcome: you have hit a log long enough that the model is starting to make a word salad of its past inputs as it “inbreeds”.

    What I mean by this is something explained before: for generators such as AI Chat and ACC, the input will be roughly 70% AI-made and only 30% handwritten (95%/5% in AI RPG, which crashes faster), because the whole log is an input for the next output. Of course, the shorter the log is, the less you’ll feel the effect of the model being insufferable, because you still have the long instruction block “holding back” the manic behavior.

    I agree, this is something that has to be worked on from the development side, otherwise generators such as AI Chat or Story Generator are rendered short-lived, as the point of them is to grow progressively, and as of today, instability can happen as soon as 150–200 kB, significantly lower than what this model was able to hold in the past. However, a temporary fix on our side of things is to just make a “partition” of your log/story. Meaning:

    • Plan and start your run as usual.
    • Save constantly, monitoring the size of the log.
    • When you hit the 100 kB mark, try to get to a point where you can “start over”, i.e., a point from which you can keep moving without requiring the prior context.
    • Make a copy, delete all prior to that desired state, load the save and continue pretending that nothing happened.
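    Those steps can even be scripted; below is a rough sketch of the pruning idea, purely as an illustration of the workflow above (the file handling, the byte threshold, and the function name are all my assumptions — Perchance itself has no such tool):

```python
def partition_log(text: str, keep_bytes: int = 100_000) -> str:
    """Keep only the most recent part of a log, cut at a paragraph boundary.

    The 100 kB default mirrors the threshold suggested above; it is a
    hypothetical knob, not anything the site enforces.
    """
    raw = text.encode("utf-8")
    if len(raw) <= keep_bytes:
        return text
    tail = raw[-keep_bytes:].decode("utf-8", errors="ignore")
    # Drop the (likely truncated) first paragraph so the run restarts cleanly.
    _, _, rest = tail.partition("\n\n")
    return rest or tail
```

    The result is the “copy” you would continue from, pretending nothing happened.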

    That will keep the model “fresh” at the cost of losing “memory”, which can be worked around by updating the bios or instructions, which will have a better chance of working now under a clean slate.

    It is not the best way to work around this, but it is better than wrestling with all the nonsense that the model will produce past the 250 kB threshold.

    Hope that helps and… also hope that a future update makes the model more stable rather than more unstable. At least one thing that was fixed, and for which the dev deserves more credit, is that the English has improved significantly compared with the first release, in terms of grammar, content and consistency. I know, past 250 kB it is all “allegories” or “crazy man ramblings”, but… it is good English! 😅


  • So… Garth01 called me here, so first of all, thanks for the vote of confidence, buddy! I don’t know if I’m as experienced as you all think but I try my best! 😅

    Anyways, about names and why some, like the ones you mentioned, repeat a lot. If you are talking about a generator that does not use the ai-text-plugin, like this one, you’ll see on the edit side of things that the names are fixed, passed literally as an array of names as you mentioned:
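    As an illustration of that fixed-array approach (the names below are placeholders I made up, not the actual list from any real generator), such a generator boils down to something like:

```python
import random

# Sketch of a generator that does NOT use ai-text-plugin: the names live in a
# hard-coded list, so with a small pool, repeats are inevitable.
FIRST_NAMES = ["Elara", "Charles", "Maria", "Boris"]

def pick_name(rng: random.Random) -> str:
    # Every call draws uniformly from the same small fixed pool.
    return rng.choice(FIRST_NAMES)
```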

    However, in the case of generators using the ai-text-plugin, like ACC when coming up with new characters, or others that write you a long character sheet from a simple input to then make an image or whatnot, it’s because of the training data.

    To put it simply, all models require data to work as intended, and depending on such data, they can develop bias. For example, in a random test using the Prompt Tester, you can see this:

    You may recognize some of these names depending on what model you use, since, as you can see in the prompt, the only “context” given to produce the names is “is for a story”. Changing the context changes the result: for example, if the context is South America, the model favors “Carlos” or “Maria”, while if the context is Russia, you’ll see it producing “Boris” and “Petrova” often. Note that this is independent of the actual most common names of the region, as the bias depends on the training data, which none of us knows the contents of.
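    To make the bias idea concrete, here is a toy simulation of context-dependent name sampling. Every weight below is invented purely for illustration; the real ones are unknown to anyone outside the training data:

```python
import random

# Toy model of training-data bias: the model does not pick names uniformly,
# it samples according to weights "learned" from its data. All weights here
# are made up for the sketch.
CONTEXT_NAME_WEIGHTS = {
    "south america": {"Carlos": 0.5, "Maria": 0.45, "Petronilda": 0.001},
    "russia": {"Boris": 0.5, "Petrova": 0.45, "Petronilda": 0.001},
}

def sample_name(context: str, rng: random.Random) -> str:
    weights = CONTEXT_NAME_WEIGHTS[context]
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]
```

    The model “knows” the rare name, it just almost never draws it, which matches the “Petronilda” effect described below.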

    It’s the same effect as how the model decides to handle certain situations: for example, if you let it choose the weather, it will pick rain because it has a bias towards it. If you let it pick a random encounter against a wild animal, a boar will be more likely. It is not that the model does not recognize a name, it is just that it has no priority compared to others. Another example: even with proper context, you will be extremely unlikely to (or may never) get the model to randomly give you the name “Petronilda”, but it recognizes it, as if you ask it about the name, it will give you excruciating detail about its etymology, origin and all.

    In contrast to the older model, the new one has more options and is more “creative”, as Garth01 mentions. Something many will remember from the old model is that Elara Castellanos and Charles McAllister were omnipresent in all stories, to the point that if you dig into the code of some generators such as AI Chat, you’ll see how those were hard-banned in the code itself. Then again, “more creative” still carries a lot of bias.

    Personally, naming is one of the things I don’t let the model pick, because while the new model has more range, it is still limited by many standards, and trying to make it “more creative” is a headache that is simply not worth it. Something I did in the past, when the old model tried to place a name that was already repeated, was to just change it to something obtained from the Fantasy Name Generator (not by Perchance, this is a third-party free service), which contains a large database for pretty much every context you may need.

    Hope that helps!


  • Partially. In the case of Story Generator, since the instruction passed to the LLM is outright “make four paragraphs, less than 400 words”, as seen in the code, the output will be abruptly cut. A similar phenomenon happens in AI Chat, for example, where the order is “write ten paragraphs” but the code makes it so the displayed output is only the first one and the other nine are discarded. A “fun” consequence of this, which happened repeatedly in the past with the Llama model and still happens sometimes, was an output being literally just:

    Bot: Bot:
    

    As sometimes the LLM would put the actual reply after a line skip, and the code would keep only the stray first paragraph due to how the pipeline works. Again, this is a very rare occurrence, so it is not worth worrying about it.
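    My reading of that pipeline behavior, sketched below (this is an assumption about the code, not the actual AI Chat source): only the first paragraph of the raw output survives, so a stray leading line followed by a blank line eats the whole reply.

```python
def keep_first_paragraph(output: str) -> str:
    # Keep only the first paragraph of the model's raw output; everything
    # after the first blank line (the other nine paragraphs, or the real
    # reply) is discarded.
    return output.strip().split("\n\n")[0]
```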

    Now… there is a bit more to this, but this is just speculation on my side, so take it with a grain of salt, since I’m no expert in neural networks, nor in the particularities of some models.

    DeepSeek (I still firmly believe that the new model is DeepSeek, even if some argue it may not be) takes some instructions more literally than others. Llama, for example, had absolutely no regard for length nor consistency in writing style, so you could have one output that was just a line or two, then the next one a gargantuan thesis that would advance your story too far for comfort, and then it would go back to short replies. DeepSeek, in contrast, looks at the past inputs and tries to gauge how to control lengths. Ironically, something DeepSeek does in long runs is try to “extend” the output slowly, hence why, if you audit summaries in ACC, AI Chat or AI RPG, you’ll see very short ones first, while later they begin exploding into longer ones until reaching instability and derailing into madness.

    Also, believe it or not, the model takes all your input. It is not that it doesn’t reach it, it’s just that it decides to ignore it in favor of the context of where your story is, because the primary instruction in Story Generator, as well as in AI Chat and similar, is “continue the story”.

    To me, here lies the biggest difference between the new model and the old one. Llama had almost “written in stone” what a story was meant to be and how to continue it from where you are standing (again, this is speculation on my side, from having a back catalog of massive logs done in AI Chat and seeing how things progressed there in contrast to how they do now). The way Llama “thought” was the following:

    • A story must follow the medicine/hero story formula.
    • Check the last state and what was prior.
    • If there are no stakes, nor clear goal, invent one via a “random happening”.
    • If there is a goal but no clear solution, present the “medicine” (random quest, magical MacGuffin, person to go kill).
    • If the solution is being worked on, present a method (often “trials to obtain the MacGuffin”).
    • If all is solved, then there are no stakes, so rinse and repeat.
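    Purely to make the speculation concrete, the formula above can be written as a toy decision function. This is my reconstruction of the bullet list, not anything from the model itself:

```python
def llama_next_beat(story: dict) -> str:
    # Toy reconstruction of the speculated "medicine/hero story formula".
    if not story.get("stakes") and not story.get("goal"):
        return "random happening"        # invent stakes / a goal
    if story.get("goal") and not story.get("solution"):
        return "present the medicine"    # quest, MacGuffin, target to kill
    if story.get("solution") and not story.get("method"):
        return "present a method"        # trials to obtain the MacGuffin
    return "rinse and repeat"            # all solved, so reset the stakes
```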

    While on paper this should work flawlessly, as you can put most stories under that formula, it was something that infuriated many users, as doing something more “complex”, such as adding unforeseen consequences to a method, betrayals, or stories that would not follow that formula, was tricky. It was doable, but it required tricking the LLM into a state and making it do your bidding. And since it required more maintenance and attention to context than just going “auto”, it was something heavily complained about in the past.

    The new model, however, has absolutely no concept of a “formula” for stories, allowing for absolute free-form, so DeepSeek’s process for dealing with this task looks as follows:

    • Check the state where the story stands.
    • Parse the story prior until there is a precedent on how to continue it.
    • If there is none, extrapolate from the data bank.

    This is why two things happen: if you are in a state that is vaguely similar to something before, you’ll experience endless déjà vu, and if you are faced with the “unknown”, there is a random chance of the LLM pulling a “dark scenario”. Sadly, according to other users, the story itself seems to take precedence over explicit instructions of “no, do this instead”, hence why running in circles forever is the bigger threat, and it can happen as early as a 20 kB log as of today (my current record: the fourth input in ACC Chloe).

    We can hope that all this is improved in the future, but that’s more or less why things happen, in my opinion. At least with the new scheme, and seeing how some succeed where I and others fail, I can only deduce that the best way to make the new model “work” is via interpolation. Meaning: give it a “target” in the description, such as “the purpose of the story is for X to get Y, or for Z to happen”, so that when parsing through the data bank, the LLM will select a case similar to where you are standing and work on it without derailing. Granted, this removes the “surprise” element completely, but it’s a decent workaround. Then again, always check the story as is, since “running in circles forever” is the bigger threat, I believe.

    Anyways, sorry for the long posts, and good luck in your runs!


  • Alright, Story Generator is indeed a very tricky one, because even if the model worked as intended, it offers little control.

    For the record, don’t put too much trust in an LLM’s reply about “why things are how they are”: for starters, an LLM doesn’t think logically, it just produces a reply based on the combination of words it faces. More importantly, the generator itself controls how things are shown and passed, while the LLM just takes one big input and gives one big output; it is not as dynamic as you think it is.

    Now, back to Story Generator. Something I can advise you to try for a better experience is to edit, in the code on the Perchance side of things, Line 21, which restricts the output to “only four paragraphs” (raise it to ten or twenty), and also Line 45, which restricts the output to “only about 400 words”.

    The reason for this is that if the output is short and the input is gargantuan, the LLM will have a hard time contextualizing what is going on and making something “coherent” within the restraints. This is only true now, while the model is still unstable, and in the future it should not be a problem, but for now it may be wise to experiment with longer outputs so the “derailing” is not abrupt.
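    A sketch of the kind of edit meant above. The variable names and the exact instruction string here are hypothetical stand-ins; the real restrictions live on those lines of the Story Generator’s own code:

```python
# Hypothetical stand-ins for the two restrictions mentioned above.
paragraph_limit = 4    # the "only four paragraphs" restriction (Line 21)
word_limit = 400       # the "only about 400 words" restriction (Line 45)

# Loosened values, so the output has more room and derailing is less abrupt.
paragraph_limit = 20
word_limit = 2000

instruction = (
    f"Continue the story. Write up to {paragraph_limit} paragraphs, "
    f"about {word_limit} words total."
)
```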

    And another thing that will actually remain true as long as the new model persists: your story as presented IS an input, so before you set instructions, you have to manually edit what you don’t like, or outright prune a whole section you find out of place. This is because your instructions and the story itself are passed together, so again, if the story is a sad, dark one and you insist with the instruction “no, make it happy!”, it won’t happen, because the model will look at the story and decide that the only “logical” step is to double down. So yeah, manual work it is. In hindsight, that gives you leeway to treat the story itself as an input: if you manually add a turning point, the LLM will latch onto it and work around it, instead of following a path and behavior you don’t want in your characters.

    Then again, I still think that Story Generator is a really tricky one to work around. I’d put it along with AI Text Adventure, which even with the old model would derail into madness as soon as the second input, due to how quickly the context would make the LLM fall into its obsessions. Still, with a bit of patience, all can be done; it’s just that it becomes demanding and tiresome, hence why most of us don’t bother anymore with trying fun long runs.

    I can’t promise to mod a generator for you now (I owe someone a generator, and time is not on my side), but I hope that with those directions you can make the Story Generator give you what you need! Best of luck!


  • If you use duck.ai, why not Blackbox then, or the free version of DeepSeek? There are also many free LLM resources on Helicard. Now, I should warn you, the privacy issue is always going to be a lingering demon. As sad as it may sound, even this site (Lemmy) is heavily compromised in that area, so if privacy is indeed a concern, the best alternative is to go offline.

    Then again, I know that hosting your own LLM can be bankruptingly expensive; personally, it is something I will never be able to do due to economic constraints, so I get it. So… sadly we pay with data, or with cash, one way or another.

    Maybe a better idea would be to acquire an API key from a big service such as Gemini, or another you may find on HuggingFace, with a group of friends you trust, to split the expenses. Again, I’m just thinking out loud here, since I’m unsure what fits your needs.

    I would recommend AI Dungeon if the classic version were still available, but it is not; perhaps you already know of that one, and I really don’t like how restrictive it is either.


  • I don’t know why the heavy backlash on this post. Everyone can ask for an alternative, and it’s not like we are going to pretend that Perchance can make everyone happy.

    For alternatives as such… I recall someone in a post mentioning character.ai and Sekai. Personally, I’m not fond of either, as they are very limiting in what you can do, and I guess the privacy factor is sketchy on those.

    However, while this is going to sound counterintuitive, there is something that Perchance offers us all that no other service offers, which ironically is the answer to what you are looking for:

    • Perchance has a whole open-source platform for its generators, meaning that it is possible to audit exactly what each generator does and how it passes the information to the model, making anyone able to replicate the exact prompts and pipeline with any LLM you wish to use: locally, with an API key, or using a third-party UI.

    Meaning that you can turn something like the default “online test for DeepSeek”, “ChatGPT free trial” or “Blackbox AI” into what any of your favorite Perchance generators did. All you need to do is pass the prompt and input manually and you are good!
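    For instance, “getting the prompt and input manually” could look like the sketch below. The template and the field labels are illustrative assumptions, not the actual payload of any specific generator; audit the one you care about to get the real thing:

```python
def build_prompt(instructions: str, character_sheet: str, log: str) -> str:
    # Rebuild the single big input an audited generator would send, so it
    # can be pasted into any LLM UI or API of your choice.
    return (
        f"{instructions}\n\n"
        f"[Character]\n{character_sheet}\n\n"
        f"[Story so far]\n{log}\n\n"
        "Continue the story."
    )
```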

    Granted, it is tedious, and for going that route with no coding knowledge, it may be better to try something like SillyTavern, which is just the frontend with no LLM behind.

    Then again, while I am also not happy with the update, I’d encourage you and others to be patient. After all, we are given a free LLM to use with almost unlimited tokens, and I believe that the biggest challenge the dev faces there is not to make the model “literary/story/RP appealing”, but rather “all-encompassing while catering to most needs”, because the same model that powers ACC, AI Chat, AI RPG and others is the same model that in other generators has to work as a standard AI model that can provide code, information, summaries from documents, etc. So making it work for the generators we use, while not destroying its other functionality, is indeed a heavy challenge.


  • There was an update very recently that (at least on my side) made the model worse than the prior one (which, ironically, made the model work at its best at the time, about four days ago). As the dev said in the pinned post, the model is still being worked on, and we are in for a very bumpy ride until things stabilize, but at least there is work being done.

    Now, regarding the personality changes, there is something you may want to keep in mind, because it may remain true even after the model is perfected: the context of the input takes precedence over descriptions and the recommendation instructions, so it is very difficult to have a character remain happy and joyful if the context forces the model to opt for a more “logical” approach, changing its character (“logical” as the LLM training dictates, which often is “moon logic”, but with trial and error it is possible to deduce the word combinations that cause a switch in the wild).

    Here is a lengthy guide on the topic. It covers most of the pitfalls you may find. The only thing I believe is no longer a problem (although I may be wrong), is that the “caveman speak” problem seems to be patched already, but again, it is still in the guide in case you run into it and how to restore it. Hope that helps!


  • Well… that whole thing is an entire rabbit hole. You see (and I’m trying to be as compact as possible, but there are a million videos and documents on the matter), an LLM and similar models try to take the inputs, and the order of the inputs, to “correlate” them with something in a data bank. This whole process is called “tokenization”: basically, it turns “The orange cat is sleeping” into “A + B + C + D + E”, where each variable is a “token”, often a single word, as in the backend the model breaks tokens by whitespace. With some training, though, “The cat” can be a single token, leading to a whole other universe of possible replies branching from “cat” versus “The cat”.

    This is why (naively) some people recommend “adding as much detail as possible”, in the sense of something like “An old lady in Paris, discussing an intellectually difficult topic such as philosophy with a young blonde man”, instead of “old lady with blonde young man, discussing, focused, Paris”. Both yield different results, but one is driven a lot by the context of articles, prepositions, and whatnot, making it a nightmare to debug. Again, be very descriptive, but separating things allows for easier “debugging”, if you will.

    Also, I should mention that repeating a word does have an effect, as you’ll see that the results from “old lady, scarf, drinking wine” are not the same as from “old lady, scarf, scarf, scarf, drinking wine”. That’s why I emphasize that the “grocery list” approach is better, as you can treat generating an image like “building a Lego” and see which piece does what.
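    A toy version of that splitting idea, purely illustrative (real tokenizers such as BPE are far more sophisticated; the “merges” set here just stands in for trained multi-word tokens):

```python
def tokenize(text: str, merges=frozenset()) -> list:
    # Naive whitespace tokenizer, with an optional set of two-word "merges"
    # standing in for trained multi-word tokens like "The cat".
    words = text.split()
    tokens, i = [], 0
    while i < len(words):
        pair = " ".join(words[i:i + 2])
        if pair in merges:       # e.g. "The cat" learned as a single token
            tokens.append(pair)
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens
```

    The same sentence splits differently depending on the merges, which is the “whole other universe of possible replies” effect.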

    Now, regarding the seed… that’s another whole problem. There is a better explanation in a video by Wolfram, but I don’t remember which one it was. Pretty much, the seed locks you into a “potential state”, not a single output, if that makes sense. So, if you reroll a seeded image, you’ll get potentially five diametrically different outputs with some accessory chances, plus some eldritch abomination of the model mixing them, but no more. So with a seed you can find the exact granny you found once, but you may still require the luck of the draw. The reason for this is actually a bit complex, and I’ll admit I don’t get it fully, but I recall it also being an issue in other machine-learning models such as Random Forest and similar, where seeds would not always yield a 1:1 result.

    Then again, nothing beats downloading the image! A fun feature that Perchance has is that all images are encoded in base64, so you can right-click a generated image, do “Copy Link”, take the gargantuan link, put it in a .txt, and then pass that gargantuan string of text to a converter and have it on your drive, or even use it directly in an app or HTML!
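    The conversion step can even be scripted. A minimal sketch of decoding such a base64 data URL back into a file (the tiny payload in the example is a stand-in for the real gargantuan string):

```python
import base64

def save_data_url(data_url: str, path: str) -> None:
    # A base64 image "link" is a data URL: "data:image/png;base64,<payload>".
    # Split off the header and decode the payload back into real bytes.
    header, _, payload = data_url.partition(",")
    if not (header.startswith("data:") and "base64" in header):
        raise ValueError("not a base64 data URL")
    with open(path, "wb") as f:
        f.write(base64.b64decode(payload))
```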


  • Real, by the way. 🤣

    To be fair, this is something the old model did at times, and it is something one can force any LLM to do under the “make a story” context, due to its necessity to have an answer to any unknown. Unless told explicitly, most if not all LLMs will refuse to give you an “I don’t know” kind of answer, so when faced with a “weird” situation, they’ll hallucinate to fill the gaps… or it could be a bad draw too.

    Off the top of my head, with the new model I had two notable cases like that:

    • The bot MacGyvering a trebuchet using only pancake mix.
    • The bot launching himself from a 30-story building into the ground and surviving unharmed, as if working on cartoon logic (without the context being cartoon logic).

    But I’ll admit, I have no idea what situation could make the model hallucinate someone having 40 fingers! That’s a new record in my book! 🤣


  • The main issue is that you are dealing with an LLM at the end of the day, so what works, for example, in Craiyon would not work here 1:1. Keep in mind that what happens under the hood is that the model takes your input and tries to relate it to what is tagged with those terms in its training data. Probably, the prevalence of “Instagram plastic dolls” and similar is due to the input having some detailed anatomical descriptors.

    That being said, the best way to debug this is just by checking what works for others in other generators. For example, here is a quick run in AI Photo Generator with an apparently very minimal prompt:

    Probably this is far from the quality that you want, but it gives you a hint on “how” those are being made if you click on the top left corner of any. There you may see something like this:

    Just to copy the prompt:

    Old lady drinking coffee in a Parisian bistro, cinematic shot, dynamic lighting, 75mm, Technicolor, Panavision, cinemascope, sharp focus, fine details, 8k, HDR, realism, realistic, key visual, film still, cinematic color grading, depth of field.
    
    Overall, it's an absolute world-class cinematic masterpiece. It's an aesthetically pleasing cinematic shot with impeccable attention to detail and impressive composition.
    

    You see that there is a lot more there than what is actually in the original prompt? Probably, if you use one of those generators, the inclusion of those photographic terms such as “cinemascope” or “HDR” may yield results that can be beneficial or harmful. Ideally, you want to just take a look at the full prompts, and then test on a bare-bones image generator so you have more control over the output.

    Now, text-to-image is different from text-to-text or text-to-code: you want to be as terse as possible, almost as if you were making a shopping list. For example, the following prompt:

    - Realism
    - Realistic
    - Photographic shot
    - Middle class
    - 56 year old French woman
    - Slim with broad hips
    - Graying hair
    - Prominent crow’s feet around her eyes
    - Dressed with casual
    - Silk scarf
    - Leather jacket
    - Baggy cords
    - Standing at the bar of a cafe in Paris
    

    Yields the following for seed: 354188953 and guidanceScale: 1

    And I get it, it may not be up to your expectations, but you see how it makes it infinitely easier to debug which term leads the model where you want it to go.

    The best advice I can give you is to look at the many different generators out there and check what prompt is linked to a “style”, because surely someone has already figured out exactly what you are looking for and pasted it into some “photograph realistic style”, or at least it can serve as a reference point.


  • I guess that the drop is the luck of the draw, my friend! Wrangling an LLM is very tricky, so as the dev said, we are in for a bumpy ride for the next couple of months! 🤣

    But you are on point with the diagnostic. I use AI Chat more, so I can’t speak much to the particularities of ACC; at least in AI Chat, the decay seems to hit around the 20th–30th input, and then everything is spaced in three paragraphs as you said. It could be because the raw input in ACC is significantly longer than the one in AI Chat, but then you compare it to AI RPG, where the raw input is even shorter, and the decay happens as early as the fifth input and sticks forever. It’s hard to tell, and most of the time it’s actually down to what is being “played” at the moment, since, as with the old LLM, some topics and writing styles are easier than others.

    Just from personal experience, the current model “peaked” two times: right after release, when the “ultra violencia” mode was patched two months ago, and then yesterday, but it could have been the luck of the draw too. So it could be that the waters are still being tested to figure out how to lead the model in a proper way without falling into its pitfalls. But hey! At least we know the project is not being abandoned, and that some stuff that we (or at least I) thought was impossible may actually be possible!

    Also, something most people don’t realize is how hard this is to debug, because while I keep referencing log sizes and all, due to time, and since I treat this just as a game and not as any sort of “professional” usage, the most I can produce in a day is just 30 kB, or 70 kB if I’m lucky and locked into playing a run. So imagine how rough it would be for the dev to try going past 1 MB in different scenarios while maintaining the site and trying to wrangle the LLM. Personally, I wouldn’t even try! 🤣

    I know that many of the people complaining about the new model latch onto it being unable to run “comfort scenarios”, which… in some runs I had absolutely no problem with! (Except, of course, the issue of repetition and running in circles, which is still universal.) So what I think would be an excellent exercise, as well as a proper debug tool to know when and how things break with the current LLM, is to try different runs on different topics and check which conditions in particular make things break, and when (by when I mean after which input, or at what log size). I have the feeling that, as of now, the LLM breaks faster in certain contexts and decides to stay focused and creative with one particular style, which could point to bias in the training (btw, it’s not the violent ones; I tried, and those break like paper very quickly).

    But overall, posts and threads like this do aid a lot. Input, positive or negative, is always good so long as it is supported, and not just “all is perfect, lol” or “all is crap, lmao”. Otherwise, how would we know what is working or not? 😅


  • I’m resurrecting this fossil of a thread just in case you are still curious about what happened with the “double personality trick”, since… while my original tests failed, I think I have an answer now. I don’t know how well it will hold, as I only know it worked when this thread was made and now after the last update. Still! It’s interesting, to me at least. 😆

    So, I ran this in AI Chat, not ACC, so emulating the [SYSTEM] two-paragraph setup is not that straightforward, but the nearest equivalent is the box that says “short responses”, because it controls (in theory) only the length, but in practice it toggles a “style” of writing, and… oh boy, is this a rabbit hole!

    Turns out that, maybe in some update or in the training data, two things were handled separately: what is RP-like text (e.g. *Laughs* Ay, *Laughing harder* Lmao) and what is book/story-like text (e.g. Bot laughed “Ay”, then even harder “Lmao”), and that’s what is toggling a pseudo double personality. This can be better seen in how runs go in AI RPG compared to ACC when you give them no prompt.

    Again, it is a whole rabbit hole, and I’m unsure if it’s worth writing a long post about it, just passing it to you via PM, or hijacking Basti0n’s post on feedback about the new update to detail this “feature”. It was fun trying to make sense of it, for me at least! 🤣


  • I thought I was imagining things, but since others seem to be doing better, I guess the update really improved the model then! That’s awesome!

    From my side, at least two things have improved: the English no longer decays into caveman speak, and the head-start is infinitely easier with minimal directions to the model. Also, some contradicting descriptions tend to work better. This all is actually a great improvement, but I’d be lying if I’d say that on my side I tested them thoroughly.

    Something I tried as a quick test was checking how the model reacts to long logs and… yep, it still gets stuck running in circles due to weave patterns that repeat ad nauseam. It may be me having bad samples, but problems still linger past 200 kB, get heavy past the 500 kB mark, and become unbearable at the 1 MB mark. By this I just mean having to unstick the LLM by editing heavily, not that it is impossible to continue. If someone has a long log that stays fluid, please share what conditions allow for it.

    But yeah, Basti0n is right! There was indeed a notable improvement, even if we are not there yet. Maybe there is a future for DeepSeek after all!



  • Oh no, I’m not the maintainer of AI Chat! That would be the dev of Perchance himself, I believe, as credited in the ai-text-plugin description. I’m just a random user like anyone else! 😅

    But the good thing about the whole Perchance site is that it is possible to fork generators and projects, allowing anyone to mod them to their needs! That’s how I made that other link. Again, the most I can promise is a “copy”; what happens with the canon version is not up to me.

    I’ll still try making a button to toggle the style colors there sometime, I guess. But I’m glad the link I had was enough to solve the problem.


  • I think I know what you want to do and why, and while there is a way to achieve it by tinkering with the code of existing generators… that could be a bit tricky, and I can’t promise to make one for this right now, so sorry in advance.

    But I can give you the steps to achieve this manually. For these purposes I use this version of Image Generator Professional, but the method should work with any generator you may find on the site.

    Let’s say you filled in the prompt and the options there to generate an image, as shown here:

    Ignore the fact that the generated image looks nothing like what the prompt describes; you know how LLMs are.

    If you hover your mouse over the generated image, you’ll see in the top left corner an 🛈 symbol. If you click it, you’ll see this:

    This is all the metadata you need to recreate the image, as these are the orders passed to the model. To replicate the result, just paste this into the prompt and, this time, remove all the styles and optional options (this varies depending on the generator you use; in this example it is just setting Art Style to “No Style” and Art Style Mixing to “No Mix”).

    By doing this, you may get now something like this:

    Notice that the output is very similar, albeit not a carbon copy of the original.

    Again, this is pretty much the “caveman” way of doing it, and yes, it is possible to implement this pipeline in a generator, but I think that would be overkill when all that is required is copying and pasting the orders into a plain .txt.

    Hope that helps though!


  • I don’t know why anyone would use Reddit, personally, I’ve never found anything of value there nor a good solution for any problem on any topic. 🤣

    Jokes aside, I get the problem now, but for some reason I can’t replicate it. This is probably because I’m locked to an old PC and don’t have a working phone that can handle webpages, so I’ll ask you to be a bit patient with me on this one, since on my end a quick test looks like this:

    Again, this is a skill issue on my side. Now, if this is recent and the code was updated, then please try this version I made a while ago to deal with some of the new LLM’s unexpected behavior. You should not notice any meaningful difference between using it and the canon AI Chat.

    If that doesn’t work, then I suspect that in AI Chat the culprit is now Line 849, which reads as follows:

    {match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#00539b, #4eb5f7);"},
    

    This is my wild guess, since testing the HEX values, these are the only ones that are blue. So changing it to:

    {match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#000000, #ffffff);"},
    

    should do the same as the method described below for AI RPG.
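    Just to make it clearer what that rule targets, here is a quick sketch (mine, not from the AI Chat source): the regex grabs any double-quoted run that follows whitespace or the start of the text, which is exactly the dialogue that gets tinted blue.

```javascript
// The quote-matching regex from the style rule quoted above.
const quoteRule = /(\s|^)["“][^"]+?["”]/g;

// Narration stays unmatched; only the quoted dialogue is captured
// (including the leading whitespace from the (\s|^) group).
const sample = 'He smiled. "Hello there," she said.';
console.log(sample.match(quoteRule)); // → [' "Hello there,"']
```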

    I’ll try to see how hard it is to implement a “toggle”, but I’d ask you for some patience, as I’m going in blind on this one since the hardware I have doesn’t let me replicate the issue. If by some miracle the link I gave is more than enough, please confirm so I don’t waste time implementing a button for no purpose. 😅
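    For what it’s worth, here is a minimal sketch of what I have in mind for the toggle (the function names and wiring are my own assumptions, not code from AI Chat): a helper that picks the quote color from dark mode plus an on/off flag, feeding the same CSS variable the current code sets.

```javascript
// Hypothetical helper (names are mine): returns the quote color for a mode.
// The blue HEX pair is what AI Chat currently hardcodes; the black/white
// pair makes quotes blend in with plain text.
function quoteColor(darkMode, colored) {
  if (!colored) return darkMode ? "#ffffff" : "#000000";
  return darkMode ? "#4eb5f7" : "#00539b";
}

// In the page, a toggle would just re-set the existing CSS variable:
function applyQuoteColor(darkMode, colored) {
  document.querySelector(':root').style.setProperty(
    '--text-style-rule-quote-color', quoteColor(darkMode, colored));
}
```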

    Again, sorry for not having a foolproof solution yet.



  • Are you sure this is in AI Chat? I checked, and the text is still gray as always under any format, unless I’m using an old link. If so, could you post the link and an image of the problem?

    I do know that AI RPG has had the blue text for quite a while, and if that’s the one you are referring to, here is the edited version with no blue text, and here is how to achieve it:

    On the HTML side of the code, you’ll notice that Line 59 reads:

    {match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#00539b, #4eb5f7); font-style:italic;"},
    

    And Line 72 reads:

    document.querySelector(':root').style.setProperty(`--text-style-rule-quote-color`, darkMode ? "#4eb5f7" : "#00539b");
    

    Those two control the colors of the quoted text. All you need to do is change the HEX values to the colors you want (the first for light mode, the second for dark mode).
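    For example, if you wanted quotes to look like plain text (black in light mode, white in dark mode), the two edited lines would read like this (just a fragment of the file, not standalone code; pick whatever HEX values you like):

```javascript
// Line 59, with both HEX pairs swapped to black/white:
{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#000000, #ffffff); font-style:italic;"},

// Line 72, matching the same pair:
document.querySelector(':root').style.setProperty(`--text-style-rule-quote-color`, darkMode ? "#ffffff" : "#000000");
```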

    Here is how it looks after the change; again, ideally you’d edit this to whatever style you want:

    Hope that helps!