Seen lots of people in the comments section complaining about Chinese characters, but I got these:
doubling over until his cap nearly slips off. "ableView finally catching on
The Magic Sphere hums on a pedestal woven from licorice ropes, its surface swirling with imprisoned starlight “…/static/placeholder.png”…/static/placeholder.png"and fractured reflections
Why was the old model even replaced in the first place?
At the moment, there just seem to be a few problems, which can happen. Take a look at my comment on this post. It all happened in less than 24 hours: https://lemmy.world/post/41181133
It seems this model is clearly showing its flaws. As of yet, no one has figured out exactly what is causing this; it could be a glitch or a bug. I doubt even justpassing can explain this.
… and I took this personally. 😆
Jokes aside, what we all are seeing is something akin to the demonstration in this video:
https://www.youtube.com/shorts/WP5_XJY_P0Q
One theory of why this happens: the model can't find a proper way to extrapolate a very large input coherently, so it picks random connections, causing it to "speak in tongues".
In the video, the exploit is to make the model think it presented something outlandish as legitimate advice, so it "short-circuits" and just garbles random data.
In the case of perchance… I'd be lying if I said I knew exactly why this now happens often when in the past it was rare (mind you, Llama exhibited this too in very niche cases at 20Mb+ of input, and the current model at release at around 2Mb+, I think; don't quote me on that, I'm just going from memory). My theory is that the model is being "hyper-trained", so it fixates only and exclusively on the new training data and not the original data bank it had from the factory. Again, this is just my theory, as the opposite could be true: if this model is a "clean" one but with a default language that is not English, it may simply be struggling with large inputs. Then again, I'd bet more on the former, given how this model behaved at release compared to today.
Luckily, the workaround is the same as how you'd cause this artificially: edit the "tongue speak" out and carry on until the model can no longer link random parts of its database. It is extremely annoying, but not impossible to deal with, unless it gets worse and all inputs cause this. In that case the model would be broken beyond repair, and I hope we don't get there.
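For what it's worth, if you find yourself doing that edit by hand over and over, it can be scripted. A rough sketch (the function name and the exact Unicode ranges are my own choices, purely illustrative, not anything perchance actually does):

```python
import re

# Matches runs of CJK ideographs and fullwidth/CJK punctuation such as
# the "〖" from the bug thread. Ranges are illustrative, not exhaustive.
CJK_RUN = re.compile(r"[\u3000-\u303f\u4e00-\u9fff\uff00-\uffef]+")

def strip_tongue_speak(text: str) -> str:
    """Remove stray CJK runs, then collapse the double spaces left behind."""
    cleaned = CJK_RUN.sub("", text)
    return re.sub(r" {2,}", " ", cleaned).strip()
```

Same idea as editing it out manually: delete the garbage, keep the coherent parts, and continue from there.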
Two days ago the dev said in another thread (AI text bug "〖") that there was a bug in recent changes they made, so it's probable further patching was done. So there.
As for the question:
Why was the old model even replaced in the first place?
Ah, the sweet whispers of nostalgia mingling with the smell of roasting meats, incense, and maybe, just maybe, freshly baked bread, a stark contrast to the candles casting shadows on the tapestry, a grim tableau of cobblestone streets. But let’s not get ahead of ourselves… her voice, which was a soothing balm, grew determined as her eyes narrowed to slits, eyes gleaming with mischief and never leaving yours, leaning in to whisper: “let me in”.
( ~_~)
Look, I get what you mean, but I used it quite frequently for nigh a year and the old model would absolutely, 100% shit the bed at any task that was even remotely complex, which made prompt engineering an outright fool’s errand at times, so just a hunch, but I presume that had something to do with it.
Don’t know how to put this politely: thing was dumb as a fook’n rock, is what I’m trying to say.
Now, it did have the ability to regurgitate cheap literary cliché, which I found extremely charming in case you can’t tell, but the issues it had brought complaints and memes galore, so perhaps there’s a bit too much of the ol’ rose-tinted peepers going on here, eh.
OK, peace.
I can’t agree with anything you said here.
Ah, the sweet whispers of nostalgia mingling with the smell of roasting meats, incense, and maybe, just maybe, freshly baked bread, a stark contrast to the candles casting shadows on the tapestry, a grim tableau of cobblestone streets. But let’s not get ahead of ourselves… her voice, which was a soothing balm, grew determined as her eyes narrowed to slits, eyes gleaming with mischief and never leaving yours, leaning in to whisper: “let me in”.
Opposed to our current model? “Her dark eyes narrow—knuckles whitening—plum blossom intensifying—as her nails dig moon crescents into his flesh.”
Suffers from the exact same issues, if not worse.
Look, I get what you mean, but I used it quite frequently for nigh a year and the old model would absolutely, 100% shit the bed at any task that was even remotely complex, which made prompt engineering an outright fool’s errand at times, so just a hunch, but I presume that had something to do with it.
Don’t know how to put this politely: thing was dumb as a fook’n rock, is what I’m trying to say.
I really never found this to be the case. If anything, the new model is worse at writing out elaborate texts because it cuts off sooner.
Honestly, it might just be to save money, because the new model seems briefer (I don't blame the dev, and I super appreciate this amazing free tool), but don't stand here and tell us it's somehow better.
I have to agree here. The new model is only marginally better in some areas. Yet it still suffers from:
- Overly metaphorical descriptions of useless, non-plot-driving environmental objects and phenomena. Still seems to have a hardcoded obsession with smells (even in dialogue).
- Obsession with stage directions. (Knuckles whiten etc.)
- Still has a limited pool of names, referentials, examples.
I also found quite a few problems with dialogue generation:
- Tweeness and sentimentality: "Remember when Grandma (always referred to as a proper noun, regardless of relationship to the speaker) used to bring us freshly baked cookies to school?"
- Gaslighting and negation: I don't know if this is a personal problem, but the characters gaslight constantly (I didn't tell them to) and negate my statements with "Bullshit". Gets very agitating after a while.
- Tries way too hard to be funny and falls completely flat: "This place smells like regret and stale beer." (Two problems in one: the obsession with smells and an overused AI trope.) Other than that, the dialogue is slightly better and less all-out than the previous model's.
Outside knowledge is satisfactory but not fantastic.
So yeah, it’s better and perfectly fine for a free tool, but not nearly up to ChatGPT levels.
Suffers from the exact same issues, if not worse.
Prediction engines are simply lacking when it comes to dramatic writing, and that's just that: you're going to get verbose, formulaic, sensory-heavy, unoriginal, uninspired bullshit, because what the model is doing is more like autocomplete and a lot less like conscious narrative choices.
So if you hyperfixate on the chat and story generators, as if that was all an LLM is for? Well, tough luck, I guess.
But there is a huge difference in what you can do via prompting: the new model's capability to understand complex prompts is definitely a huge improvement, and I'm gonna stand here and hammer you all over the head with that. It is better, you just ain't looking in the right place.



