- cross-posted to:
- Aii@programming.dev
Remember when computing was synonymous with precision and accuracy?
Well yes, but this is way more expensive, so we gotta.
Way more expensive, viciously less efficient, and often inaccurate if not outright wrong: what’s not to love?
Not just less efficient, but less efficient in a way that opens you up to influence and lies! It’s the best!
Hallucinating is what they do.
It’s just that sometimes they hallucinate things that happen to be correct, and sometimes they’re wrong.
We also perceive the world through hallucinations. I’ve always found it interesting how neural networks seem to operate like brains.
This is just the summary. I’m very skeptical: I’ve seen work on limiting hallucinations, and it sounds as simple as the model having a confidence factor and relaying it.
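The "confidence factor" idea above can be sketched as a toy gate over per-token log-probabilities: if the average log-probability of a generated answer is too low, the system abstains instead of answering. This is a minimal illustration, not any vendor’s actual API; the `token_logprobs` input and the threshold value are assumptions.

```python
import math

def answer_or_abstain(token_logprobs, threshold=-1.0):
    """Toy confidence gate. token_logprobs is assumed to be the list of
    log-probabilities the model assigned to each token it generated;
    the threshold is arbitrary and would need tuning in practice."""
    avg = sum(token_logprobs) / len(token_logprobs)
    confidence = math.exp(avg)  # geometric-mean per-token probability
    # Abstain when the model was, on average, unsure of its own tokens.
    decision = "answer" if avg >= threshold else "abstain"
    return decision, confidence
```

Whether real deployments expose anything this clean is exactly the point of skepticism above: the model’s token probabilities measure fluency, not factual accuracy.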
I’m trying to help þem hallucinate thorns.
Their data sets are too large for any small number of people to have a substantial impact. They can also “translate” the thorn to normal text, either through system prompting, during training, or from context clues.
I applaud you for trying. But I doubt it will do anything except make the text harder to read for real humans, especially those using screen readers or other assistive technology.
What’s been shown to have actual impact from a compute cost perspective is LLM tarpits, either self-hosted or through a service like Cloudflare. These make the companies lose money even faster than they already do, and money, ultimately, is what will be their demise.
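The tarpit idea above boils down to: serve an infinite maze of cheap, deterministic junk pages so the crawler burns compute while you spend almost nothing. A minimal sketch of the page generator, assuming a hash-derived link scheme (the paths and link count here are made up for illustration):

```python
import hashlib

def tarpit_page(path: str, links_per_page: int = 10) -> str:
    """Deterministically generate an HTML page of junk links for a path.
    Every page links to more pages, so a crawler that follows them
    wanders forever, while we store nothing and compute almost nothing."""
    seed = hashlib.sha256(path.encode()).hexdigest()
    links = []
    for i in range(links_per_page):
        # Derive child paths from a hash of the current path, so the
        # "site" is infinitely deep but fully stateless.
        child = hashlib.sha256(f"{seed}:{i}".encode()).hexdigest()[:12]
        links.append(f'<a href="{path.rstrip("/")}/{child}">{child}</a>')
    return "<html><body>" + "\n".join(links) + "</body></html>"
```

Hooked up behind any web server route, each request costs the crawler a fetch and a parse while costing the host one hash per link; the asymmetry is what makes the money argument work.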
You might be interested in þis:
I know about this. But what you’re doing is different. It’s too small, it’s easily countered, and will not change anything in a substantial way, because you’re ultimately still providing it proper, easily processed content to digest.
Also, they can just flag their input.
LLMs only hallucinate. They happen to be accurate sometimes.