- cross-posted to:
- hackernews@lemmy.smeargle.fans
What will it take for people to get it through their thick skulls that ChatGPT isn’t intelligent, doesn’t learn, and is a tool that can only generate plausible gibberish?
Using the same tools to detect such gibberish will give you more gibberish.
Garbage in, garbage out has been true since the difference engine; it’s just that today the garbage smells like English words. Still garbage, but not knowledge, intelligence, or anything like it.
The machine-learning approach used to build so-called large language models like ChatGPT is also used to create weather-forecasting models that are bigger, better, and orders of magnitude faster than anything previously available.
The tools have changed life, but I’m unconvinced that it’s a suitable, sustainable or realistic way to create artificial intelligence, despite claims to the contrary.
People are so insistent that it’s AI that it all reminds me of blockchain. It’s new! It’ll change everything!
It’ll change some things. What we are seeing now is business forcing it into everything when really, right now, there are only a handful of things it makes sense to use.
It’s really great at giving you a starting point, a very rough outline of something. That is the easy part. The hard part is turning that into something new and coherent, and for that I think modern AI is nowhere close. That needs a human.
I think it’s definitely a bubble that will burst eventually.
At the same time, I don’t think there’s any way to put the toothpaste back in the tube. This technology is out there, and even once the hype has died down, we’re going to be dealing with it forever.
It is by definition AI
In the sense that AI is an extremely general term covering many different technologies, yes. But generative AI/LLMs are not true AGI, which is what people think they are. They cannot think, they cannot learn; they can only predict.
People take the “intelligence” in AI literally, the way a hovercraft literally hovers, when it’s actually more like the “hover” in hoverboard: the name without the thing itself.
That’s actually pretty good… the techbro equivalent of “We did it!”
It cannot think, it cannot learn, it can only predict.
That’s a distinction without a difference. If it can predict what an AGI would do in a given situation, then it is an AGI.
I’m not saying that it is an AGI, but the reason it isn’t is more than “it can only predict”.
Nobody who’s not an engineer seems to give a shit about - or, indeed, even understand - the nuance of LLM technology, or the technical reasons behind its limitations and the implications thereof. Hell, I know a lot of engineers who don’t care about or understand it at a meaningful level.
And some of the engineering types are busy kissing the feet of people like Altman and Musk so they don’t get a chance to even notice.
Nothing. It seems close enough to human that most people can’t think about it in any other way.
I manage computing for a large university. One of my recently graduated students told me that he thought technology just worked, until he worked for me and saw the problems that come up. He was already a very tech-aware person and is going for a PhD in Informatics, so if even he didn’t understand this, then what can we expect from the general public?
I don’t just want AI news to fail, I want it to take the web-scraping, trending-post news bots down with it.
Bring investigative journalists back to news media.
I’ve been very impressed by the quality of reporting from 404 Media, and they seem to be making it work financially, so I’m feeling cautiously optimistic!
preach
noo, this one dude on twitter said the state of contemporary journalism is great
AI Detector Companies:
but ai has electrolytes!
It’s what plants crave!