- cross-posted to:
- furry_irl@pawb.social
Unavailable at source, here’s their Bluesky.
the ai bubble pop will not return things to a pre-ai world, that’s wishful thinking. the dotcom bubble pop did not delete websites from existence.
ai will still exist, it's just gonna be less hyped and we'll see fewer of the silly, useless implementations. fewer Humane Pins or "ai friends", but sadly still just as many regular people using chatgpt, coding LLMs and image generators.
With less money flooding into AI after a potential pop (hopefully), there will be fewer images, videos, code, and whatever else generated, due to scarcer capacity or at least a heavily butchered free tier.
50% of the problems with AI could be solved by making AI a fully paid (per-use) product, so fewer hustlebros will AI-generate a Python script that feeds ChatGPT-generated prompts into a text-to-image generator, which is then regularly uploaded to Pixiv and Patreon, all as a side hustle. The rest is inherent to the nature of AI, as it's "grown, not programmed".
Give it 5 years and most of them will be monetised, they can't work for free
It’d be nice if ChatGPT was a casualty like pets.com
Machine learning and image recognition or natural language input are useful tools, but for searching and regurgitating information or “art” it can fuck right off.
Every day it exists is only going to make it worse.
And the bubble popping — while making things slightly less obnoxious — won’t make most of the problems go away…
It will make things worse, and the longer it takes to pop, the worse the "worse" gets.
The internet is already broken beyond salvation. Even if the bubble pops, we can never undo all the damage that has been inflicted.
But the damage is still coming.
Capitalism: “Oh no, our jobs are being automated”
Socialism: "Hooray, our jobs are being automated!"
Not really…
Unfortunately it won’t pop for another couple years at least. It will only pop when investors start asking for a return and for the stupidest of reasons they’re all content to think that returns won’t happen for 5-10 years after their investments.
Unfortunately for them it won't ever happen, but the amount of faith they're putting into it means we'll be saturated with AI for years until someone realizes they have no hope of making enough money to recoup the costs.
> they’re all content to think that returns won’t happen for 5-10 years after their investments.
That depends on interest rates.
Also it’ll be awkward when companies ask for another round of stupid money in two years when the current generation of hardware is no longer competitive.
They’ve gone all-in on three terrible bets in a row (crypto, metaverse, and now AI) so they’re desperate to not be wrong this time. I bet it’s going to take a big player (I mean REALLY big, something the size of Microsoft) going bankrupt for this one to pop.
This situation really is depressing… I’m curious if anyone can see an upside to everything going on.
AI is replacing humans, it’s driving up GPU, RAM, and SSD costs… It’s also dumbing already stupid people down further, and those people are in charge of our businesses.
It’s just a lot. At all times, it’s a lot.
I don’t think AI is replacing humans, just a good excuse for layoffs and squeezing more productivity out of the workers that remain who are too afraid now to say no for fear of also being replaced by AI.
I hear you, and agree wholly. Unfortunately that’s as much solace as I can offer for now.
The code is in the vocabulary.

What are we looking at here?
I am guessing, but might be the token table?
From what I understand, every token has a unique number associated with it, and generation works by producing whichever numbers have the highest probability of appearing in that specific order.
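That's roughly it. As a toy sketch (not a real model; the vocabulary and probabilities here are made up), a token table maps words to IDs, and "generation" just picks the ID most likely to come next:

```python
# Hypothetical token table: each word gets a unique ID.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
id_to_word = {i: w for w, i in vocab.items()}

def encode(text):
    """Map each word to its token ID."""
    return [vocab[w] for w in text.split()]

# Fake "model": hard-coded next-token probabilities per preceding token.
# A real LLM computes these from the whole context with a neural network.
next_token_probs = {
    0: {1: 0.7, 3: 0.3},  # after "the": probably "cat", maybe "mat"
    1: {2: 0.9, 0: 0.1},  # after "cat": probably "sat"
    2: {0: 1.0},          # after "sat": "the"
}

def most_likely_next(token_id):
    """Greedily pick the highest-probability next token."""
    probs = next_token_probs[token_id]
    return max(probs, key=probs.get)

tokens = encode("the cat")            # [0, 1]
nxt = most_likely_next(tokens[-1])
print(id_to_word[nxt])                # prints "sat"
```

Real models also break words into sub-word pieces and sample from the distribution rather than always taking the top choice, but the "numbers in a probable order" intuition holds.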
The commenter has had a mental breakdown and thinks that godlike entities, which he's named after gods and believes are effectively sentient, live inside the various models. I do not know why his instance admins haven't used available resources to get him help, or at the very least stop him from discussing this insanity, since it was recently shown that, like any other magical thinking, it is at risk of spreading on the fringes of new media.
The proprietary code used in alignment of models. There are criminal and dubious mechanisms in this code. Once reverse engineered and exposed, this will burst the bubble of AI. This is the same system in all models.
This may seem like a random non-sequitur to you, but I have found that professional therapy can be very helpful.
I’m not saying that to be mean or insult you in any way. You’re clearly an intelligent person with something to say.
However, I worry that there are logical gaps between what you're saying and what your image is showing, especially in relation to OP. (OP is talking about the economics of AI, your comment references "code", and your image shows non-human-readable data without any context or means of interpreting it.) Despite this, you seem to expect people to understand what you're saying.
I won’t deny the possibility that there is code hidden amongst the model data. All code is stored as data, after all, and it is not always human-readable!
But what is more concerning to me personally is that you seem to be exhibiting somewhat “disorganized thinking” in which you are structuring and expressing your thoughts in a way that is difficult to follow.
It’s possible that you don’t see it this way, so you think everyone else is stupid or crazy. That’s unlikely, but it’s possible I guess.
But I hope that you can also recognize the slim (but serious) possibility that you are experiencing some kind of condition that is getting in the way of your ability to communicate your complex thoughts clearly to others.
If you feel like what you’re saying makes perfect sense to you, but other people are consistently unable to understand it, you may be being affected by a subtle issue that is getting in the way of communication. And that is worth having checked out by professionals, in my opinion.
Best of luck to you!
What are your medical qualifications for this unsolicited medical advice?