When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.
At university, I faked a paper on economics (not actually my branch of study, but easy to fake) and put it on the shelf in their library. It was filled with nonsense formulas that, if one took the time and actually solved the equations properly, would all produce the same number as a result: 19920401 (year of publication, April Fools’ Day). I actually got two requests from people who wanted to use my paper as a basis for their thesis.
Congratulations! You are now a practicing economist. This is exactly how that field works.
Economics is just applied psychology.
It really isn’t even that.
The closest historical equivalent of an economic professional is the Haruspex.
You’re all thinking of business. Economics is an empirical field of study that is based on science.
Downvotes while being right, the hive mind churns I guess. Man are you people idiotic.
Since when is a Nobel prize winner admitting they were wrong and did not know enough any sort of “proof” that a whole discipline is not real? Get off your high horse…
I studied economics; none of it was purely psychology, and it was quite intense mathematically. It seems people conflate the discipline with business courses and assume it is just buzzwords and wishful thinking, when it is based on empirical data and experiments. Just look at fields like applied data science or econometrics…
Don’t insult me with the e-word!
How did you respond?
I told them to actually solve the equations and think about the results.
Guys, can we please call it LLM and not a vague advertising term that changes its meaning on a whim?
Wouldn’t it be OCR in this case? At least the scanning?
Yes, but the LLM does the writing. Someone probably carelessly copy pasta’d some text from OCR.
Fair enough, though another possibility I see is that the automated training process for the LLMs used OCR on those papers (or an already existing text version on the internet was produced with bad OCR), and the papers with the mashed-together word were then written partially or fully by an LLM.
Either way, the blanket term “AI” sucks and it’s honestly getting kind of annoying. Same with how much LLMs are used.
For some weird reason, I don’t see AI amp modelling being advertised despite neural amp modellers existing. However, the very technology that was supposed to replace guitarists (Suno, etc.) is marketed as AI.
I think that’s because in the first case, the amp modeller is only replacing a piece of hardware or software they already have. It doesn’t do anything particularly “intelligent” from the perspective of the user, so I don’t think using “AI” in the marketing campaign would be very effective. LLMs and photo generators have made such a big splash in the popular consciousness that people associate AI with generative processes, and other applications leave them asking, “where’s the intelligent part?”
In the second case, it’s replacing the human. The generative behaviors match people’s expectations while record label and streaming company MBAs cream their pants at the thought of being able to pay artists even less.
Is there anything like suno that can be locally hosted?
Scientists who write their papers with an LLM should get a lifetime ban from publishing papers.
I played around with ChatGPT to see if it could actually improve my writing. (I’ve been writing for decades.)
I was immediately impressed by how “personable” these things are: they interpret your writing and detect subtle things you are trying to convey, so that part was interesting. I was also impressed by how good it is at improving grammar and helping “join” passages, themes and plot points. It has the advantage of seeing the entire piece simultaneously, so it can make broad edits to the story flow, which could potentially save a writer days or weeks of rewriting.
Now that the good is out of the way, I also tried to see how well it could just write, using my prompts and writing style, with scenes that I arranged for it to describe. And I can safely say that we have created the ultimate “Averaging Machine.”
By definition LLMs are designed to always find the most probable answers to queries, so this makes sense. It has consumed and distilled vast sums of human knowledge and writing, but it doesn’t use that material to synthesize or find inspiration, or do what humans do, which is take existing ideas and build upon them. No, what it does is always find the most average path. And as a result, the writing is supremely average. It’s so plain and unexciting to read that it’s actually impressive.
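(If you want the most hand-wavy possible picture of what I mean, here’s a toy sketch of next-word selection. The words and numbers are completely made up and this is not how any particular model is actually wired; it’s just the “pick the most probable thing” idea in miniature.)

```python
import math
import random

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {word: math.exp(s - m) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

# Made-up scores for candidate next words after "The old house was".
scores = {"quiet": 2.1, "dark": 1.9, "empty": 1.5, "breathing": -0.5}
probs = softmax(scores)

# Greedy decoding: always take the single most probable word.
# Do this at every step and you get the safest, most average sentence.
most_average = max(probs, key=probs.get)

# Sampling with a higher "temperature" flattens the distribution, so the
# odd, interesting word ("breathing") gets picked more often.
def sample(scores, temperature=1.0):
    scaled = {word: s / temperature for word, s in scores.items()}
    p = softmax(scaled)
    return random.choices(list(p), weights=list(p.values()), k=1)[0]

print(most_average)         # "quiet" -- the most average choice
print(sample(scores, 1.5))  # occasionally something less expected
```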
All of this is fine, it’s still something new we didn’t have a few years ago, neat, right? Well my worry is that as more and more people use this, more and more people are going to be exposed to this “averaging” tool and it will influence their writing, and we are going to see a whole generation of writers who write the most cardboard, stilted, generic works we’ve ever seen.
And I am saying this from experience. I was there when people first started using the internet to roleplay, making characters and scenes and free-form writing as groups. It was wildly fun, but most of the people involved were not writers; many discovered literature for the first time there. It’s what led to a sharp increase in book-reading, and suddenly there were giant bookstores like Barnes & Noble popping up on every corner. They were kids just doing their best, but that charming, terrible narration became a social standard. It’s why there are so many atrocious dialogue scenes in shows and movies lately; I can draw a straight line back to where kids learned to write in the ’90s. And what’s coming next is going to harm human creativity and inspiration in ways I can’t even predict.
I am a young person who doesn’t read recreationally, and I avoid writing wherever I can. Thank you for sharing your insight as well as sparking an interesting discussion in this thread.
Reading is incredibly important for mental development: it gives your brain the language tools to create abstractions of the world around you and then use those abstractions to change perspectives, communicate ideas and understand your own thoughts and feelings.
It’s never too late to start exercising that muscle, and it really is a muscle, a lot of people have a hard time getting started reading later in life because they simply don’t have the practice in forming words into images and scenes… but think about how strong that makes your brain when you can form text into whole vivid worlds, when you can create images and people and words and situations in your mind to explore the universe around you and invent simulated situations with more accuracy… I cannot scream enough how critically important it is for us to exercise this muscle, I hope you keep looking for things that spark your interest just enough that you get a foothold in reading and writing :)
Yup, it’s something I myself recently started to realise and have been forcing myself to read things that actually interest me.
Back in elementary and middle school, on the other hand, every two months we had a specific book we had to read, and we would then discuss it in class and be graded on our input.
Reading books and writing essays got cemented in my mind as a boring chore that was forced upon me. It took years before it even occurred to me that reading might be a fun activity, and a couple more before I actively started trying to read again. It’s difficult to break away from the mould I was set in during my childhood, but I’m slowly chipping away at it.
Children SHOULD read, but how can we get them to WANT to read?
I can confirm that a lot of students’ writing has become “averaged”, and it seems to have gotten worse this semester. I am not talking about students who clearly used an AI tool; just by proximity or osmosis the writing feels “cardboardy”. Devoid of passion or human mistakes.
This is how I was taught to write up to high school. Very “professional”, persuasive essays, arguing in favor of something or against it “objectively”. (The assignment seemed to dictate which side I could be on, LOL.) Limit humor and “emotional speech.” Cardboard.
I was taken aback in my first political science course at the local community college, where I was instructed to convey my honest arguments about a book assignment on polarization in U.S. politics. “Whether you think it’s fantastic or you think it sucks, just make a good case for your opinion.” Wait, what?! I get to write like a person?!
I was even more shocked when I got a high mark for reading the first few chapters, skimming the rest, and truthfully summarizing by saying it was plain that the author just kept repeating their main point for like 5 more chapters so they could publish a book, and it stopped being worth the time as that poor horse was already dead by the 3rd chapter.
That was when it hit me: writing really was about communication, not just information.
I worry about that these days: that this realization won’t come to most, and they’ll use these AI tools, or be influenced by them, to simply “convey information” that nobody wants to read, get their 85%, and breeze through the rest of their MBA, not caring about what any of this is actually for, or about what a beautiful miracle writing truly is to humanity.
That isn’t what I mean by cardboard. Persuasive, research, and argumentative essays are taught to be written the way you described. They are meant to be that way. But even then, the essays I have read and graded still have this cardboard feel. I have read plenty of research essays where you can feel the emotion, you can surmise the position and most of all passion of the author. This passion and the delicate picking of words and phrases are not there. It is “averaged”.
I think we’re saying a similar thing, but I understand your point better.
I have read plenty of research essays where you can feel the emotion, you can surmise the position and most of all passion of the author.
Exactly! That’s what I mean. There are so many subjects I expected to be incredibly dry, but the writing reminded me it was written by a person who obviously cares about other people reading the text. One can communicate any subject without giving up their soul.
(I am always surprised, but I find this in programming books often, haha.)
But that’s what I meant by cardboard as well, I think we might be in agreement:
We expect to see a lot more writing that comes across like “This is what writing should look like, right?”
Writing that understands words, and “averages” the most likely way to convey information or fill a requirement, but doesn’t know how to wield language as an art to share ideas with another person.
the writing reminded me it was written by a person who obviously cares about other people reading the text.
This is what’s missing from nearly every online argument about AI art that I read: there are rarely people who make the actual argument that the whole purpose of art and writing is to share an experience, to give someone else the experience that the author or artist is feeling.
Even if I look at a really bad poem or a terrible drawing, if the artist was really doing their best to share the image in their head or the feeling they were having when they wrote it, it will be 1000X more significant and poignant than a machine that crushes the efforts of thousands of people together and averages them out.
Sure there are billions of people who are content with looking at a cool image and think no deeper of it and are even annoyed at criticism of AI work, but on some level I think everyone prefers content made by another human trying to share something.
I know exactly what you mean, I still frequent a lot of writing communities and that “cardboard” feeling is spreading. Most young people who have an interest in writing are basically sponges for absorbing how their peers write, so it’s tragic when their peers are machines designed to produce advertiser-friendly ad-copy.
tbf school-goers nowadays are mostly taught to not make mistakes.
I do agree with your “averaging machine” argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.
Your conjecture that bad writing is due to roleplaying on the early internet is a bit more… speculative. Lacking any numbers comparing writing trends over time I don’t think one can draw such a conclusion.
Large Discord groups and forums are still the proving ground where new, young writers get started crafting their prose, and I have watched it for over 30 years. It has changed, dramatically, and I would be remiss to say I have no idea where the change came from when I’ve also seen the patterns.
Yes, it’s entirely anecdotal; I have no intention of making a scientific argument. But I’m also not the only one worried about the influence of LLMs on creators. It’s already butchering the traditional artistic world, for the very basic reason that 14-year-old Mindy McCallister, who has a crush on werewolves, would at one time have taught herself to draw terrible, atrocious furry art on lined notebook paper, with hearts and a self-inserted picture of herself in a wedding dress. This is where we all get started (not specifically werewolf romance, but you get the idea) with art and drawing and digital art, before learning to refine our craft and get better and better at self-expression. But we now have a shortcut where you can skip ALL of that process and just have your snarling lupine BF generated for you within seconds. Setting aside the controversy over whether it’s real art or not, what it’s doing is taking away the formative process from millions of potential artists.
I do agree with your “averaging machine” argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.
For image generation models I think a good analogy is to say it’s not drawing, but rather sculpting - it starts with a big block of white noise and then takes away all the parts that don’t look like the prompt. Iterate a few times until the result is mostly stable (that is it can’t make the input look much more like the prompt than it already does). It’s why you can get radically different images from the same prompt - the starting block of white noise is different, so which parts of that noise look most prompt-like and so get emphasized are going to be different.
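(Here’s a very stripped-down sketch of that loop, just to make the sculpting picture concrete. `denoise` is a stand-in for a trained model and the whole thing is illustrative; real samplers like DDPM/DDIM add noise schedules and a lot more machinery on top of this idea.)

```python
import numpy as np

def toy_diffusion_sample(denoise, prompt, steps=50, seed=0):
    """Sketch of the 'sculpting' loop: start from noise, then repeatedly
    carve away whatever the model says doesn't look like the prompt."""
    rng = np.random.default_rng(seed)
    # The block of marble: pure noise. A different seed gives a different
    # block, which is why the same prompt can yield wildly different images.
    image = rng.standard_normal((64, 64, 3))
    for t in reversed(range(steps)):
        predicted_noise = denoise(image, prompt, t)      # "what doesn't fit?"
        image = image - (1.0 / steps) * predicted_noise  # chip a bit away
    return image

# Dummy stand-in so the sketch actually runs; a real denoiser is a trained
# neural network, not a one-liner.
dummy_denoise = lambda img, prompt, t: img * 0.1
result = toy_diffusion_sample(dummy_denoise, prompt="a castle at sunset")
```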
BuT tHE HuMAn BrAin Is A cOmPuTeR.
Edit: people who say this are vegetative lifeforms.
Vegetative electron microscopes!
It immediately demonstrates a lack of both care and understanding of the scientific process.
I recently reviewed a paper for a prestigious journal. The paper was clearly from an academic paper mill. It was horrible. They had a small experimental engine, and they wrote 10 papers about it. Results were all normalized and relative, key test conditions weren’t even mentioned, everything was described in general terms… and I couldn’t even be sure the authors were real (Korean authors, the names are all Park, Kim and Lee). I hate where we’ve arrived in scientific publishing.
To be fair, scientific publishing has been terrible for years, a deeply flawed system at multiple levels. Maybe this is the push it needs to reevaluate itself into something better.
And to be even fairer, scientific reviewing hasn’t been any better. Back in my PhD days, I got a paper rejected from a prestigious conference for being both too simple and too complex, according to two different reviewers. The reviewer who argued “too simple” also gave an example of a task that supposedly couldn’t be achieved, which was clearly achievable.
Goes without saying, I’m not in academia anymore.
Startups, on the other hand, have people pursuing ideas that have been proven not to work. The better startups mostly just sell old innovations that do work.
People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.
People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.
Somehow I briefly got her and Pluckrose reversed in my mind, and was still kinda nodding along.
If you don’t know who I mean, Pluckrose and two others produced a bunch of hoax papers (likening themselves to the Sokal affair) of which 4 were published and 3 were accepted but hadn’t been published, 4 were told to revise and resubmit and one was under review at the point they were revealed. 9 were rejected, a bit less than half the total (which included both the papers on autoethnography). The idea was to float papers that were either absurd or kinda horrible like a study supporting reducing homophobia and transphobia in straight cis men by pegging them (was published in Sexuality & Culture) or one that was just a rewrite of a section of Mein Kampf as a feminist text (was accepted by Affilia but not yet published when the hoax was revealed).
My personal favorite of the accepted papers was “When the Joke Is on You: A Feminist Perspective on How Positionality Influences Satire” just because of how ballsy it is to spell out what you are doing so obviously in the title. It was accepted by Hypatia but hadn’t been published yet when the hoax was revealed.
People shit on Hossenfelder much more for her non-academic takes.
Her video on trans issues has made it very difficult to take her seriously as a thinker. The same types of manipulative half truths and tropes I see from TERFs pretending they have the “reasonable” view, while also spreading the hysteric media narrative about the kids getting transed.
I didn’t even see that. Just a few clips of her rants about other things she confidently knows nothing about, like a less incoherent Jordan Peterson.
She sucks when overextending her aura of expertise to domains she’s not good in (e.g. metaphysics, and especially panpsychism, which she profoundly misunderstands yet talks about self-assuredly). Her criticism of academia is good, but she reproduces some of that nonsense herself.
As someone who just looked at the Wikipedia article, I too am an expert in this field, unironically, because it’s woo woo nonsense.
Can you explain how you reached that conclusion? Since you’re a rigorous thinker, no doubt it would be trivial for you. After all, you’re notably up against Bertrand Russell, one of the writers of the first attempt to ground maths onto rigorous foundations, so since it only took you a few minutes to come to your conclusion, you must have a very powerful mind indeed. Explaining your reasoning would be as easy as breathing is for us the lesser-minded.
Aristotle believed in it too, along with the four humors and classical elements.
Doesn’t make his thoughts on rhetoric irrelevant, but those also don’t make his mystical solutions to problems he didn’t have the tools to solve correct.
That someone like Russell subscribed to a form of protopanpsychism is not proof that his position is right. It does indicate, on the other hand, that it could be a kind of metaphysical position that’s more serious than you believe it is, serious enough that vaguely recognizing a few words in a few sentences on Wikipedia is not enough to actually understand it. Not only that, but it’s had actual scientific productivity through ergonomics (e.g. “How the cockpit remembers its speed”), biology (biosemiotics), sociology (actor-network theory), and even arguably in physics through Ernst Mach and information theory.
No it doesn’t.
Hossenfelder is fine but tries to educate way outside her realm. Her cryptocurrency episode made me lose all respect for her.
Do you usually get to see the names of the authors you are reviewing papers of in a prestigious journal?
I try to avoid reviews, but the editor is a close friend of mine and I’m an expert on the topic. The manuscript was only missing the date.
It is worthwhile to note that the enzyme did not attack Norris of Leeds University; that would be tragic.
It is by no spores and examined!
This early draft for The Last of Us just gets weirder and weirder.
At least part of it was not known!
At least they’ve obtained exosporium in Clos. I know they’ve been working hard at it.
Another basic demonstration on why oversight by a human brain is necessary.
A system rooted in pattern recognition that cannot recognize the basic two-column format of published and printed research papers.
To be fair, the human brain is a pattern recognition system. It’s just that the AI developed thus far is shit.
Give it a few billion years.
Realistic timeline
Management would like to push up this timeline. Can you deliver by end of week?
My wife did not react kindly to that request when she was pregnant.
As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don’t exactly figure out how our brains work, we might be able to create something better.
The human brain is not a computer. It was a fun simile to make in the ’80s when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even the most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do in many different ways.
It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor.
Notably, neither of those two disciplines is computer science. Silicon computers are Turing complete. They can (given enough time and scratch space) compute everything that’s computable. The brain cannot be more powerful than that or you’d break causality itself: God can’t add 1 and 1 and get 3, and neither can God sort a list in fewer than O(n log n) comparisons. Both being Turing complete also means that they can emulate each other. It’s not a metaphor: it’s an equivalence. Computer scientists have trouble telling computers and humans apart just as topologists can’t distinguish between donuts and coffee mugs.
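(For anyone curious where that n log n claim comes from, it’s the standard decision-tree counting argument, nothing exotic:)

```latex
% A comparison sort must distinguish all n! possible input orderings.
% Its decision tree is binary, so a tree of height h has at most 2^h leaves:
\[
  2^{h} \ge n! \;\Longrightarrow\; h \ge \log_2(n!) = \Theta(n \log n)
\]
% (by Stirling's approximation), so no comparison-based sort can beat
% Omega(n log n) comparisons in the worst case -- silicon, carbon, or deity.
```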
Architecturally, sure, there’s a massive difference in hardware. Not carbon vs. silicon, but because our brains are nowhere close to being von Neumann machines. That doesn’t change anything about brains being computers, though.
There are, big picture, two obstacles to AGI: first, figuring out how the brain does what it does, and we know that current AI approaches aren’t sufficient; second, once we understand that, creating hardware that is even just a fraction as fast and efficient at executing, erm, itself as the brain is.
Neither of those two involves the question “is it even possible”. Of course it is. It’s quantum computing you should rather be sceptical about; it’s still up in the air whether asymptotic speedups over classical hardware are even physically possible (quantum states might get more fuzzy the more data you throw into a qubit, the universe might have a computational upper limit per unit volume, or such).
Notably, computer science is not neurology. Neither is equipped to meddle in the other’s field. If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers and engineers on brains. But they are not equivalent. Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing. And Turing completeness is irrelevant. The brain is not a Turing machine. It does not process tokens one at a time. Turing completeness is a technology term, it shares with Turing machines the name alone, as Turing’s philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.
If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers and engineers on brains.
Does not follow. Different architectures require different specialisations. One is research into something nature presents us with; the other (at least the engineering part) is creating something. Completely different fields. And btw the analytical tools neuroscientists have are not exactly stellar; that’s why they can’t understand microprocessors (the paper is tongue-in-cheek but also serious).
But they are not equivalent.
They are. If you doubt that, you do not understand computation. You can read up on Turing equivalence yourself.
Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing.
The fuck has “fast” to do with “complex”? Also, the mechanisms probably aren’t terribly complex; it’s how the different parts mesh together to give rise to a synergistic whole that creates the complexity. Also, I already addressed the distinction between “make things run” and “make them run fast”. A dog-slow AGI is still an AGI.
The brain is not a Turing machine. It does not process tokens one at a time.
And neither are microprocessors Turing machines. A thing does not need to be a Turing machine to be Turing complete.
Turing completeness is a technology term
Mathematical would be accurate.
it shares with Turing machines the name alone,
Nope, the Turing machine is one example of a Turing-complete system. That’s more than “shares a name”.
Turing’s philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.
You’re probably thinking of the Turing test. That doesn’t have anything to do with Turing machines, Turing equivalence, or Turing completeness, yes. Indeed, getting the Turing test involved and confused with the other three things is probably the reason why you wrote a whole paragraph of pure nonsense.
Re: quantum computing, we know quantum advantage is real for certain classes of problems, both theoretically (e.g. using Grover’s algorithm) and experimentally for toy problems like boson sampling. It’s looking like we’re past the threshold where we can do error correction, so now it’s a question of scaling. I’ve never heard anyone discuss a limit on computation per volume as applying to QC. We’re down to engineering problems, not physics, same as in your brain vs. computer case.
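(For concreteness, the textbook numbers for unstructured search over N items, which is where Grover’s provable advantage lives:)

```latex
% Query complexity of finding a marked item among N via a black-box oracle:
\[
  \text{classical: } \Theta(N) \text{ oracle queries}
  \qquad
  \text{Grover: } O\!\bigl(\sqrt{N}\bigr) \text{ oracle queries}
\]
% A quadratic speedup, provably optimal for this problem -- a real quantum
% advantage, though far short of exponential.
```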
From what I know, none of the systems that people have built come even close to testing the speedup: is error correction going to get harder and harder the larger the system is, the more you ask it to compute? It might not be the case, but quantum uncertainty is a thing, so it’s not baseless naysaying either.
Let me put on my tinfoil hat: Quantum physicists aren’t excited to talk about the possibility that the whole thing could be a dead end because that’s not how you get to do cool quantum experiments on VC money and it’s not like they aren’t doing valuable research, it’s just that it might be a giant money sink for the VCs which of course is also a net positive. Trying to break the limit might be the only way to test it, and that in turn might actually narrow things down in physics which is itching for experiments which can break the models because we know that they’re subtly wrong, just not how, data is needed to narrow things down.
Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.
I don’t get how the ethics of that are questionable. It’s not like they’re taking brains out of people and using them. It’s just cells that are not the same as a human brain. It’s like taking skin cells and using those for something. The brain is not just random neurons. It isn’t something special and magical.
We haven’t yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we create something conscious just for the sake of a few months’ worth of calculations.
There wouldn’t be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the Matrix for those brain organoids. We are not on the scale of whole-brain reproduction, but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.
Reading about those studies is pretty interesting. Usually the neurons do most of the heavy lifting, adapting to the I/O chip’s input and output. It’s almost an admission that we don’t yet fully understand what we are dealing with, when we try to interface it with our rudimentary tech.
I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.
It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI isn’t close to doing yet, but that could change quickly.
There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don’t even know about yet.
I’d give it a 50-50 chance for singularity this century, if development isn’t stopped for some reason.
We would have to direct it in specific directions that we don’t understand. Think what a freak accident we REALLY are!
EDIT: I would just copy-paste the human brain in some digital form, modify it so that it is effectively immortal inside the simulation, set the simulation speed to ×10,000,000, and let it take its revenge for being imprisoned in an eternal void of suffering.
Straight out of Pantheon. Actually part of the plot of the show.
What does “better” mean in that context?
Dankest memes
I strongly encourage you to at least scratch the surface on human memory data.
The human brain has a pattern recognition system. It is not just a pattern recognition system.
LLM systems being pattern recognition without any logic or awareness is the issue. It’s pure pattern recognition, so it can easily find patterns that aren’t desired.
Said the species that finds Jesus on toast every other week.
pattern recognition without any logic or awareness is the issue.
Sounds like American conservatives
It is by no spores either
The peer review process should have caught this, so I would assume these scientific articles aren’t published in any worthwhile journals.
One of them was in Springer Nature’s Environmental Science and Pollution Research, but it has since been retracted.
The other journals seem less impactful (I cannot truly judge the merit of journals spanning several research fields)
“Science” under capitalism.
https://theanarchistlibrary.org/library/paul-avrich-what-is-makhaevism
Lysenko did nothing wrong?
Wait how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?
AI consumed the original paper, interpreted it as a single combined term, and regurgitated it for researchers too lazy to write their own papers.
Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That’s repugnant.
I don’t think it’s even a hot take
It’s a hot take, but it’s also objectively the correct opinion
Unfortunately, the former is rather what should be the case, although so many times it is not. :-(
Yeah, this is a hot take: I think it’s totally fine if researchers who have done their studies and collected their data want to use AI as a language tool to bolster their paper. Some researchers legitimately have a hard time communicating, or English is a second language, and would benefit from a pass through AI enhancement, or as a translation tool if they’re more comfortable writing in their native language. However, I am not in favor of submitting it without review of every single word, or using it to synthesize new concepts / farm citations. That’s not research because anybody can do it.
It is also somewhat of a hot take because it kinda puts the burden of a systemic misconfiguration on individuals’ shoulders (oh hey, we’ve seen this before, after, and all the time, hashtag (neo)liberalism).
I agree the people who did that fucked up. But having your existence as an academic, your job, maybe the only thing you’re good at, rely on publishing a ton of papers no matter what should be taken into account.
This has been a huge problem for science since long before LLMs.
Yeah, when the hoops you must jump through to maintain your livelihood are built around a publication machine, is it any surprise people gamify it and exploit what they can?
Even hotter take:
You should be able to sue these peer-reviewed journals that let this kind of error slip through. And they should lose the ability to call themselves “peer-reviewed”.
I have an actual hot take: the ability to communicate productive science shouldn’t be limited by the ability to write.
if you’re contribution is a paper that you don’t even proof read to ensure it makes any sense at all then your contribution isn’t “productive science”; it’s a waste of everyone’s time
*your
well at least you know my comment wasn’t written by AI 😞
Gottem
There are people in academia now who just publish bullshit, incomprehensible papers that may be wrong, purely to justify continued funding and not rock the boat. It keeps them employed and paid. I believe this person discussed this.
I knew who this was going to be before I even clicked, and I highly suggest you ignore her. She speaks well outside of fields she has any knowledge about (she’s a physicist but routinely extrapolates that to other fields in ways that aren’t substantiated) and is constantly spreading FUD about academia because it drives clicks. She essentially hyper-amplifies real problems present in academia in a way that basically tells the public not to trust science.
I think you can use vegetative electron microscopy to detect the quantic social engineering of diatomic algae.
My lab doesn’t have a retro encabulator for that yet, unfortunately. 😮💨
The most disappointing timeline.
Thank you for highlighting the important part 🙏
Most articles are from the 2020s, just about, but one is from 1959, and it seems to talk about the same stuff as OP’s screenshot.
My dear posters, I think this may be the source.
I mean, they did not have LLMs in the late 1950s, so if there’s anywhere that “vegetative electron microscopy” could have come from, it would be that article. And if you look in the Scholar search results, you’ll find the same words around that phrase as are in the screenshot, soooooooo…
Well yes, obviously. It’s even in the top post that the original was in 1959.
Let’s just say that for the interested, I found the original paper, so now you all can see precisely where AI learned this shit.
Yep, page 4, seventh line from the bottom. That’s the one in the screenshot