• Dr. Bob@lemmy.ca · 5 days ago

    When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.

    • Treczoks@lemmy.world · 5 days ago

      At university, I faked a paper on economics (not actually my branch of study, but easy to fake) and put it on the shelf in their library. It was filled with nonsense formulas that, if one took the time and actually solved the equations properly, would all produce the same number as a result: 19920401 (the year of publication and April Fools' Day). I actually got two requests from people who wanted to use my paper as a basis for their thesis.

  • ZkhqrD5o@lemmy.world · 5 days ago

    Guys, can we please call it an LLM and not a vague advertising term that changes its meaning on a whim?

      • ZkhqrD5o@lemmy.world · 3 days ago

        Yes, but the LLM does the writing. Someone probably carelessly copy pasta’d some text from OCR.

        • Simyon@lemmy.world · 3 days ago

          Fair enough, though another possibility I see is that the automated training process for the LLMs used OCR on those papers (or an existing text version on the internet was produced with bad OCR), and the papers containing the mashed-together word were then written partially or fully by an LLM.

          Either way, the blanket term “AI” sucks and it’s honestly getting kind of annoying. Same with how much LLMs are used.

    • ZILtoid1991@lemmy.world · 5 days ago

      For some weird reason, I don’t see AI amp modelling being advertised, despite neural amp modellers existing. Yet the very technology that was supposed to replace guitarists (Suno, etc.) is marketed as AI.

      • RobertoOberto@sh.itjust.works · 4 days ago

        I think that’s because in the first case, the amp modeller is only replacing a piece of hardware or software they already have. It doesn’t do anything particularly “intelligent” from the perspective of the user, so I don’t think using “AI” in the marketing campaign would be very effective. LLMs and photo generators have made such a big splash in the popular consciousness that people associate AI with generative processes, and other applications leave them asking, “where’s the intelligent part?”

        In the second case, it’s replacing the human. The generative behaviors match people’s expectations while record label and streaming company MBAs cream their pants at the thought of being able to pay artists even less.

  • SkunkWorkz@lemmy.world · 5 days ago

    Scientists who write their papers with an LLM should get a lifetime ban from publishing papers.

    • ameancow@lemmy.world · 4 days ago

      I played around with ChatGPT to see if it could actually improve my writing. (I’ve been writing for decades.)

      I was immediately impressed by how “personable” these things are: it interprets your writing and can detect subtle things you’re trying to convey, so that part was interesting. I was also impressed by how good it is at improving grammar and helping “join” passages, themes, and plot points. It has the advantage of seeing the entire piece simultaneously and can make broad edits to the story flow, which could potentially save a writer days or weeks of re-writing.

      Now that the good is out of the way: I also tried to see how well it could just write, using my prompts, my writing style, and scenes that I arranged for it to describe. And I can safely say that we have created the ultimate “Averaging Machine.”

      By definition LLM’s are designed to always find the most probable answers to queries, so this makes sense. It has consumed and distilled vast sums of human knowledge and writing but doesn’t use that material to synthesize or find inspiration, or what humans do which is take existing ideas and build upon them. No, what it does is always finds the most average path. And as a result, the writing is supremely average. It’s so plain and unexciting to read it’s actually impressive.

      All of this is fine; it’s still something new we didn’t have a few years ago. Neat, right? Well, my worry is that as more and more people use this, more and more people are going to be exposed to this “averaging” tool and it will influence their writing, and we are going to see a whole generation of writers who produce the most cardboard, stilted, generic work we’ve ever seen.

      And I am saying this from experience. I was there when people first started using the internet to roleplay, making characters and scenes and free-form writing in groups. It was wildly fun, but most of the people involved were not writers; many discovered literature for the first time there. It’s what led to a sharp increase in book-reading, and suddenly there were giant bookstores like Barnes & Noble popping up on every corner. They were kids just doing their best, but that charming, terrible narration became a social standard. It’s why there are so many atrocious dialogue scenes in shows and movies lately; I can draw a straight line back to how kids learned to write in the ’90s. And what’s coming next is going to harm human creativity and inspiration in ways I can’t even predict.

      • Shayeta@feddit.org · 1 day ago

        I am a young person who doesn’t read recreationally, and I avoid writing wherever I can. Thank you for sharing your insight as well as sparking an interesting discussion in this thread.

        • ameancow@lemmy.world · 1 day ago

          Reading is incredibly important for mental development. It gives your brain the language tools to create abstractions of the world around you, and then to use those abstractions to change perspectives, communicate ideas, and understand your own thoughts and feelings.

          It’s never too late to start exercising that muscle, and it really is a muscle. A lot of people have a hard time getting started with reading later in life because they simply don’t have the practice of forming words into images and scenes… but think about how strong your brain becomes when you can turn text into whole vivid worlds, when you can create images and people and words and situations in your mind to explore the universe around you and run simulated situations with more accuracy. I cannot scream loudly enough how critically important it is for us to exercise this muscle, and I hope you keep looking for things that spark your interest just enough that you get a foothold in reading and writing :)

          • Shayeta@feddit.org · 12 hours ago

            Yup, it’s something I myself recently started to realise and have been forcing myself to read things that actually interest me.

            While in elementary and middle school every 2 months we had a specific book we had to read and then would discuss it in class and would be graded based on our input.

            Reading books and writing essays has been cemented in my mind as a boring chore that is forced upon me. It took years before it even occured to me that reading might be a fun activity, and a couple more before I actively started trying to read again. It’s difficult to break away from the mould I’ve been set to during my childhood, but I’m slowly chipping away at it.

            Children SHOULD read, but how can we get them to WANT to read?

      • SasquatchBanana@lemmy.world · 4 days ago

        I can confirm that a lot of students’ writing has become “averaged”, and it seems to have gotten worse this semester. I am not talking about students who clearly used an AI tool; even just by proximity or osmosis, the writing feels “cardboardy”: devoid of passion or human mistakes.

        • MonkeMischief@lemmy.today · 4 days ago

          This is how I was taught to write up to high school: very “professional”, persuasive essays, arguing in favor of something or against it “objectively”. (The assignment seemed to dictate which side I could be on, LOL.) Limit humor and “emotional speech.” Cardboard.

          I was taken aback in my first political science course at the local community college, where I was instructed to convey my honest arguments about a book assignment on polarization in U.S. politics. “Whether you think it’s fantastic or you think it sucks, just make a good case for your opinion.” Wait, what?! I get to write like a person?!

          I was even more shocked when I got a high mark for reading the first few chapters, skimming the rest, and truthfully summarizing by saying it was plain that the author just kept repeating their main point for like 5 more chapters so they could publish a book, and it stopped being worth the time as that poor horse was already dead by the 3rd chapter.

          That’s when it hit me that writing really is about communication, not just information.

          I worry about that these days: that this realization won’t come to most, and they’ll use these AI tools, or be influenced by them, to simply “convey information” that nobody wants to read, get their 85%, and breeze through the rest of their MBA, not caring about what any of this is actually for, or what a beautiful miracle writing truly is to humanity.

          • SasquatchBanana@lemmy.world · 3 days ago

            That isn’t what I mean by cardboard. Persuasive, research, argumentative essays have been taught to be written the way tou described. They are meant to be that way. But even then, the essays I have read and graded still have this cardboard feel. I have read plenty of research essays where you can feel the emotion, you can surmise the position and most of all passion of the author. This passion and the delicate picking of words and phrases are not there. It is “averaged”.

            • MonkeMischief@lemmy.today · 3 days ago

              I think we’re saying a similar thing, but I understand your point better.

              I have read plenty of research essays where you can feel the emotion, you can surmise the position and most of all passion of the author.

              Exactly! That’s what I mean. There’s so many subjects I expected to be incredibly dry, but the writing reminded me it was written by a person who obviously cares about other people reading the text. One can communicate any subject without giving up their soul.

              (I am always surprised, but I find this in programming books often, haha.)

              But that’s what I meant by cardboard as well, I think we might be in agreement:

              We expect to see a lot more writing that comes across like “This is what writing should look like, right?”

              Writing that understands words, and “averages” the most likely way to convey information or fill a requirement, but doesn’t know how to wield language as an art to share ideas with another person.

              • ameancow@lemmy.world · 1 day ago

                the writing reminded me it was written by a person who obviously cares about other people reading the text.

                This is what’s missing being discussed in nearly every online argument about AI art that I read online, there are rarely people who make the actual argument that the whole purpose of art and writing is to share an experience, to give someone else the experience that the author or artist is feeling.

                Even if I look at a really bad poem or a terrible drawing, if the artist was really doing their best to share the image in their head or the feeling they were having when they wrote it, it will be 1000X more significant and poignant than a machine that crushes the efforts of thousands of people together and averages them out.

                Sure there are billions of people who are content with looking at a cool image and think no deeper of it and are even annoyed at criticism of AI work, but on some level I think everyone prefers content made by another human trying to share something.

        • ameancow@lemmy.world · 4 days ago

          I know exactly what you mean, I still frequent a lot of writing communities and that “cardboard” feeling is spreading. Most young people who have an interest in writing are basically sponges for absorbing how their peers write, so it’s tragic when their peers are machines designed to produce advertiser-friendly ad-copy.

      • zibwel@feddit.org · 4 days ago

        I do agree with your “averaging machine” argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.

        Your conjecture that bad writing is due to roleplaying on the early internet is a bit more… speculative. Lacking any numbers comparing writing trends over time, I don’t think one can draw such a conclusion.

        • ameancow@lemmy.world · 1 day ago

          Large discord groups and forums are still the proving ground for new, young writers who try to get started crafting their prose to this day, and I have watched it for over 30 years. It has changed, dramatically, and I would be remiss to say I have no idea where the change came from if I didn’t also see the patterns.

          Yes it’s entirely anecdotal, I have no intention of making a scientific argument, but I’m also not the only one worried about the influence of LLM’s on creators. It’s already butchering the traditional artistic world, just for the very basic reason that 14-year-old Mindy McCallister who has a crush on werewolves at one time would have taught herself to draw terrible, atrocious furry art on lined notebook paper with hearts and a self-inserted picture of herself in a wedding dress. This is where we all get started (not specifically werewolf romance but you get the idea) with art and drawing and digital art before learning to refine our craft and get better and better at self-expression, but we now have a shortcut where you can skip ALL of that process and just have your snarling lupine BF generated for you within seconds. Setting aside the controversy over if it’s real art or not, what it’s doing is taking away the formative process from millions of potential artists.

        • Schadrach@lemmy.sdf.org · 3 days ago

          I do agree with your “averaging machine” argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.

          For image generation models, I think a good analogy is to say it’s not drawing, but rather sculpting: it starts with a big block of white noise and then takes away all the parts that don’t look like the prompt. Iterate a few times until the result is mostly stable (that is, it can’t make the image look much more like the prompt than it already does). It’s why you can get radically different images from the same prompt: the starting block of white noise is different, so which parts of that noise look most prompt-like, and therefore get emphasized, are going to be different.
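
          A toy numeric sketch of that “sculpting” intuition (not a real diffusion model; the score function here is invented): the “image” is a single number, the “prompt” is a score with more than one way to satisfy it, and each step chips away at whatever doesn’t look like the prompt. Which of the valid results you end up with depends entirely on the starting noise.

          ```python
          # Toy sketch: gradient ascent on a "looks like the prompt" score,
          # starting from random noise. Not a real diffusion model.
          import numpy as np

          def prompt_score(x):
              # "Looks like the prompt" whenever x*x is close to 4, i.e. near +2 or -2.
              return -(x * x - 4.0) ** 2

          def prompt_score_grad(x):
              return -4.0 * x * (x * x - 4.0)

          def sculpt(start_noise, steps=200, lr=0.01):
              x = start_noise
              for _ in range(steps):
                  x = x + lr * prompt_score_grad(x)   # remove a bit of un-prompt-likeness
              return x

          rng = np.random.default_rng(0)
          for _ in range(4):
              noise = rng.normal()                    # a different "block of white noise"
              print(f"start {noise:+.2f}  ->  result {sculpt(noise):+.2f}")

          # Every result satisfies the prompt (score near 0), but whether you get the
          # "+2 image" or the "-2 image" depends entirely on the starting noise.
          ```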

    • ZkhqrD5o@lemmy.world · 4 days ago

      BuT tHE HuMAn BrAin Is A cOmpUteR.

      Edit: people who say this are vegetative lifeforms.

    • JayDee@lemmy.sdf.org · 5 days ago

      It immediately demonstrates a lack of both care and understanding of the scientific process.

  • BattleGrown@lemmy.world · 4 days ago

    I recently reviewed a paper for a prestigious journal. The paper was clearly from an academic paper mill, and it was horrible. The authors had a small experimental engine and wrote 10 papers about it. The results were all normalized and relative, key test conditions weren’t even mentioned, everything was described in general terms… and I couldn’t even be sure the authors were real (Korean authors; the names were all Park, Kim, and Lee). I hate where we’ve arrived in scientific publishing.

    • daniskarma@lemmy.dbzer0.com · 4 days ago

      To be fair, scientific publishing has been terrible for years, a deeply flawed system at multiple levels. Maybe this is the push it needs to reevaluate itself into something better.

      • Tja@programming.dev · 4 days ago

        And to be even fairer, scientific reviewing hasn’t been any better. Back in my PhD days, I had a paper rejected from a prestigious conference for being too simple and too complex, according to two different reviewers. The reviewer who argued “too simple” also gave an example of a task that supposedly couldn’t be achieved, which was clearly achievable.

        Goes without saying, I’m not in academia anymore.

        • joonazan@discuss.tchncs.de · 4 days ago

          Startups, on the other hand, have people pursuing ideas that have been proven not to work. The better startups mostly just sell old innovations that do work.

    • Comment105@lemm.ee · 4 days ago

      People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.

      • Schadrach@lemmy.sdf.org · 3 days ago

        People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.

        Somehow I briefly got her and Pluckrose reversed in my mind, and was still kinda nodding along.

        If you don’t know who I mean, Pluckrose and two others produced a bunch of hoax papers (likening themselves to the Sokal affair) of which 4 were published and 3 were accepted but hadn’t been published, 4 were told to revise and resubmit and one was under review at the point they were revealed. 9 were rejected, a bit less than half the total (which included both the papers on autoethnography). The idea was to float papers that were either absurd or kinda horrible like a study supporting reducing homophobia and transphobia in straight cis men by pegging them (was published in Sexuality & Culture) or one that was just a rewrite of a section of Mein Kampf as a feminist text (was accepted by Affilia but not yet published when the hoax was revealed).

        My personal favorite of the accepted papers was “When the Joke Is on You: A Feminist Perspective on How Positionality Influences Satire” just because of how ballsy it is to spell out what you are doing so obviously in the title. It was accepted by Hypatia but hadn’t been published yet when the hoax was revealed.

        • andros_rex@lemmy.world · 4 days ago

          Her video on trans issues has made it very difficult to take her seriously as a thinker. It has the same manipulative half-truths and tropes I see from TERFs pretending they have the “reasonable” view, while also spreading the hysterical media narrative about kids getting transed.

          • zqps@sh.itjust.works · 3 days ago

            I didn’t even see that. Just a few clips of her rants about other things she confidently knows nothing about, like a less incoherent Jordan Peterson.

      • Camille d'Ockham@jlai.lu · 4 days ago

        She sucks when she overextends her aura of expertise into domains she’s not good in (e.g. metaphysics, and especially panpsychism, which she profoundly misunderstands yet talks about self-assuredly). Her criticism of academia is good, but she reproduces some of that nonsense herself.

        • DragonTypeWyvern@midwest.social · 4 days ago

          As someone who just looked at the Wikipedia article, I too am an expert in this field, unironically, because it’s woo woo nonsense.

          • Camille d'Ockham@jlai.lu · 4 days ago

            Can you explain how you reached that conclusion? Since you’re a rigorous thinker, no doubt it would be trivial for you. After all, you’re notably up against Bertrand Russell, one of the writers of the first attempt to ground maths onto rigorous foundations, so since it only took you a few minutes to come to your conclusion, you must have a very powerful mind indeed. Explaining your reasoning would be as easy as breathing is for us the lesser-minded.

            • DragonTypeWyvern@midwest.social · 4 days ago

              Aristotle believed in it too, along with the four humors and classical elements.

              Doesn’t make his thoughts on rhetoric irrelevant, but those also don’t make his mystical solutions to problems he didn’t have the tools to solve correct.

              • Camille d'Ockham@jlai.lu · 4 days ago

                That someone like Russell subscribed to a form of protopanpsychism is not a proof that his position is right. It does indicate, on the other hand, that it could be a kind of metaphysical position that’s more serious than you believe it is, serious enough that vaguely recognizing a few words in a few sentences on wikipedia is not enough to actually understand it. Not only that but it’s had actual scientific productivity through ergonomics (eg “How the cockpit remembers its speed”), biology (biosemiotics), sociology (actor network theory), and even arguably in physics through Ernst Mach and information theory.

    • GreatDong3000@lemm.ee · 3 days ago

      Do you usually get to see the names of the authors whose papers you’re reviewing for a prestigious journal?

      • BattleGrown@lemmy.world · 3 days ago

        I try to avoid reviews, but the editor is a close friend of mine and I’m an expert on the topic. The manuscript was only missing the date.

  • Birbatron@slrpnk.net · 4 days ago

    It is worth noting that the enzyme did not attack Norris of Leeds University; that would be tragic.

  • zephorah@lemm.ee · 5 days ago

    Another basic demonstration of why oversight by a human brain is necessary.

    A system rooted in pattern recognition that cannot recognize the basic two-column format of published and printed research papers.
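
    For anyone wondering how that failure actually produces a term like “vegetative electron microscopy”: a tiny sketch of the mechanism, with made-up column text purely for illustration. Read the two columns in the right order and the sentences are fine; read straight across the page, row by row, and words from the left column crash into words from the right one.

    ```python
    # Hypothetical two-column page text, just to show the mechanism.
    left_column = [
        "spores and vegetative",
        "cells were sectioned and",
    ]
    right_column = [
        "electron microscopy was",
        "used to examine them",
    ]

    # Correct reading order: finish the left column, then the right column.
    correct = " ".join(left_column + right_column)

    # Naive extraction: read straight across the page, row by row, so the end of
    # each left-hand line runs into the start of the matching right-hand line.
    naive = " ".join(f"{l} {r}" for l, r in zip(left_column, right_column))

    print(correct)
    # spores and vegetative cells were sectioned and electron microscopy was used to examine them
    print(naive)
    # spores and vegetative electron microscopy was cells were sectioned and used to examine them
    #            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the phantom term is born here
    ```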

        • thedeadwalking4242@lemmy.world · 5 days ago

          As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don’t figure out exactly how our brains work, we might be able to create something better.

          • dustyData@lemmy.world · 5 days ago

            The human brain is not a computer. It was a fun simile to make in the 80s, when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even its most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do, in many different ways.

            • barsoap@lemm.ee · 5 days ago

              It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor.

              Notably, neither of those two disciplines is computer science. Silicon computers are Turing complete. They can (given enough time and scratch space) compute everything that’s computable. The brain cannot be more powerful than that, or you’d break causality itself: God can’t add 1 and 1 and get 3, and neither can God sort a list in fewer than O(n log n) comparisons. Both being Turing complete also means that they can emulate each other. It’s not a metaphor: it’s an equivalence. Computer scientists have trouble telling computers and humans apart just as topologists can’t distinguish between donuts and coffee mugs.
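
              (For reference, that sorting bound is the standard decision-tree counting argument, sketched below; it’s a fact about comparison sorts, not about any particular hardware.)

              ```latex
              % Any comparison sort is a binary decision tree whose leaves are the n!
              % possible orderings, so its height h (worst-case comparison count) obeys:
              \[
              2^{h} \ge n!
              \;\Longrightarrow\;
              h \ge \log_2 n! = \sum_{k=1}^{n} \log_2 k
                \ge \frac{n}{2}\log_2\frac{n}{2}
                = \Omega(n \log n).
              \]
              ```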

              Architecturally, sure, there’s a massive difference in hardware. Not carbon vs. silicon, but because our brains are nowhere close to being von Neumann machines. That doesn’t change anything about brains being computers, though.

              Big picture, there are two obstacles to AGI: first, figuring out how the brain does what it does (and we know that current AI approaches aren’t sufficient); second, once that’s understood, creating hardware that is even a fraction as fast and efficient at executing it as the brain is.

              Neither of those involves the question “is it even possible”. Of course it is. It’s quantum computing you should be sceptical about instead: it’s still up in the air whether asymptotic speedups over classical hardware are even physically possible (quantum states might get fuzzier the more data you throw into a qubit, the universe might have a computational upper limit per unit volume, or some such).

              • dustyData@lemmy.world · 4 days ago

                Notably, computer science is not neurology. Neither is equipped to meddle in the other’s field. If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers, and engineers on brains. But they are not equivalent. Consciousness, intelligence, memory, world modeling, motor control, and input consolidation are way more complex than just faster computing. And Turing completeness is irrelevant. The brain is not a Turing machine; it does not process tokens one at a time. Turing completeness is a technology term; it shares only its name with Turing machines, as Turing’s philosophical argument was not meant to be a test or guarantee of anything. It’s a complete misuse of the concept.

                • barsoap@lemm.ee · 4 days ago

                  If brains were just very fast and powerful computers, then neuroscientists should be able to work with computers, and engineers on brains.

                  Does not follow. Different architectures require different specialisations. One is research into something nature presents us with; the other (at least the engineering part) is creating something. Completely different fields. And by the way, the analytical tools neuroscientists have are not exactly stellar; that’s why they can’t understand microprocessors (the paper is tongue in cheek, but also serious).

                  But they are not equivalent.

                  They are. If you doubt that, you do not understand computation. You can read up on Turing equivalence yourself.

                  Consciousness, intelligence, memory, world modeling, motor control and input consolidation are way more complex than just faster computing.

                  What the fuck has “fast” to do with “complex”? Also, the mechanisms probably aren’t terribly complex; how the different parts mesh together to give rise to a synergistic whole is what creates the complexity. And I already addressed the distinction between “make things run” and “make them run fast”. A dog-slow AGI is still an AGI.

                  The brain is not a Turing machine. It does not process tokens one at a time.

                  And neither are microprocessors Turing machines. A thing does not need to be a Turing machine to be Turing complete.

                  Turing completeness is a technology term

                  Mathematical would be accurate.

                  it shares with Turing machines the name alone,

                  Nope the Turing machine is one example of a Turing complete system. That’s more than “shares a name”.

                  Turing’s philosophical argument was not meant to be a test or guarantee of anything. Complete misuse of the concept.

                  You’re probably thinking of the Turing test. That doesn’t have to do anything with Turing machines, Turing equivalence, or Turing completeness, yes. Indeed, getting the Turing test involved and confused with the other three things is probably the reason why you wrote a whole paragraph of pure nonsense.

              • bigpEE@lemmy.world · 4 days ago

                Re: quantum computing, we know quantum advantage is real, both theoretically for certain classes of problems (e.g. via Grover’s algorithm) and experimentally for toy problems like boson sampling. It’s looking like we’re past the threshold where we can do error correction, so now it’s a question of scaling. I’ve never heard anyone discuss a limit on computation per volume as applying to QC. We’re down to engineering problems, not physics, same as your brain-vs-computer case.
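
                (For reference, the Grover speedup mentioned above is the textbook query-complexity separation for unstructured search; a quadratic, not exponential, advantage.)

                ```latex
                % Unstructured search over N items, counting oracle queries:
                \[
                T_{\text{classical}}(N) = \Theta(N),
                \qquad
                T_{\text{Grover}}(N) = O\!\left(\sqrt{N}\right).
                \]
                ```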

                • barsoap@lemm.ee · 4 days ago

                  From all I know, none of the systems that people have built comes even close to testing the speedup: is error correction going to get harder and harder the larger the system is and the more you ask it to compute? That might not be the case, but quantum uncertainty is a thing, so it’s not baseless naysaying either.

                  Let me put on my tinfoil hat: quantum physicists aren’t excited to talk about the possibility that the whole thing could be a dead end, because that’s not how you get to do cool quantum experiments on VC money. And it’s not like they aren’t doing valuable research; it’s just that it might be a giant money sink for the VCs, which of course is also a net positive. Trying to break the limit might be the only way to test it, and that in turn might actually narrow things down in physics, which is itching for experiments that can break the models: we know they’re subtly wrong, just not how, and data is needed to narrow things down.

            • Akrenion@slrpnk.net · 5 days ago

              Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.

              • Cethin@lemmy.zip · 5 days ago

                I don’t get how the ethics of that are questionable. It’s not like they’re taking brains out of people and using them. It’s just cells that are not the same as a human brain. It’s like taking skin cells and using those for something. The brain is not just random neurons. It isn’t something special and magical.

                • Akrenion@slrpnk.net · 5 days ago

                  We haven’t yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However there is probably a line where we create something conscious for the act of a few months worth of calculations.

                  There wouldn’t be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the matrix for those brainoids. We are not on the scale of whole brain reproduction but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.

              • dustyData@lemmy.world · 4 days ago

                Reading about those studies is pretty interesting. Usually the neurons do most of the heavy lifting, adapting to the I/O chip’s input and output. It’s almost an admission that we don’t yet fully understand what we are dealing with when we try to interface with it using our rudimentary tech.

          • Tlaloc_Temporal@lemmy.ca · 5 days ago

            I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.

            It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI aren’t close to doing yet, but that could change quickly.

            There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don’t even know about yet.

            I’d give it a 50-50 chance for singularity this century, if development isn’t stopped for some reason.

          • WorldsDumbestMan@lemmy.today · 5 days ago

            We would have to direct it in specific directions that we don’t understand. Think what a freak accident we REALLY are!

            EDIT: I would just copy-paste the human brain in some digital form, modify it so that it is effectively immortal inside the simulation, set the simulation speed to ×10,000,000, and let it take its revenge for being imprisoned in an eternal void of suffering.

          • zephorah@lemm.ee · 4 days ago

            I strongly encourage you to at least scratch the surface on human memory data.

      • Cethin@lemmy.zip · 5 days ago

        The issue is that LLM systems are pattern recognition without any logic or awareness. It’s pure pattern recognition, so it can easily latch onto patterns that aren’t desired.

  • SuperCub@sh.itjust.works · 5 days ago

    The peer review process should have caught this, so I would assume these scientific articles aren’t published in any worthwhile journals.

    • bob_lemon@feddit.org · 5 days ago

      One of them was in Springer Nature’s Environmental Science and Pollution Research, but it has since been retracted.

      The other journals seem less impactful (I cannot truly judge the merit of journals spanning several research fields)

  • LibertyLizard@slrpnk.net · 5 days ago

    Wait, how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?

    • KickMeElmo@sopuli.xyz · 5 days ago

      AI consumed the original paper, interpreted the two adjacent column words as a single combined term, and regurgitated it for researchers too lazy to write their own papers.

      • TheTechnician27@lemmy.world · 5 days ago

        Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That’s repugnant.

          • 1stTime4MeInMCU@mander.xyz · 5 days ago

            Yeah, this is a hot take: I think it’s totally fine if researchers who have done their studies and collected their data want to use AI as a language tool to bolster their paper. Some researchers legitimately have a hard time communicating, or English is a second language, and would benefit from a pass through AI enhancement, or as a translation tool if they’re more comfortable writing in their native language. However, I am not in favor of submitting it without review of every single word, or using it to synthesize new concepts / farm citations. That’s not research because anybody can do it.

            • kwomp2@sh.itjust.works · 5 days ago

              It is also somewhat of a hot take because it kinda puts the burden of systemic misconfiguration on individuals’ shoulders (oh hey, we’ve seen this before, after, and all the time, hashtag (neo)liberalism).

              I agree that the people who did that fucked up. But the fact that your existence as an academic, your job, maybe the only thing you’re good at, relies on publishing a ton of papers no matter what should be taken into account.

              This has been a huge problem for science since long before LLMs.

              • 1stTime4MeInMCU@mander.xyz · 4 days ago

                Yeah, when the hoops you must jump through to maintain your livelihood are built around a publication machine, is it any surprise people gamify it and exploit what they can?

        • Black616Angel@discuss.tchncs.de · 4 days ago

          Even hotter take:

          You should be able to sue these peer-reviewed journals that let this kind of error slip through. And they should lose the ability to call themselves “peer-reviewed”.

        • Pregnenolone@lemmy.world · 5 days ago

          I have an actual hot take: the ability to communicate productive science shouldn’t be limited by the ability to write.

        • jjagaimo@sh.itjust.works · 5 days ago

          There are people in academia now who just publish bullshit, incomprehensible papers that may be wrong, simply to justify continued funding and not rock the boat. It keeps them employed and paid. I believe this person discussed this.

          • TheTechnician27@lemmy.world · 5 days ago

            I knew who this was going to be before I even clicked, and I highly suggest you ignore her. She speaks well outside of fields she has any knowledge about (she’s a physicist but routinely extrapolates that to other fields in ways that aren’t substantiated) and is constantly spreading FUD about academia because it drives clicks. She essentially hyper-amplifies real problems present in academia in a way that basically tells the public not to trust science.

        • Iron Lynx@lemmy.world · 4 days ago

          I mean, they did not have LLMs in the late 1950s, so if there’s anywhere that “vegetative electron microscopy” could have come from, it would be that article. And if you look at the Scholar search results, you’ll find the same words around that phrase as in the screenshot, soooooooo…

          • wewbull@feddit.uk · 4 days ago

            Well yes, obviously. It’s even in the top post that the original was in 1959.

            • Iron Lynx@lemmy.world · 3 days ago

              Let’s just say that for the interested, I found the original paper, so now you all can see precisely where AI learned this shit.