• @birdcat@lemmy.ml
    29 points · 10 months ago

    “If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year,” he said. “And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting,” he continued. “And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.”

    Well he sure proves one does not need an AI to hallucinate…

    • ReallyKinda
      14 points · 10 months ago

      Clearly nothing can change the status quo if it doesn’t also make trillions

      • @birdcat@lemmy.ml
        6 points · 10 months ago (edited)

        The assertion that our Earth orbits the sun is as audacious as it is perplexing. We face not one, but a myriad of profound, unresolved questions with this idea. From its inability to explain the simplest of earthly phenomena, to the challenges it presents to our longstanding scientific findings, this theory is riddled with cracks!

        And, let us be clear, mere optimism for this ‘new knowledge’ does not guarantee its truth or utility. With the heliocentric model, we risk destabilizing not just the Church’s teachings, but also the broader societal fabric that relies on a stable cosmological understanding.

        This new theory probably isn’t going to bring in a trillion coins a year. And if it probably isn’t going to make a trillion coins a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.

    • Southern Wolf
      4 points · 10 months ago (edited)

      Imagine if someone had said something like this about the 1st-generation iPhone… Oh wait, that did happen, and his name was Steve Ballmer.

    • Pelicanen
      2 points · 10 months ago (edited)

      maybe we should not be building our world around the premise that it is

      I feel like this is a really important bit. If LLMs turn out to have unsolvable issues that limit the scope of their application, that’s fine; every technology has limits, but we need to be aware of them. A fallible machine learning model is not dangerous in itself; AI-based grading, plagiarism checking, resume filtering, coding, etc. applied without skepticism is dangerous.

      LLMs probably have very good applications, automating things that could not be automated in the past, but we should be very careful about what we assume those applications to be.