“We’ve learned to make machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”

  • SkyNTP@lemmy.ml · 1 year ago

    It’s implied in the analogy that this is the first time Person A and Person B are talking about being attacked by a bear.

    This is a very simplistic example, but A and B might have talked a lot about

    • being attacked by mosquitos
    • bears in the general sense, as in the saying “you don’t need to outrun the bear, just the slowest person”, or in reference to the stock market

    So the octopus develops a “dial” for being attacked (swat the aggressor) and another “dial” for bears (they are undesirable). Maybe there’s also a third dial for mosquitos being undesirable: “too many mosquitos”.

    So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you live in the real world and have stood face to face with a bear, experiencing first-hand what that is like: that creates experience and, perhaps more importantly, context grounded in reality.
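
    To make the “dial” idea concrete, here is a toy sketch (mine, not from the article or the thread) of a responder that picks advice purely from word co-occurrence statistics. Every sentence, function name, and scoring rule below is invented for illustration; the point is only that the word “bear” in the prompt never influences the answer, because nothing grounds it in real-world danger:

    ```python
    from collections import Counter

    # What A and B have talked about before: mosquito attacks and
    # figurative bears. Invented examples for illustration only.
    history = [
        "a mosquito attacked me so i swatted it",
        "mosquitos attacked us all night we kept swatting them",
        "you do not need to outrun the bear just the slowest person",
        "the bear market is undesirable for everyone",
    ]

    def attacked_dial(corpus):
        """Count which words co-occur with 'attacked': the octopus's dial."""
        counts = Counter()
        for sentence in corpus:
            words = sentence.split()
            if "attacked" in words:
                counts.update(w for w in words if w != "attacked")
        return counts

    def advise(prompt, corpus):
        """Answer from word statistics alone; 'bear' in the prompt is
        never consulted, because no grounding links it to danger."""
        if "attacked" not in prompt.split():
            return "no idea"
        counts = attacked_dial(corpus)
        actions = {"swatted": "swat it", "swatting": "swat it"}
        best = max((w for w in counts if w in actions),
                   key=lambda w: counts[w], default=None)
        return actions.get(best, "no idea")

    print(advise("help a bear attacked me", history))  # -> "swat it"
    ```

    The mosquito strategy wins on raw counts, so the “attacked” dial fires and the model cheerfully answers “swat it”: fluent pattern-matching, not understanding.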

    ChatGPT might get it right some of the time, but a broken clock is also right twice a day; that doesn’t make it useful.

    Also, the fact that ChatGPT just went along with your “wayfarble” instead of questioning you is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.