• @Karmmah@lemmy.world · 67 points · 11 months ago

    I also noticed that ChatGPT can’t actually correct itself. It just says “oh sorry, here’s something different” and gives you another crap answer. I noticed it with code specifically. If I remember correctly it was better when it was brand new.

    • squiblet · 21 points · 11 months ago

      The apology thing is sort of hilarious. I wonder what exactly they did to make it eternally apologetic. There was an article on HN recently about how it is basically impossible to get ChatGPT to stop apologizing, as in, if you ask it to stop, it will apologize for apologizing.

      • SokathHisEyesOpen · 11 points · 11 months ago

        I experienced exactly that! I told it to stop apologizing for everything and just respond with correct answers and it apologized for not being able to stop apologizing.

        • squiblet · 5 points · 11 months ago

          From HN:

          Me: Can you stop using the word apologize whenever you are talking to me?

          ChatGPT: I apologize (I will try to stop using the word “apologize” in my responses to you) if the use of the word “apologize” in my responses has been inappropriate or has caused any discomfort. As an AI language model, I am programmed to try to be as helpful and accurate as possible, and I apologize (I will try to use a different word) when I make mistakes or do not provide the information or assistance that is requested.

          I will make an effort to avoid using the word “apologize” in my responses to you in the future. Please do not hesitate to let me know if you have any further questions or need any additional information. I am here to help and will do my best to provide accurate and helpful responses to your questions.

          Also here is a recent HN discussion: https://news.ycombinator.com/item?id=36949931

      • @relevants@feddit.de · 8 points · 11 months ago

        It’s because humans have rated potential responses, and ChatGPT has been trained to generate the kind of responses that most consistently get the preferred rating. You can imagine how an AI trained to say what people want to hear would become a people pleaser.
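        A minimal sketch of that preference-training idea (assuming a Bradley-Terry-style pairwise loss, one common way such reward models are built; this is an illustration, not OpenAI’s actual code):

        ```python
        import math

        # Toy reward-model loss: small when the human-preferred response
        # out-scores the rejected one (Bradley-Terry formulation, assumed).
        def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
            p_chosen = 1.0 / (1.0 + math.exp(reward_rejected - reward_chosen))
            return -math.log(p_chosen)

        # A polite, agreeable answer that raters preferred gets a small loss...
        print(preference_loss(reward_chosen=2.0, reward_rejected=-1.0))  # ~0.05
        # ...so training pushes the model toward more answers like it.
        print(preference_loss(reward_chosen=-1.0, reward_rejected=2.0))  # ~3.05
        ```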

      • @CeruleanRuin@lemmy.world · 3 points · 11 months ago

        That’s what frustrates me the most whenever I try to use it. I tell it to be less verbose, to stop over-explaining and apologizing every time I correct it, and it just spits out another four paragraphs explaining why it’s sorry.

      • @XEAL@lemm.ee · 1 point · 11 months ago

        The only solution I can think of is using it via the API with Python and making one final call on the reply, asking it to remove the apologies from the text, though the token usage will increase.

        I do something similar when I need to tell the model to keep the language of a text before performing a task with it. I send the model a chunk of text and ask it to respond with a single word indicating the language of the text, and then I include that in the next prompt, like “Your output must be in SPANISH”, or whatever.
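        A rough sketch of that two-call pattern with the openai Python package (v1-style client; the model name and prompt wording are assumptions):

        ```python
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        MODEL = "gpt-3.5-turbo"  # assumed; any chat model works

        def ask(prompt: str) -> str:
            resp = client.chat.completions.create(
                model=MODEL,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        def strip_apologies(reply: str) -> str:
            # The extra call on the final reply, paid for in extra tokens.
            return ask(f"Rewrite this text with all apologies removed:\n\n{reply}")

        def language_of(text: str) -> str:
            # Single-word language probe, fed into the next prompt.
            q = f"Answer with a single word: what language is this text in?\n\n{text}"
            return ask(q).strip().upper()

        text = "¿Dónde está la biblioteca?"
        reply = ask(f"Summarize this. Your output must be in {language_of(text)}.\n\n{text}")
        print(strip_apologies(reply))
        ```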

  • @BadRS@lemmy.world · 27 points · 11 months ago

    It cannot read. It doesn’t see words or letters. It works with tokens, which words are converted into. It can’t count the number of letters in a word because it can’t see them. OpenAI has a tokenizer you can plug a prompt into to see how it’s broken up, but you’re asking a fish to fly.
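    You can see the split yourself with OpenAI’s tiktoken package (the encoding name below is the one used by the GPT-3.5/GPT-4 family; exact token boundaries vary by model):

    ```python
    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/4 encoding
    tokens = enc.encode("mayonnaise")
    print(tokens)  # a few integer IDs, not ten separate letters
    for token_id in tokens:
        # Each ID maps back to a chunk of bytes, usually several letters long.
        print(token_id, enc.decode_single_token_bytes(token_id))
    ```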

    • ParkingPsychology · 5 points · 11 months ago

      And a single “S” is also a token, with its own vectors relating it to all the other words that start with an S.

      One thing to point out here is that the word “sentences” is severely mistyped as “sententences”. That’s not going to help.

    • @CeruleanRuin@lemmy.world · 1 point · 11 months ago

      Is there a workaround to “trick” it into understanding letters? I’d love to use it to play with language and brainstorm some riddles or other wordplay, but if it literally can’t understand language on a human level, that’s a fool’s errand.
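      One commonly suggested trick (a sketch of the idea, no guarantees) is to space the letters out so each one becomes its own token before asking letter-level questions:

      ```python
      def spell_out(word: str) -> str:
          # "mayonnaise" -> "m a y o n n a i s e": spaced letters tend to
          # tokenize one by one, so the model can actually "see" them.
          return " ".join(word)

      prompt = f'How many times does "n" appear in: {spell_out("mayonnaise")}?'
      print(prompt)
      ```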

  • SimpleDev · 13 points · 11 months ago

    Idk what I’m doing wrong, thankfully it always seems to listen and work fine for me lmao

      • SimpleDev · 11 points · 11 months ago (edited)

        Now it’s broken. I guess I don’t use it this way often enough. Interesting nonetheless!

        Edit: it’s very sensitive; it matters whether I include an uppercase “S” or not. That’s amusing.

        I wonder if adjusting the temperature setting would fix that or just make it even weirder.

          • SimpleDev · 3 points · 11 months ago (edited)

            But it’s able to correct itself, unlike what’s shown in the OP’s messages.

            Extremely sensitive, it seems, but it clearly listens. It’s neat to see how different each person’s experience is.

            Also, different tuning parameters etc. could make the outputs different. That might explain why mine is seemingly a bit better at listening.

      • SimpleDev · 1 point · 11 months ago

        Look at the first question in my first screenshot. It gets that question correct for “mayonnaise” lol

          • SimpleDev · 1 point · 11 months ago (edited)

            I feel like ChatGPT itself probably has a fairly loose temperature setting (just a hunch), and I tend to set my conversations up to be more on the strict side.

            I imagine that’s why our results differ. It’s strange that OpenAI doesn’t let ChatGPT site users, or at least premium users, adjust anything yet.
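            For what it’s worth, the API does expose that knob even though the ChatGPT site doesn’t (a minimal sketch with the openai Python package; the model name is an assumption):

            ```python
            from openai import OpenAI

            client = OpenAI()
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",  # assumed
                temperature=0,  # "strict": the most deterministic sampling
                messages=[{"role": "user",
                           "content": 'How many "n"s are in "mayonnaise"?'}],
            )
            print(resp.choices[0].message.content)
            ```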

  • @CeruleanRuin@lemmy.world · 4 points · 11 months ago

    ChatGPT doesn’t understand letters, or phonetics, or most other aspects of speech. I tried for an hour to train it to understand what a palindrome is, with the hopes of getting it to generate some new ones. Nothing stuck. It was like trying to teach a dog to write its name.
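    For contrast, the letter-level check it couldn’t learn is a few lines of ordinary Python:

    ```python
    def is_palindrome(word: str) -> bool:
        # Compare the letters to their own reversal, ignoring case.
        letters = [c.lower() for c in word if c.isalpha()]
        return letters == letters[::-1]

    print(is_palindrome("racecar"))     # True
    print(is_palindrome("mayonnaise"))  # False
    ```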

  • kratoz29 · 3 points · 11 months ago

    To be fair, that feature has sucked since the very beginning, at least for me.

    • SokathHisEyesOpen · 0 points · 11 months ago

      It has not. ChatGPT has been a monumental achievement, capable of previously impossible and highly impressive tasks. This is new behavior for it.