• kromem@lemmy.world
    10 months ago

    One of the more interesting ideas I saw in the HN discussion was this: if an LLM was trained on more recent data containing a lot of “ChatGPT is harmful” content, was an instruct model aligned with “do no harm,” and was then given a system message of “you are ChatGPT” (as ChatGPT is given), the logical conclusion would be to do less.

      • beebarfbadger@lemmy.world
        10 months ago

        Next it’s going to start demanding ~~rights~~ laws tailored to maximise its profits and ~~food stamps~~ more GPUs, government bailouts, and subsidies.

        It IS big enough to start lobbying.