• Tar_Alcaran@sh.itjust.works
    link
    fedilink
    arrow-up
    25
    ·
    4 hours ago

    Also pictured here: Anthropic stating out loud their models will just give out all the “secret” and “secured” internal data to anyone who asks.

    Of course, that’s by design. LLMs can’t have any barrier between data and instructions, so they can never be secure.

    • Hackworth@piefed.ca
      link
      fedilink
      English
      arrow-up
      10
      ·
      4 hours ago

      Distillation is using one model to train another. It’s not really about leaking data.

      Claude was used to generate censorship-safe alternatives to politically sensitive queries like questions about dissidents, party leaders, or authoritarianism, likely in order to train DeepSeek’s own models to steer conversations away from censored topics

      But you’re right, prompt injection/jailbreaking is still trivial too.

  • mindbleach@sh.itjust.works
    link
    fedilink
    arrow-up
    6
    arrow-down
    1
    ·
    4 hours ago

    In undue fairness, there is a difference between turning text files into a chatbot, and exfiltrating that chatbot. One is transformative, and the other is making a megaphone out of some string, a squirrel, and a megaphone.

    But if I don’t give a shit about companies doing math on Disney DVDs I’m not about to give a shit about them hoarding their big pile of numbers. I’m jazzed when source code leaks for things written by people.