Kid@sh.itjust.works to Cybersecurity@sh.itjust.works · English · 1 day ago
Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models (thehackernews.com)
cross-posted to: infosec_news
remi_pan@sh.itjust.works · English · 10 hours ago
If the jailbreak is about enabling the LLM to tell you how to make explosives or drugs, this seems pointless, because I would never trust an AI so prone to hallucinations (and basically bad at science) in such a dangerous process.