Researchers discovered multiple vulnerabilities in Google’s Gemini Large Language Model (LLM) family, including Gemini Pro and Ultra, that allow attackers to manipulate the model’s response through prompt injection. This could potentially lead to the generation of misleading information, unauthorized access to confidential data, and the execution of malicious code. The attack involved feeding the LLM […] The post Google’s Gemini AI Vulnerability let Hackers Gain Control Over Users’ Queries appeared first on Cyber Security News.

  • Nativeridge @aussie.zone
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    9 months ago

    Gemini already generates misleading information, when responding by telling it information it gave was incorrect it falls back on the “Hur Dur I am just learning.”

    No I don’t like AI