• 29 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • I think access keys are a legacy authentication mechanism from a time when the objective was increasing cloud adoption and public clouds wanted to help customers transition from on-prem to cloud infrastructure.

    But in cloud-native environments there are safer ways to authenticate.

    A data point: Google now advises new GCP customers to enable, from the start, the org policy that disables service account key creation.
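
    For reference, the constraint is `constraints/iam.disableServiceAccountKeyCreation`. Before enforcing it, you would probably want to inventory the user-managed keys that already exist. A minimal sketch, assuming the google-api-python-client package, application-default credentials, and a made-up project ID:

    ```python
    # List user-managed service account keys in a project, so you know what
    # will break before enforcing iam.disableServiceAccountKeyCreation.
    # Assumes application-default credentials; pagination omitted for brevity.
    from googleapiclient import discovery

    PROJECT = "projects/my-project"  # hypothetical project ID

    iam = discovery.build("iam", "v1")
    accounts = iam.projects().serviceAccounts().list(name=PROJECT).execute()

    for sa in accounts.get("accounts", []):
        keys = (
            iam.projects()
            .serviceAccounts()
            .keys()
            .list(name=sa["name"], keyTypes="USER_MANAGED")
            .execute()
        )
        for key in keys.get("keys", []):
            print(f'{sa["email"]}: {key["name"]} '
                  f'(expires {key.get("validBeforeTime")})')
    ```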





  • 0xCBE (OP) to Blue Team · NVD damage continued
    2 points · 1 year ago

    I found it interesting because, starting from NVD, CVSS, etc., we have a whole industry (Snyk and the like) that takes vuln data, mostly refuses to contextualize it, and just wraps it in a nice interface for customers to act on.

    The lack of deep context shows when you have vulnerability data for OS packages, whose impact can differ depending on whether your workloads are containerized. Nobody seems to really care that much: they sell a wet blanket and we are happy to buy it for the convenience.
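
    To make the point concrete, a toy sketch (all data here is hypothetical): the same OS-package CVE deserves a very different priority depending on whether the package even ships in the container image that actually runs.

    ```python
    # Toy contextualization step that the off-the-shelf tools mostly skip:
    # cross-reference OS-package findings against the packages actually
    # present in the workload's container image (e.g. from an SBOM).
    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve: str
        package: str
        cvss: float

    # Hypothetical package set for the running image.
    IMAGE_PACKAGES = {"openssl", "zlib"}

    findings = [
        Finding("CVE-XXXX-0001", "openssl", 9.8),  # ships in the image
        Finding("CVE-XXXX-0002", "cups", 8.1),     # host-only, not in the image
    ]

    for f in findings:
        in_image = f.package in IMAGE_PACKAGES
        # Crude rule: keep the base score only if the package is in the image.
        contextual = f.cvss if in_image else f.cvss * 0.2
        print(f"{f.cve} ({f.package}): base={f.cvss}, contextual={contextual:.1f}")
    ```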




  • 0xCBE to AI Infosec · In Escalating Order of Stupidity
    4 points · 1 year ago

    This stuff is fascinating to think about.

    What if prompt injection is not really solvable? I still see jailbreaks for GPT-4 from time to time.

    Let’s say we can’t validate and sanitize user input to the LLM, so the LLM’s output also has to be considered untrusted.

    In that case security could only sit in front of the connected APIs the LLM is allowed to orchestrate. Would that even scale? How? It feels like we would have to reduce the nondeterministic nature of LLM outputs to a deterministic set of allowed inputs to the APIs… which is a castration of the whole AI vision?
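
    Roughly what I mean, as a toy sketch (the tool names, schemas, and validators are all made up): the LLM can only ever emit one of a fixed set of structured calls, and every parameter is validated before anything touches a real API.

    ```python
    # Treat LLM output as untrusted: accept only an allowlisted set of tool
    # calls, with every parameter validated, before dispatching to an API.
    import json

    ALLOWED_TOOLS = {
        # tool name -> parameter name -> validator (hypothetical examples)
        "get_weather": {"city": lambda v: isinstance(v, str) and len(v) < 64},
        "create_ticket": {"priority": lambda v: v in ("low", "medium", "high")},
    }

    def dispatch(llm_output: str):
        try:
            call = json.loads(llm_output)  # anything non-JSON is rejected
        except json.JSONDecodeError:
            raise ValueError("LLM output is not valid JSON")

        tool, args = call.get("tool"), call.get("args", {})
        schema = ALLOWED_TOOLS.get(tool)
        if schema is None:
            raise ValueError(f"tool {tool!r} is not on the allowlist")
        if set(args) != set(schema):
            raise ValueError("unexpected or missing parameters")
        for name, validator in schema.items():
            if not validator(args[name]):
                raise ValueError(f"parameter {name!r} failed validation")
        # Only at this point would the real API be invoked.
        return tool, args

    print(dispatch('{"tool": "get_weather", "args": {"city": "Milano"}}'))
    ```

    The cost is exactly the one above: everything the model is allowed to do has to be enumerated up front.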

    I am also curious about the state of the art in protecting against prompt injection. Do you have any pointers?