I know that it allows NSFW content after age verification. I am curious how well it works (or how well it is intended to work) at preventing both intentional and accidental illegal content creation. Florida law, for instance, criminalizes images of “sexual activity” involving fake individuals who appear under 18 to a “reasonable” person. So ideally the platform would prevent such images from being created intentionally, or at least make it sufficiently difficult that criminal intent is required to produce them.

If it is easy to intentionally create illegal content, this generally implies it is possible to create it accidentally. For example, suppose a user creates and saves a batch of 100 images with a prompt specifying a 19-year-old subject “engaged in sexual activity” (which, under the law, can be as simple as “lewd exposure of the buttocks”). If the generator matches the prompt and, on average, agrees with the age estimate of “reasonable” people, but has a standard deviation of 1 year in perceived age, then the expected number of illegal images (i.e., images the average of all “reasonable people” would judge to be 17 or younger) is 2-3 out of 100.
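
The 2-3 figure can be checked with a quick calculation. This is only an illustration under the stated assumption that perceived age is normally distributed around the prompted age of 19 with a standard deviation of 1 year; the real distribution of a generator's outputs is unknown:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """Normal CDF computed via the error function (standard library only)."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# Assumed scenario from the post: prompt specifies age 19, and the
# generator's perceived-age output has a standard deviation of 1 year.
# "Illegal" here means the image appears 17 or younger, i.e. two SDs
# below the prompted mean.
p_underage = normal_cdf(17, mean=19, sd=1)
expected = 100 * p_underage  # expected count in a batch of 100 images

print(f"P(appears 17 or younger) = {p_underage:.4f}")
print(f"Expected per 100 images: {expected:.1f}")
```

The tail probability at two standard deviations is about 2.3%, which is where the "2-3 out of 100" expectation comes from.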

If the filtering is this lax, then the generator would be both harmful to users, by allowing them to accidentally create illegal content, and useful to criminals, by letting them generate illegal content purposefully. I AM NOT SAYING THIS IS THE CASE. I am just saying it would be if the filtering system is not sufficiently strict. That is why I am interested to know what, exactly, is being done in terms of filtering.

  • Geok@lemmy.world (OP) · edited · 5 months ago

    Thanks for the insight. You’re saying that a potentially illegal prompt would be flagged and not result in an image at all, or that the algorithm would actually modify the prompt so that it conforms to requirements and then generate the image using some Stable Diffusion product (which should probably have its own filters)?

    I don’t see how this solves the age deviation problem, though. That problem comes into play because I’m not sure how well what the AI thinks looks 18 aligns with what a “reasonable person” thinks looks 18. It may align on average, but there will be outliers where the AI tries to create something that looks 18 that people might think looks 17. Obviously this is a pretty hard problem, and the vagueness of the law makes it harder.

    • allo@lemmy.world · 5 months ago

      you could figure out the answers you seek in like 10 min of generating images.

      • Geok@lemmy.world (OP) · edited · 5 months ago

        Thanks for the idea, but I don’t want to type in a questionable prompt to find out whether the filter works, precisely because I don’t know if the filter works… I also don’t want to type in a bona fide prompt and create 100 images when I don’t yet know whether it randomly creates illegal images (this is more of a hypothetical; I’m not really that afraid of that happening.)