I know that it allows NSFW content after age verification. I am curious how well it works (or how well it is intended to work) at preventing both intentional and accidental creation of illegal content. Florida law, for instance, criminalizes images of “sexual activity” involving fake individuals who appear under 18 to a “reasonable” person. So ideally the platform would prevent such images from being created intentionally, or at least make it sufficiently difficult that criminal intent would be required to produce them.
If it is easy to intentionally create illegal content, it is generally also possible to create it accidentally. For example, suppose a user creates and saves a batch of 100 images with a prompt specifying a 19-year-old subject “engaged in sexual activity” (which, according to the law, can be as simple as “lewd exposure of the buttocks”). Suppose further that the AI generator follows the prompt and, on average, agrees with the average age estimate of “reasonable” people, but with a standard deviation of 1 year in apparent age. Then the expected number of illegal images per batch (i.e., images judged 17 or younger by the average of all “reasonable” people) is about 2-3.
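The 2-3 figure above can be checked with a quick back-of-the-envelope calculation. This sketch assumes the perceived age is normally distributed with mean 19 and standard deviation 1, and counts an image as illegal when the average “reasonable person” estimate is 17 or lower, matching the “<=17 year old” criterion as stated; the real distribution of a generator’s outputs is of course unknown.

```python
from math import erf, sqrt

def normal_cdf(x: float, mean: float, sd: float) -> float:
    """P(X <= x) for X ~ Normal(mean, sd), via the error function."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Assumption: perceived age ~ Normal(19, 1); "illegal" means perceived age <= 17.
batch_size = 100
p_illegal = normal_cdf(17.0, mean=19.0, sd=1.0)
expected_illegal = batch_size * p_illegal

print(f"P(perceived age <= 17) = {p_illegal:.4f}")        # ~0.0228
print(f"Expected illegal images per {batch_size}: {expected_illegal:.1f}")  # ~2.3
```

Since 17 is two standard deviations below the prompted mean of 19, the tail probability is about 2.3%, which over a batch of 100 gives the roughly 2-3 expected illegal images claimed above.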
If the filtering is this lax, then the generator would be harmful to users, by letting them accidentally create illegal content, and also useful to criminals, for generating illegal content on purpose. I AM NOT SAYING THIS IS THE CASE. I am only saying it would be the case if the filtering system is not sufficiently strict. That is why I am interested to know what, exactly, is being done in terms of filtering.


As a non-Floridian, that law is of no interest to me. Same as the Penal Code of Japan, else all NSFW images would need those funny “decency bars” or be illegal.
I think federal law says about the same thing in different words, and most state laws are some variation of it.