Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead
Among images of the bombed-out homes and ravaged streets of Gaza, some stood out for the utter horror: bloodied, abandoned infants.
There is no way to watermark all AI images, as someone could just mod Stable Diffusion to remove the watermark. The best we can do is to treat any photographic evidence we see with doubt.
They intentionally sabotaged and killed journalists, defunded public media and privatized the rest, and bought out and censored social media. Now it's hard to tell which images are real and which aren't.
The only option, in my opinion, is for camera manufacturers to embed a cryptographic signature that can be passed to an algorithm to authenticate a photograph's metadata.
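For what it's worth, here's a minimal sketch of what in-camera signing could look like, assuming each camera ships with its own key pair whose public half the manufacturer publishes. The names (`device_key`, `photo_bytes`, the metadata string) are made up for illustration, not any real camera API:

    # Minimal sketch of in-camera signing, assuming a per-device Ed25519 key pair.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # At the factory: a per-device key pair is generated and burned into the camera.
    device_key = Ed25519PrivateKey.generate()
    device_pubkey = device_key.public_key()

    # At capture time: the camera signs the raw image bytes plus its metadata.
    photo_bytes = b"...raw sensor data..."
    metadata = b"2023-11-20T14:03:00Z|lat=31.5,lon=34.45|model=X100"
    signature = device_key.sign(photo_bytes + metadata)

    # Later, anyone with the manufacturer's published public key can check
    # that this exact image/metadata pair came from that device, unaltered.
    try:
        device_pubkey.verify(signature, photo_bytes + metadata)
        print("photo authenticates against the device key")
    except InvalidSignature:
        print("photo or metadata was altered after capture")

The signature proves the bytes came out of that device unmodified; it says nothing about whether the scene in front of the lens was real, which is the weak point the reply below pokes at.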
That could very easily be abused as some sort of DRM or vendor lock-in for photos. I would rather not.
Well, not necessarily. How about just embedding the following in the EXIF data: a digital signature from the original camera; a digital hash of the original image; and digital signatures for the publisher and the article where the pics will appear.
Any additional processing by a “social media content creator” - for example, adding captions to make a meme out of it - would also carry the prior chain of digital sigs and hashes.
Then, when a pic pops up on social media sites/apps, there can be little info bubbles that link to the original pic or article, or show who owns the camera along with the dates and timestamps of the pics.
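A rough sketch of how such a chain might be built: each party appends a record whose hash covers the current image bytes plus the previous link, then signs that hash. The record layout and the `append_record` helper are invented for illustration; real efforts like C2PA define their own manifest format:

    # Hedged sketch of a signature chain: camera -> publisher -> meme-maker.
    import hashlib
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def append_record(chain, image_bytes, signer_name, signer_key):
        """Add one link: hash of (current image + previous link's hash), signed."""
        prev_hash = chain[-1]["hash"] if chain else ""
        digest = hashlib.sha256(image_bytes + prev_hash.encode()).hexdigest()
        chain.append({
            "signer": signer_name,
            "hash": digest,
            "sig": signer_key.sign(digest.encode()).hex(),
        })
        return chain

    # Illustrative keys for each party in the chain.
    camera_key = Ed25519PrivateKey.generate()
    publisher_key = Ed25519PrivateKey.generate()
    memer_key = Ed25519PrivateKey.generate()

    chain = []
    original = b"...original photo bytes..."
    append_record(chain, original, "camera:X100-serial-1234", camera_key)
    append_record(chain, original, "publisher:example-news", publisher_key)

    captioned = original + b"...caption overlay..."  # the meme edit
    append_record(chain, captioned, "creator:meme-account", memer_key)

    # The chain would travel with the file (e.g. in an EXIF/XMP block), so a
    # social media app can walk it and render the "info bubble" custody trail.
    print(json.dumps(chain, indent=2))

A verifier walks the chain from the camera record outward, checking each signature against that party's published public key, and flags the image the moment any link fails.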
Garbage will always exist on social media, but at least we can have these little tools to verify authentic images.
Film a high-res screen projection, then.