Google Search will soon label images in image search results as AI-generated, edited with photo editing software, or taken with a camera. The label will be added to the "About this image" feature, according to The Verge, which spoke to Laurie Richardson, vice president of trust and safety at Google.
It’s not that simple. It’s not just a “this is or isn’t AI” boolean in the metadata. Hash the image, then sign the hash with a digital signature key. The signature becomes invalid if the image is tampered with, and you can’t forge a new signature without the signing key.
Once the image is signed, you can’t tamper with it and get away with it.
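To make that concrete, here is a minimal sketch of the hash-and-sign idea in Python, using the third-party cryptography package and an Ed25519 key pair. The key handling and the "photo.jpg" filename are illustrative assumptions, not part of any real camera or labeling system.

```python
from hashlib import sha256

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # would live inside the camera
verify_key = signing_key.public_key()        # published so anyone can verify

image_bytes = open("photo.jpg", "rb").read()
signature = signing_key.sign(sha256(image_bytes).digest())  # sign the hash

# Flip a single byte: the hash changes, so verification fails.
tampered = bytearray(image_bytes)
tampered[0] ^= 0x01

try:
    verify_key.verify(signature, sha256(bytes(tampered)).digest())
    print("signature valid")
except InvalidSignature:
    print("signature invalid: the image was modified after signing")
```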
The vulnerability is: how do you ensure an image isn’t faked before it reaches the signing step? On some level, I think this is a fundamentally unsolvable problem. But there may be ways to make it practically impossible to fake, at least for the average user without highly advanced resources.
Cameras don’t cryptographically sign the images they take. Even if that were added, there are billions of cameras in use that don’t support signing. And any sort of editing, resizing, or re-encoding would invalidate the signature, while almost no one posts pictures to the web without some editing; embedding 10+ MB straight-off-the-camera images in a web page is not practical.
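As a small illustration of that last point, re-saving a picture produces different bytes even when it looks the same, so its hash (and therefore any signature over that hash) no longer matches. A sketch, assuming Pillow is installed and "photo.jpg" is a placeholder file:

```python
from hashlib import sha256

from PIL import Image

original = open("photo.jpg", "rb").read()

# Re-encode the same picture; the pixels look the same but the bytes differ.
Image.open("photo.jpg").save("reencoded.jpg", quality=85)
reencoded = open("reencoded.jpg", "rb").read()

print(sha256(original).hexdigest())
print(sha256(reencoded).hexdigest())  # almost certainly a different hash
```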
You don’t even need a hacked camera to edit the metadata, you just need exiftool.
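The same point can be sketched in Python with Pillow rather than exiftool itself; the filename and the fake "Software" value below are made up for illustration:

```python
from PIL import Image

img = Image.open("photo.jpg")
exif = img.getexif()
exif[0x0131] = "Totally-Real-Camera 1.0"  # 0x0131 is the EXIF "Software" tag
img.save("relabeled.jpg", exif=exif)
```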
We aren’t talking about current cameras. We are talking about the proposed plan to make cameras that do cryptographically sign the images they take.
Here’s the link from the start of the thread:
https://arstechnica.com/information-technology/2024/09/google-seeks-authenticity-in-the-age-of-ai-with-new-content-labeling-system
This is the system the original post (https://www.seroundtable.com/google-search-image-labels-ai-edited-38082.html) is referring to when it mentions “C2PA”.