I don’t care if it’s in a shitposting community, a meme community, or a news community. If the image or text is generated it should be labeled as such, and failing to label it should be grounds to remove the post. AI slop is a plague and it’s only going to get worse as the tech matures (if it hasn’t already peaked).
I’m so tired of having to call it out every time I see it, especially when people in the comments think it’s a Photoshop job or (heavens help us) real. Human labor has real, tangible value that plagiarism machines can’t even pretend to imitate, and I’m sick of seeing that shit without it being labeled (so I can filter it out).
I think people should be free to choose whatever they want. But I also think it should be easy for them to make that choice. Currently there’s no easy way to identify all the AI images.
Maybe if we had some sort of intelligent algorithm that could filter things… (I kid. Crowdsourcing tags would probably be easier and more accurate.)
As part of training some generative models (GANs, specifically) you also train a second model, the discriminator, that learns to tell generated content from real content. So the solution is to use AI to detect AI. However, running a detector like that on every single uploaded image is computationally very expensive.
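To make that concrete, here’s a rough sketch of what such a detector looks like (assuming PyTorch; the tiny CNN, the random toy “images”, and the 64x64 size are all placeholders I made up, not anyone’s actual detector). The last few lines are the part that would have to run on every upload, which is where the cost piles up:

```python
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Tiny CNN that scores an image as real (0) or AI-generated (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),  # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                # 16x16 -> 1x1
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw logit; sigmoid gives P(generated)

# Toy training loop on random tensors standing in for labeled real/generated images.
model = Detector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    images = torch.randn(8, 3, 64, 64)            # batch of stand-in "images"
    labels = torch.randint(0, 2, (8, 1)).float()  # 1 = AI-generated, 0 = real
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: the step that would have to run on every single upload.
with torch.no_grad():
    score = torch.sigmoid(model(torch.randn(1, 3, 64, 64))).item()
print(f"P(AI-generated) = {score:.2f}")
```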
There’s an upper limit on detecting generative AI: at some point the generator can produce content that’s indistinguishable from real content. Not that we’re there yet; perhaps the current approach can’t even get there, and it will require models that actually understand lighting, materials, anatomy, etc. But considering even real images are just approximations limited by sample rate/resolution, AI only has to get to the point where it “simulates” accurately at a subpixel level to be as undetectable as text too small for the camera to pick up, no matter how many times a hacker says “enhance”.
A detector like that also only reliably catches images generated by the specific model it was trained against, so you’d need an entire library of detectors, which compounds the cost problem further.
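Just to spell out how that compounds, here’s a toy sketch (the detector names and the placeholder scoring functions are made up, not real tools): every upload has to go through every detector, so the work scales with the number of generators you want to catch, on top of the per-detector inference cost.

```python
from typing import Callable, Dict
import torch

# Hypothetical per-model detectors (names and constant scores are placeholders).
# In practice each entry would be a separately trained classifier like the one above.
detectors: Dict[str, Callable[[torch.Tensor], float]] = {
    "generator_a": lambda img: 0.12,
    "generator_b": lambda img: 0.71,
    "generator_c": lambda img: 0.08,
}

def flag_image(image: torch.Tensor, threshold: float = 0.5) -> bool:
    """Run every known detector on one image; cost grows linearly with the library size."""
    scores = {name: detect(image) for name, detect in detectors.items()}
    return any(score >= threshold for score in scores.values())

# Every upload pays (number of detectors) x (inference cost per detector).
print(flag_image(torch.randn(3, 64, 64)))
```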