I don’t care if it’s in a shitposting community, a meme community, or a news community. If the image or text is generated, it should be labeled as such, and failing to label it should be grounds to remove the post. AI slop is a plague, and it’s only going to get worse as the tech matures (if it hasn’t already peaked).
I’m so tired of having to call it out every time I see it, especially when people in the comments think it’s a Photoshop job or (heaven help us) real. Human labor has real, tangible value that plagiarism machines can’t even pretend to imitate, and I’m sick of seeing that shit without it being labeled (so I can filter it out).
There’s an upper limit on detecting generative AI: the point at which it can generate content indistinguishable from real content. Not that we’re there yet; perhaps the current approach can’t even get there, and it will require models that actually understand lighting, materials, anatomy, etc. But considering that even real images are just approximations limited by sample rate and resolution, AI only has to get to the point where it “simulates” accurately at the subpixel level to be as undetectable as text too small for a camera to pick up, no matter how many times a hacker says “enhance”.