I don’t care if it’s in a shitposting community, a meme community, or a news community. If the image or text is generated it should be labeled as such, and failing to label it should be grounds to remove the post. AI slop is a plague and it’s only going to get worse as the tech matures (if it hasn’t already peaked).
I’m so tired of having to call it out every time I see it, especially when people in the comments think it’s a Photoshop job or (heavens help us) real. Human labor has real, tangible value that plagiarism machines can’t even pretend to imitate, and I’m sick of seeing that shit without it being labeled (so I can filter it out).
Is this controversial?
I suggest banning AI images from communities that aren’t specifically made for AI images
I suggest banning photoshop images from communities that aren’t made specifically for photoshop images
Ideally, photoshopped images should all be flagged, as impractical as that might be.
So, anything taken on a current-gen mobile phone, is what you’re saying.
Yup, I know it’s impractical. Not only that, but because it’s a digital recreation it will never be a completely truthful representation of anything. It was the same with film, but those changes were understood and accepted. Doctored/manipulated images, though, were expected to be identified as such, for the most part.
Current-gen mobile phones aren’t adding fake objects to images
Photoshop, not AI. Read the context of the discussion
By “adding fake objects”, I was referring to Photoshop
Nobody claimed it does, so again: read the context of the discussion
Someone could photoshop an image to add things that aren’t actually there. In that case, it shouldn’t be hidden that the image isn’t real. A filter isn’t a major enough change for a tag.
People who don’t use Photoshop forget it’s used for:
I’m not saying it’s the most practical software for those applications, but it’s a primary tool for many photographers and artists.