I don’t care if it’s in a shitposting community, a meme community, or a news community. If the image or text is generated, it should be labeled as such, and failing to label it should be grounds to remove the post. AI slop is a plague, and it’s only going to get worse as the tech matures (if it hasn’t already peaked).

I’m so tired of having to call it out every time I see it, especially when people in the comments think it’s a Photoshop job or (heavens help us) real. Human labor has real, tangible value that plagiarism machines can’t even pretend to imitate, and I’m sick of seeing that shit without it being labeled (so I can filter it out).

    • DavidGA@lemmy.world · 5 days ago

      You would think so, but in my experience so far most group admins don’t give a shit, or even like it.

    • Pennomi@lemmy.world · 5 days ago

      I think people should be free to choose whatever they want. But I also think it should be easy for them to make that choice. Currently there’s no easy way to identify all the AI images.

      Maybe if we had some sort of intelligent algorithm that could filter things… (I kid. Crowdsourcing tags would probably be easier and more accurate.)
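
      A rough sketch of what that crowdsourced tagging could look like, if someone wanted to wire it up (everything here, names and the threshold included, is invented for illustration):

```python
# Toy crowdsourced "AI" tagging: a post counts as AI-generated once enough
# distinct users have tagged it, and a client can filter its feed on that.
from collections import defaultdict

TAG_THRESHOLD = 3  # invented number: votes needed before the tag sticks

ai_tags = defaultdict(set)  # post_id -> set of users who tagged it as AI

def tag_as_ai(post_id, user):
    ai_tags[post_id].add(user)

def is_tagged_ai(post_id):
    return len(ai_tags[post_id]) >= TAG_THRESHOLD

def filter_feed(post_ids):
    # hide anything the crowd has tagged as AI
    return [p for p in post_ids if not is_tagged_ai(p)]

for user in ("alice", "bob", "carol"):
    tag_as_ai("post-42", user)

print(filter_feed(["post-42", "post-7"]))  # ['post-7']
```

      The nice part is that nothing here needs a model at all; the hard parts in practice would be vote brigading and deciding who counts as a trustworthy tagger.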

      • KiwiHuman@lemm.ee · 4 days ago

        As part of training an AI, you can also train a second AI that detects whether something is AI-generated or not. The solution is to use AI to detect AI. However, running this on every single image is computationally very expensive.
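
        One way around the cost, sketched below with completely made-up stand-ins for both stages: run a cheap screen (say, a metadata check) on everything, and only pay for the expensive classifier on the images the screen flags.

```python
# Toy two-stage pipeline: cheap screen first, expensive check only on hits.
# Both stages are fake stand-ins; a real deployment would put an actual
# trained classifier in the second stage.

def cheap_screen(image):
    # e.g. generated images often ship without camera EXIF metadata
    return image.get("exif") is None

def expensive_check(image):
    # stand-in for a costly learned detector
    return image.get("score", 0.0) > 0.5

def detect_ai(images):
    return [img["id"] for img in images
            if cheap_screen(img) and expensive_check(img)]

feed = [
    {"id": "a", "exif": "Canon EOS", "score": 0.9},  # real photo: never reaches stage 2
    {"id": "b", "exif": None, "score": 0.9},         # screened in, detector fires
    {"id": "c", "exif": None, "score": 0.1},         # screened in, detector clears it
]
print(detect_ai(feed))  # ['b']
```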

        • Buddahriffic@lemmy.world · 4 days ago

          There’s an upper limit on detecting generative AI before the generative AI can generate content that is indistinguishable from real content. Not that we’re there yet; perhaps the current approach can’t even get there, and it will require models that understand lighting, materials, anatomy, etc. But considering even real images are just approximations based on sample rate/resolution, AI only has to get to the point where it “simulates” accurately at a subpixel level to be as undetectable as text too small for a camera to pick up, no matter how many times a hacker says “enhance”.

        • Pennomi@lemmy.world · 4 days ago

          It also only detects images generated using that specific model, so you’d need an entire library of those detectors, which compounds the problem further.
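
          The “library of detectors” shape of that might look like this (detector names and signals are all made up):

```python
# Toy registry of per-generator detectors: an image is flagged if any fires.
# Each detector is a fake placeholder keyed on an invented artifact flag.

def looks_like_sd(image):
    return image.get("sd_artifact", False)

def looks_like_dalle(image):
    return image.get("dalle_artifact", False)

DETECTORS = {
    "stable-diffusion": looks_like_sd,
    "dall-e": looks_like_dalle,
}

def flagged_by(image):
    # cost and maintenance grow with every new generator you have to cover
    return [name for name, check in DETECTORS.items() if check(image)]

print(flagged_by({"sd_artifact": True}))  # ['stable-diffusion']
```

          Every new generator means another entry in the registry, which is exactly the compounding problem.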

  • werefreeatlast@lemmy.world · 4 days ago

    Nah, we’re cool. You and I can tell, but AI won’t. So it will enshittify itself into uselessness.

    We should all strive to cause confusion in all sorts of databases so AI can’t unfuck itself.

  • RandomVideos@programming.dev · 4 days ago

    Is this controversial?

    I suggest banning AI images from communities that aren’t specifically made for AI images.

        • Ceedoestrees@lemmy.world · 3 days ago

          People who don’t use photoshop forget it’s used for:

          1. Editing and colour-correcting images.
          2. Graphic Design and digital painting.
          3. Straight up drawing.

          I’m not saying it’s the most practical software for those applications but it’s a primary tool for many photographers and artists.

          • johncandy1812@lemmy.ca · 4 days ago (edited)

            Yup, I know it is impractical; not only that, but because it is a digital recreation, it will never be a completely truthful representation of anything. It was the same for film, but the changes were understood and accepted. Doctored/manipulated images, though, were expected to be identified as such, for the most part.

  • rustyfish@lemmy.world · 5 days ago

    I agree with you so hard, I actually have to downvote your post because of the community.

    • chuckleslord@lemmy.world (OP) · 5 days ago

      You’d be surprised how many times I’ve gotten “well, it doesn’t matter cause this is such and such community”, which is why I posted it here.

  • KiwiHuman@lemm.ee · 4 days ago

    There is a paradox here, as there are two possibilities:

    A) AI-generated “slop” is obviously bad quality, therefore a label is unnecessary as it is obvious.

    Or

    B) The AI-generated content looks as good as human creations, therefore it is not slop and a label is unnecessary.

    • johncandy1812@lemmy.ca · 4 days ago (edited)

      If someone makes an AI clip of a politician saying something they didn’t, should we believe it because the AI was convincing enough?

      Really, photoshopped images meant to seem as real as possible should be flagged too. That only sounds ridiculous because it has become the norm to accept them.

      • j4k3@lemmy.world · 4 days ago (edited)

        It is absolutely critical that the capabilities are broadly adopted because in the future, the difference will be indiscernible.

        This is absolutely a lose-lose, nuclear-proliferation-like situation and cannot be avoided. If the technology is unknown to the general populace, it holds great power over them. There are more nuanced uses of this new toolset than anyone has yet realized. One could respond to populist media and digital social misalignment in complex ways that none of us can see or filter. The tool use is not a simple polar dichotomy. One could use a tool to monitor social sentiment and respond in ways to steer the conversation using one’s own likeness and social presence at any instance of qualia, from a corporate account, to a think tank, to a political figure.

        Those of us with the time and ability to explore such things should be welcomed and listened to carefully. Most people in this space are not the assholes you are angry or frustrated with. I don’t give a @#$% about tricking anyone or replacing anything. I only care about what I am curious about and learning new things to occupy my time in social isolation from physical disability.

        I could share a lot more, but when people act stupid, I do not share much at all. I’m capable of independence in exploring unique paths and applications. The more grounded I am from engaging with others, the more effective I am at doing useful things and sharing them. I’m not some savant genius type at all. I’m a persistent rogue that explores off the beaten path in empirically useful but often unexpected ways. It is very easy to misunderstand the context of things I talk about and might share. I am often wrong about several assumptions and details, but if one takes the time to look into my results, the empirical patterns that ground what I am saying will emerge, and those nuggets are often useful. This is the real, messy edge of amateur and hobby culture.

        When I encounter negative prejudice, I’m not going to endure the stupidity of those that fail to contextualize and see the value of my abstractions through the haze of my explorations. I just want to share something I find interesting or useful as I understand it in my continuously moving target of learning. Anyone that responds to that kind of post or comment negatively, as if a person’s knowledge is some kind of static state, is beyond useless and stupid to me. I do not care about egos and narcissism. I do not care about oversimplified idealism of right or wrong. I care about curiosity and empirical usefulness, because we live in the universe of irrational numbers, where booleans and integers do not exist except in fantasies of the mind and the limited registers of computational machines that are always wrong in their truncation of reality.

        It is just a tool. Some are sour because evolution dictates they must be. The culture of artificial scarcity and unnecessary pressure produces and rewards assholes. It is this culture that is the problem, not the tool. We live in a dystopia that is reigned over by assholes. Sam Altman is the asshole funding the culture of blaming this new tool. Monopoly in this space can be used to exploit the status quo for more profit. This exploitation only works in a monopoly where the tool is proprietary. In the real world, with an open source tool, the time it saves opens up great wealth to the average person and business. Our culture can expand by reinvesting our newly acquired wealth. This is the intelligent use of the new tool. Those that can only see the present as some kind of final state to extract value from are idiotic parasites of humanity. We can become something more, as has occurred for thousands of years of human innovation. These proprietary parasites of humanity are twisting reality to subject us to their vampirism of extracted wealth and subjugation. I reject this narrative and stupidity because I can clearly see the big picture. I wish y’all would disconnect, step back, and see the big picture too. Nothing about AI tools is a negative unless you fall in line with Altman’s dystopian vision.

    • WolfLink@sh.itjust.works · 3 days ago

      A) Some people are really really bad at noticing AI slop. I’ve seen some really obvious AI generated images with people debating if it’s real or not. Unless those comments were AI and I’m the one who can’t tell…

      B) Honestly even good AI generated content should come with a disclaimer IMO.

  • Blue_Morpho@lemmy.world · 5 days ago (edited)

    I spent hours Photoshopping Elon Musk’s face onto Scarlett O’Hara (took so long because I made myself do it with Gimp 3). If I could have done it with AI, the results would likely have been better and that time wasted making a meme is something I won’t ever get back.

    • n3m37h@lemmy.dbzer0.com · 4 days ago

      I’d take the terrible Photoshop meme over the AI slop meme any day. One takes effort; the other wastes electricity.

    • chuckleslord@lemmy.world (OP) · 5 days ago

      The result would’ve been worthless trash, because that’s all AI is.

      Thank you for not contributing to the decay of artistic ability and creativity by actually taking the time to do it yourself. I’d rather have a mountain of human-made, low-effort shit than even one “good” piece of generative “art”. Plagiarism machines can all die now.

      • uranibaba@lemmy.world · 4 days ago

        The result would’ve been worthless trash because that’s all ai is.

        So an image created by an AI is bad because it was created by an AI? Regardless of the content?
        How about an image created by an AI and then worked on by a human?

        • chuckleslord@lemmy.world (OP) · 4 days ago

          If your process is to have a plagiarism machine output crap, and then you work on top of that, that’s your fucking choice. I wouldn’t do it, but to each their own.

          • Blue_Morpho@lemmy.world · 4 days ago

            I took a digital photo of Musk off of Google and cut/pasted it onto a frame grab from Gone with the Wind.

            I hand crafted the plagiarism using plagiarism.

            • SmoothLiquidation@lemmy.world · 4 days ago

              This is it. AI is a tool just like anything else. Before AI, people would complain that a photo was ’shopped, and before that it was that the models in magazines were airbrushed.

              All of these are tools that are at an artist’s fingertips and a good artist can do something great with if they put the time into it.

              Yes, lazy people can create crap with it if they want but you really can’t be blaming the tool for what stupid humans do with it.

  • Tyoda@lemm.ee · 5 days ago

    You and the previous poster who complained about people complaining about AI slop should have a rap battle.

  • ooli2@lemm.ee · 4 days ago

    So what AI detection tool should we use to detect it?

  • southsamurai@sh.itjust.works · 4 days ago

    Dammit, I hated downvoting this because I agree wholeheartedly.

    But this is a common sentiment; it just isn’t getting reported on as heavily as all the advertising disguised as reporting. Even outside of Lemmy, people are bitching about exactly this. Not just online either, and not just tech-minded people.