I've heard a bunch of explanations, but most of them seem emotional and aggressive. While I respect that this is an emotional subject, I can't really understand opinions that boil down to "theft" and get aggressive about it.

While there are plenty of models that were trained on copyrighted material without consent (which is piracy, not theft, but close enough when we're talking about small businesses or individuals), is there an argument against models that were legally trained? And if so, is it something beyond the saying that AI art is lifeless?

  • oce 🐆 · 1 day ago

    Isn’t this point also valid for any kind of automation? Machines removed work from manual laborers, and software engineers have been removing work from manual and office workers since the field began, way before LLMs. The point that artists actually love their work could also be made for other people whose work has been automated before.
    I think the real issue is that automation should benefit everyone equally, and not only its owners.

    • WoodScientist@lemmy.world · 21 hours ago

      The key in my mind is that this technology cannot work independently. A bucket excavator can replace the work of many people digging by hand, and the operation of the machine truly replaces those laborers. Some hand labor is still required in any excavation, but the machine itself is capable of operating just fine without the workers it replaced.

      But LLM image generators? They are only possible because of the work of artists. They are directly trained on artists’ work. Even worse, the continued existence of LLMs requires the never-ending contribution of humans. When AI image generators are trained on the results from AI image generators, things rapidly degenerate into literal static. It’s making a copy of a copy. If all art becomes made by LLMs, then the only recent data to train future models will be the output of other LLMs, and the whole thing collapses like a snake devouring its own tail.

      This is also the crucial difference between how image generators and actual artists work. Some will say that LLMs use the same learning process that humans do: the image generator trains on pre-existing art, and so does a human artist, proponents of AI will say.

      But we can see the flaw in this: real artists do not suffer generational decay. Human artists have trained off the work of other artists in a chain unbroken since before the rise of civilization. Yes, artists can learn technique and gain inspiration from the work of other artists, but humans are capable of true independent creation. Image generators, OTOH, are just blindly copying and summarizing the work of others. They have no actual sense of what art is, what makes it good, or what gives it soul. They don’t even have a sense of what makes an image comprehensible. They’re just playing a big blind correlation game of inputs and outputs. And so, if you train one AI off another AI’s output, it decays like making a copy of a copy.

      This is a crucial difference between AI “art” and human art. Human art is an original creation. As such, new art can be endlessly created. AI “art” can only blindly copy. So unless the AI can get continual references from actual real human art, it quickly diverges into uselessness.

      The ditch digger replaced by an excavator has no real means to legally object. They were paid for their previous jobs, and are simply no longer needed. But real human artists and AI? This software is going to be a never-ending vampire on their creative output. It has only been created by stealing their past work, and it will only remain viable if it can continue to steal their work indefinitely into the future.

      • MTK@lemmy.world (OP) · 2 hours ago

        Wow, thank you, I think this is the first argument that clicked for me.

        But it does raise two questions for me:

        • If the technology ever gets to a point where it does not degenerate into static from its own feedback loop, would it then be more like an excavator?
        • What if this is the start of a future (understandably a bad start) where artists get paid to train AI models? Kind of like an engineer who designs a factory.
        • WoodScientist@lemmy.world · 1 hour ago

          • Can it ever get to the point where it wouldn’t be vulnerable to this? Maybe. But it would require an entirely different AI architecture than anything that any contemporary AI company is working on. All of these transformer-based LLMs are vulnerable to this.

          • That would be fine. That’s what they should have done to train these models in the first place. Instead, they were too cheap to pay for training data and chose to build companies based on IP theft. If they had hired their own artists to create training data, I would still lament the commodification and corporatization of art. But that’s something that’s been happening since long before OpenAI.

          • MTK@lemmy.world (OP) · 1 hour ago

            Thank you. Out of all of these replies, I feel like you really hit the nail on the head for me.

    • Susaga@sh.itjust.works · 1 day ago

      Technically, yes, but I would argue that this is worse.

      An excavator saves you days of digging a single hole. An assembly line saves you from having to precisely construct a toy. A printer saves you from having to precisely duplicate a sheet of paper. All of this is monotonous, soul-destroying work that people are happy they don’t need to do.

      But you still need to decide where to dig the hole. You still need to design the toy. You still need to fill in the first sheet of paper. All of the work left over is more creatively fulfilling.

      We are now attempting to automate creativity.