• Boozilla@lemmy.world · 11 points · 1 year ago

    I’m starting to see articles written by folks much smarter than me (folks with lots of letters after their names) warning about AI models that train on internet content. Some experiments have shown that if you keep training models on AI-generated content, they degrade quickly. I don’t understand how or why this happens, but it reminds me of the quality loss you get when you repeatedly scan or fax an image. So one possible dystopian future (of many) is an internet full of incomprehensible AI word-salad content.
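    The degradation described above (often called “model collapse”) can be illustrated with a toy simulation. This is not the setup from the actual papers, just a hypothetical sketch: each “generation” fits a Gaussian to a handful of samples drawn from the previous generation’s fitted Gaussian, so finite-sample estimation noise compounds, like re-scanning a fax.

```python
import random
import statistics

def collapse_demo(generations=500, n_samples=5, seed=0):
    """Toy 'model collapse': each generation re-fits a Gaussian to a small
    sample drawn from the previous generation's fitted Gaussian.

    Because each fit only sees a few samples, estimation error compounds
    across generations and the fitted spread tends to shrink toward zero.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "real" data distribution we start from
    for _ in range(generations):
        # "Generate content" from the current model...
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # ...then "train the next model" on that generated content.
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
    return mu, sigma

mu, sigma = collapse_demo()
# sigma ends up far below the original 1.0: the distribution has collapsed,
# losing almost all of the variety the first generation had.
print(f"final mean={mu:.4f}, final std={sigma:.2e}")
```

    The real experiments use language models rather than Gaussians, but the mechanism is analogous: rare tails of the distribution get lost first, and each generation inherits the previous one’s errors.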

    • phx@lemmy.world · 14 points · 1 year ago

      It’s like AI inbreeding. Flaws get amplified over time unless new material is added.

      • Nanachi@lemmy.world · 1 point · edited · 1 year ago

        AI generation loss? I wonder if this could be mitigated by training different kinds of models (linguistic logic instead of word prediction).

      • Boz (he/him)@lemmy.one · 0 points · 1 year ago

        Thanks, now I am just imagining all that code getting it on with a whole bunch of other code. ASCII all over the place.

        • phx@lemmy.world · 2 points · 1 year ago

          Oh yeah baby. Let’s fork all day and make a bunch of child processes!