• FMT99@lemmy.world · 8 months ago

    Why would you ask a bot to generate a stereotypical image and then be surprised when it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.

    • 0x0@programming.dev · 8 months ago

      So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

      • Grimy@lemmy.world · 8 months ago (edited)

        It just means there’s a bias in the training data, and that bias is probably being amplified during training.

        It answers what’s relevant according to its training.
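        A minimal sketch of that amplification effect, with made-up numbers: if a label dominates the training data, anything resembling greedy (most-likely) decoding will output it even more often than the data would suggest — a 70/30 skew in the data becomes 100/0 in the outputs.

```python
from collections import Counter

# Hypothetical training data: 70% of examples pair the prompt with
# "woman", 30% with "man". Numbers are invented for illustration only.
training_labels = ["woman"] * 70 + ["man"] * 30

counts = Counter(training_labels)

# Sampling proportionally to the counts would merely mirror the 70/30
# bias, but greedy (argmax) decoding picks the majority label every time,
# so the skew in the data is amplified to 100% of the outputs.
greedy_output = counts.most_common(1)[0][0]
data_bias = counts["woman"] / sum(counts.values())

print(greedy_output)  # the majority label wins every single time
print(data_bias)      # the underlying skew in the data is only 0.7
```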