• remotelove@lemmy.ca
    8 months ago

    Math and limited data, probably. If the AI “sees” that its forces outnumber an opponent’s, or that a nuke doesn’t conflict with its programmed goals, it’s efficient to just wipe the opponent out. To your point, if the training data or inputs have any bias, it will probably be amplified in the results.

    (Chat bots are trained on data. How that data is curated is going to be extremely variable.)

    • Rentlar@lemmy.ca
      8 months ago

      How do we eliminate human violence forever?

      Easy! Just eliminate all of humankind!

      (Bard, ChatGPT, you’d better not be reading this)

    • hangukdise@lemmy.ml
      8 months ago

      That data doesn’t contain examples of diplomacy, since that sort of thing is generally discreet/secret.