Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • kromem@lemmy.world · 9 months ago

    No, it can solve word problems it has never seen before, with fairly intricate reasoning. LLMs can even play chess at grandmaster level without ever duplicating games from the training set.
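
    Something like this minimal sketch is all it takes to check the word-problem claim yourself. `query_llm` is a hypothetical placeholder for whatever chat API you prefer, and the problem template is just an illustration; the point is that freshly randomized numbers make the exact text essentially guaranteed to be absent from any training corpus.

    ```python
    import random

    def query_llm(prompt: str) -> str:
        """Hypothetical helper: send `prompt` to an LLM and return its reply."""
        raise NotImplementedError("wire this up to your preferred chat API")

    def novel_word_problem():
        """Build a word problem with fresh random numbers, so the exact
        wording is effectively guaranteed to be unseen during training."""
        a, b, c = (random.randint(100, 999) for _ in range(3))
        question = (
            f"A warehouse holds {a} crates. {b} crates are shipped out, "
            f"then {c} more arrive. How many crates are in the warehouse now?"
        )
        return question, a - b + c

    question, expected = novel_word_problem()
    print(question)
    print("expected answer:", expected)
    # print("model answered:", query_llm(question))
    ```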

    Most of Lemmy has no genuine grounding in the domain and hasn’t been following the research over the past year, research that invalidates much of the “common knowledge” you often see regurgitated on the topic.

    For example, LLMs build world models from their training data and can combine skills in ways that never appear together in that data.
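
    Here’s a rough sketch of what I mean by combining skills. `query_llm` is again a hypothetical placeholder and the prompts are made-up illustrations: each one fuses two tasks that are individually common in training data but almost never combined, and a capable model can still satisfy both constraints at once.

    ```python
    def query_llm(prompt: str) -> str:
        """Hypothetical helper: send `prompt` to an LLM and return its reply."""
        raise NotImplementedError("wire this up to your preferred chat API")

    # Two pools of "skills" that rarely co-occur in training data.
    tasks = ["Explain quicksort", "Derive the quadratic formula"]
    styles = ["as a sea shanty", "in the style of a legal contract"]

    # Every pairing is a composition the model almost certainly never saw verbatim.
    composed_prompts = [f"{task}, written {style}." for task in tasks for style in styles]

    for prompt in composed_prompts:
        print(prompt)
        # print(query_llm(prompt))  # check whether both constraints are met
    ```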

    They do have shortcomings: being unable to identify what they don’t know is a key one.

    But to be fair, apparently most people on Lemmy can’t do that either.