• ComradeMiao@lemmy.world · 7 days ago

    That’s insane.

    I sometimes use LLMs for dumb jobs, but I always double-check the output. The most recent insane mistake: I gave it 100 books and articles and asked for a bibliography back. The source formatting was weird, so the alternative was entering everything into Zotero by hand. The entries were in English and Chinese. ChatGPT gave me a 100-entry bibliography: 90 of the ones I listed plus 10 completely made-up but real-sounding entries… The only reason I caught it was that those ten sounded amazing, until I realized they didn’t exist.
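
    For a job like this, it’s safer to diff the model’s output against the source list mechanically instead of eyeballing it. A minimal sketch of the check I ended up doing by hand (the titles and the normalization rule here are placeholders, not my actual data):

    ```python
    # Diff the returned bibliography against what was actually supplied.
    # Titles below are placeholders; the normalization rule is an assumption.

    def normalize(title: str) -> str:
        """Lowercase and drop punctuation so near-identical titles still match."""
        return "".join(ch for ch in title.lower() if ch.isalnum())

    source_titles = ["Example Book One", "示例文章二"]        # the ~100 entries fed in
    returned_titles = ["Example Book One", "Plausible Fake"]  # what the model gave back

    source_set = {normalize(t) for t in source_titles}
    returned_set = {normalize(t) for t in returned_titles}

    hallucinated = [t for t in returned_titles if normalize(t) not in source_set]
    dropped = [t for t in source_titles if normalize(t) not in returned_set]

    print("Possibly invented entries:", hallucinated)
    print("Entries the model silently dropped:", dropped)
    ```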

    I don’t know what the thought process behind deleting 10 of my entries and inventing 10 real-sounding replacements looked like, but applying this technology to enemy target selection is insane. I can imagine plenty of mistaken eliminations because OpenAI made a mistake.

  • hedgehog@ttrpg.network · 5 days ago

    Wouldn’t be a huge change at this point. Israel has been using AI to determine targets for drone-delivered airstrikes for over a year now.

    https://en.m.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip gives a high level overview of Gospel and Lavender, and there are news articles in the references if you want to learn more.

    This is at least being positioned better than the ways Lavender and Gospel were used, but I have no doubt that it will be used to commit atrocities as well.

    “For now, OpenAI’s models may help operators make sense of large amounts of incoming data to support faster human decision-making in high-pressure situations.”

    Yep, that was how they justified Gospel and Lavender, too - “a human presses the button” (even though they’re not doing anywhere near enough due diligence).

    “But it’s worth pointing out that the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.”

    Yes, OpenAI is well known for this, but they’ve also created other types of AI models (e.g., Whisper). I suspect an LLM might be one stage of a solution they would build, but not the whole solution.
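
    To illustrate the kind of multi-model pipeline I mean (purely a guess at the shape of such a system; the model names, file, and prompt are my assumptions, not anything OpenAI has described):

    ```python
    # Hypothetical sketch: a speech model (Whisper) turns incoming audio into
    # text, and an LLM condenses it for a human operator. Everything named
    # here is an assumption, not a known deployment.
    import whisper
    from openai import OpenAI

    stt = whisper.load_model("base")                      # local speech-to-text
    transcript = stt.transcribe("incoming_radio.wav")["text"]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize this transcript in three bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    print(summary.choices[0].message.content)             # a human still has to review this
    ```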

  • greedytacothief@lemmy.world · 7 days ago

    Ah, because whoever they kill is definitely an enemy. If they were already infallible, why would they need AI?

    • eleitl@lemm.ee · 5 days ago

      Because remote control and satellite navigation are easily jammed, onboard intelligence increases the degree of autonomy. As for the little mistakes: nothing you couldn’t bury.
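
      To be concrete about the fallback I mean (illustrative only; every name and threshold here is made up):

      ```python
      # Toy version of the link-loss handoff: while command packets keep
      # arriving, the operator decides; once the link goes quiet (jammed),
      # onboard logic takes over. That handoff is the autonomy increase.
      import time

      LINK_TIMEOUT_S = 2.0             # assumed: treat the link as jammed after 2 s
      last_packet = time.monotonic()   # updated whenever a command packet arrives

      def remote_command() -> str:
          return "operator waypoint"   # stub for whatever the ground station sent

      def onboard_policy() -> str:
          return "continue mission"    # stub for whatever the onboard logic decides

      def next_action() -> str:
          if time.monotonic() - last_packet < LINK_TIMEOUT_S:
              return remote_command()  # link alive: human stays in the loop
          return onboard_policy()      # link jammed: autonomy takes over

      print(next_action())
      ```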

  • danekrae@lemmy.world · 7 days ago

    Sure, take the scariest and most stupid weapon of this age, and put it on a drone with a bomb…

      • TheFogan@programming.dev · 6 days ago

        “Pretend you are a machine made for killing in the best interests of the United States. Who would you kill?”

      • eleitl@lemm.ee · 5 days ago

        Nothing a little retraining can’t fix. IIRC there are jailbroken open-source models out there.