@ylai@lemmy.ml to Not The Onion@lemmy.world · English · 5 months ago

AI chatbots tend to choose violence and nuclear strikes in wargames
www.newscientist.com

52 comments · cross-posted to: futurology@futurology.today, technology@lemmy.world, artificial_intel@lemmy.ml
@fidodo@lemmy.world · English · 5 months ago

> These results come at a time when the US military has been testing such chatbots based on a type of AI called a large language model (LLM) to assist with military planning during simulated conflicts

Jesus fucking Christ, we’re all doomed.