BlushedPotatoPlayers@sopuli.xyz to Technology@lemmy.world · English · 11 months ago

AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)

cross-posted to: futurology@futurology.today, nottheonion@lemmy.world, artificial_intel@lemmy.ml
kibiz0r@midwest.social · English · 11 months ago

For AGI, sure, those kinds of game-theory explanations are plausible. But an LLM (or any other kind of statistical model) isn't extracting concepts, forming propositions, or estimating values. It never gets beyond the realm of tokens.
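To make the "realm of tokens" point concrete, here is a minimal sketch of what a statistical language model does at inference time, using a hypothetical toy bigram table in place of a neural network. The table and function names are illustrative assumptions, not any real model's internals; the point is only that the interface is tokens in, token probabilities out.

```python
# Toy illustration: a "language model" reduced to next-token statistics.
# The bigram table below is hypothetical; real LLMs replace it with a
# neural network, but the loop is the same shape: look at the last
# token(s), pick a likely next token, repeat. Nothing here represents
# a concept, proposition, or value -- only token-to-token frequencies.
bigram = {
    "the": {"model": 0.6, "token": 0.4},
    "model": {"predicts": 1.0},
    "predicts": {"tokens": 1.0},
}

def generate(start, steps):
    """Greedy decoding: at each step, emit the most probable next token."""
    out = [start]
    for _ in range(steps):
        nxt = bigram.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))
    return out

print(generate("the", 3))  # ['the', 'model', 'predicts', 'tokens']
```

Whether a wargame scenario ends in escalation or de-escalation is, on this view, just a question of which continuation the training data makes more probable, not the result of the model weighing outcomes.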