In a new study, many people doubted or abandoned false beliefs after a short conversation with the DebunkBot.
By Teddy Rosenbluth
Sept. 12, 2024

Shortly after generative artificial intelligence hit the mainstream, researchers warned that chatbots would create a dire problem: As disinformation became easier to create, conspiracy theories would spread rampantly.
Now, researchers wonder if chatbots might also offer a solution.
DebunkBot, an A.I. chatbot designed by researchers to “very effectively persuade” users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people’s convictions, according to a study published on Thursday in the journal Science.
Such false theories are believed by up to half of the American public and can have damaging consequences, like discouraging vaccinations or fueling discrimination.
The new findings challenge the widely held belief that facts and logic cannot combat conspiracy theories. The DebunkBot, built on the technology that underlies ChatGPT, may offer a practical way to channel facts.
“The work does overturn a lot of how we thought about conspiracies,” said Gordon Pennycook, a psychology professor at Cornell University and author of the study.
Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of arguing or explaining would pull that person out.
The theory was that people adopt conspiracy theories to sate an underlying need to explain and control their environment, said Thomas Costello, another author of the study and assistant professor of psychology at American University.
But Dr. Costello and his colleagues wondered whether there might be another explanation: What if debunking attempts just haven’t been personalized enough?
Since conspiracy theories vary so much from person to person, and each person may cite different pieces of evidence to support their ideas, perhaps a one-size-fits-all debunking script isn’t the best strategy. A chatbot that can counter each person’s conspiratorial claim of choice with troves of information might be much more effective, the researchers thought.
To test that hypothesis, they recruited more than 2,000 adults across the country and asked them to describe a conspiracy theory they believed in and to rate how much they believed it on a scale from zero to 100.
People described a wide range of beliefs, including theories that the moon landing had been staged, that Covid-19 had been created by humans to shrink the population and that President John F. Kennedy had been killed by the Central Intelligence Agency.

[Image: A screen grab from the DebunkBot website, which defines conspiracy theories and asks viewers to describe any they find credible or compelling. Credit: DebunkBot]

Then some of the participants had a brief discussion with the chatbot. They knew they were chatting with an A.I. but didn’t know the purpose of the discussion. Participants were free to present the evidence that they believed supported their positions.
One participant, for example, believed the 9/11 terrorist attacks were an “inside job” because jet fuel couldn’t have burned hot enough to melt the steel beams of the World Trade Center. The chatbot responded:
“It is a common misconception that the steel needed to melt for the World Trade Center towers to collapse,” it wrote. “Steel starts to lose strength and becomes more pliable at temperatures much lower than its melting point, which is around 2,500 degrees Fahrenheit.”
After three exchanges, which lasted about eight minutes on average, participants again rated how strongly they held their beliefs.
On average, their ratings dropped by about 20 percent; about a quarter of participants no longer believed the falsehood. The effect also spilled into their attitudes toward other poorly supported theories, making the participants slightly less conspiratorial in general.
Ethan Porter, a misinformation researcher at George Washington University who was not associated with the study, said that what separated the chatbot from other misinformation interventions was how robust the effect seemed to be.
When participants were surveyed two months later, the chatbot’s impact on mistaken beliefs remained unchanged. “Oftentimes, when we study efforts to combat misinformation, we find that even the most effective interventions can have short shelf lives,” Dr. Porter said. “That’s not what happened with this intervention.”
Researchers are still teasing out exactly why the DebunkBot works so well.
An unpublished follow-up study, in which researchers stripped out the chatbot’s niceties (“I appreciate that you’ve taken the time to research the J.F.K. assassination”), produced the same results, suggesting that it is the information, not the chatbot itself, that changes people’s minds, said David Rand, a computational social scientist at the Massachusetts Institute of Technology and an author of the paper.
“It is the facts and evidence themselves that are really doing the work here,” he said.
The authors are currently exploring how they might recreate this effect in the real world, where people don’t necessarily seek out information that disproves their beliefs.
They have considered posting links to the chatbot in forums where these beliefs are shared, or buying ads that pop up when someone searches for a keyword related to a common conspiracy theory.
For a more targeted approach, Dr. Rand said, the chatbot might be useful in a doctor’s office to help debunk misapprehensions about vaccinations.
Brendan Nyhan, a misperception researcher at Dartmouth College who was also not associated with the study, said he wondered whether the reputation of generative A.I. might eventually change, making the chatbot less trusted and therefore less effective.
“You can imagine a world where A.I. information is seen the way mainstream media is seen,” he said. “I do wonder if how people react to this stuff is potentially time-bound.”
I did it with the Business Plot.
It set a staggeringly high standard for evidence, and it basically implied that I was extremely emotionally invested in this and that I needed to be cautious about overextending my scepticism towards authority figures, as if Smedley Butler wasn’t himself an authority figure.
It was super condescending, and it basically took an approach blending Motivational Interviewing with concern-trolling over the consequences of my believing that a coup was very likely being plotted against FDR. Because, essentially, won’t somebody think of the democratic institutions and how I engage with them??
Wait… it argued AGAINST the Business Plot? Like the thing people testified about in front of Congress? Even lib sources acknowledge it existed, siding with Butler.
In other words, the AI skimmed portions of sources that stated it was originally not believed, but skipped over the Business Plot becoming a concern within a few weeks/months after Butler’s testimony, and then ignored the post-WWII/Depression research into it.
Nice """"AI"""" you have there.
That’s how it was with the Kennedy assassination. It just kept repeating things like “no story can be verified 100%, but the most likely explanation according to experts is that Oswald acted alone.” Just assertions without evidence.
So, they created a liberal smugbot.