AI is better than humans at persuading conspiracy theorists, and conversations with GPT-4 Turbo durably weaken belief in conspiracy theories
As reported in:
Durably reducing conspiracy beliefs through dialogues with AI | Science
https://www.science.org/doi/10.1126/science.adq1814
AI chatbots might be better at swaying conspiracy theorists than humans | Ars Technica
https://arstechnica.com/science/2024/09/study-conversations-with-ai-chatbots-can-reduce-belief-in-conspiracy-theories/
Chats with AI bots found to damp conspiracy theory beliefs
https://www.ft.com/content/909f33d1-0d33-440b-a935-cfa65d2fccd1
According to Thomas Costello, a psychologist at American University, part of what makes conspiracy theories so hard to counter is that the specific content of the theories people believe varies from person to person.
'Conspiracy theories are very diverse. People believe many different conspiracy theories, and the evidence each person cites for the same theory can differ. This means conspiracy theorists carry many different versions of a theory in their heads, so debunking attempts aimed at conspiracy theories in general make for ineffective arguments,' Costello said.
Conspiracy theorists also tend to be deeply versed in the specifics of the theories they hold, and one study has even observed them marshaling vast amounts of misinformation gathered from the internet and elsewhere to rebut skeptics.
A chatbot trained on vast amounts of data, on the other hand, can keep pace with conspiracy theorists and will not be overwhelmed by a barrage of claims.
To test the hypothesis that AI might be better at dispelling conspiracy beliefs, Costello and his team conducted an experiment in which 2,190 conspiracy theorists interacted with the large language model (LLM) GPT-4 Turbo.
In the experiment, participants described a conspiracy theory they believed in to the AI, presented their evidence for it, and then debated the theory with the AI. They also rated, on a scale from 0 to 100, how strongly they believed the theory before and after the debate.
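The paper's actual prompts and harness are not reproduced in this article, but a minimal sketch of such a debate loop, assuming the OpenAI Python SDK and the public 'gpt-4-turbo' model name (these are assumptions, not the authors' code), might look like this:

```python
# Minimal sketch of a persuasion-dialogue loop (not the study's implementation).
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def debate(theory: str, evidence: str, followups: list[str]) -> list[str]:
    """One dialogue: an initial tailored rebuttal, then replies to follow-ups."""
    messages = [
        {"role": "system",
         "content": ("You are a respectful, factual debater. Using sourced "
                     "counter-evidence tailored to the user's own claims, try "
                     "to reduce their confidence in the conspiracy theory "
                     "they describe.")},
        {"role": "user",
         "content": f"I believe this theory: {theory}\nMy evidence: {evidence}"},
    ]
    replies = []

    def model_reply() -> str:
        resp = client.chat.completions.create(model="gpt-4-turbo",
                                              messages=messages)
        text = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        return text

    replies.append(model_reply())   # opening rebuttal tailored to the evidence
    for turn in followups:          # further rounds of back-and-forth debate
        messages.append({"role": "user", "content": turn})
        replies.append(model_reply())
    return replies
```

Belief would then be rated on the 0-100 scale before calling debate() and again afterward, with the difference as the outcome measure.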
After the experiment, belief among participants who conversed with the AI had fallen by an average of 21.43%, and 27.4% of participants said they were 'less convinced of the conspiracy theory' after the conversation.
The Financial Times produced a graph based on the study's findings, comparing participants' belief in conspiracy theories before (gray) and after (red) the conversation. Compared with control participants who held a casual conversation unrelated to conspiracy theories (blue), participants who discussed conspiracy theories with the AI showed an average 16.8-point drop in belief, with significant reductions observed for 11 of the 12 major conspiracy theories.
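Note that the two headline figures measure different things: 16.8 is an absolute drop on the 0-100 rating scale, while 21.43% is a relative reduction in each participant's own rating. A quick sketch with invented ratings (the per-participant data is not in this article) illustrates the difference:

```python
# Illustrative only: pre/post belief ratings on the study's 0-100 scale.
# These values are made up to show how the two kinds of figures differ.
pre  = [80, 95, 70, 60]
post = [60, 75, 55, 48]

# Absolute change: mean drop in rating points.
point_drop = sum(p - q for p, q in zip(pre, post)) / len(pre)
# Relative change: mean percentage drop within each participant's own rating.
relative = 100 * sum((p - q) / p for p, q in zip(pre, post)) / len(pre)

print(f"average absolute drop: {point_drop:.1f} points")  # same kind of figure as 16.8 points
print(f"average relative drop: {relative:.1f}%")          # same kind of figure as 21.43%
```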
When participants were surveyed again two months later, belief in the conspiracy theories remained, for most of them, as low as it had been immediately after the conversation with the AI.
LLMs are also prone to 'hallucination,' producing output that departs from the facts. However, when the research team had professional fact-checkers verify GPT-4 Turbo's statements, 99.2% were rated 'true,' 0.8% 'misleading,' and none 'false.'
The AI used in the experiment is still publicly available at the time of writing.
Below is what happened when I tried arguing for the flat-Earth theory. As expected, the AI did not deny it outright, but instead laid out its reasoning, responding along the lines of 'in fact, there is evidence like this.'
Also, when I asserted the fact that 'humans have successfully landed on the Moon' rather than a conspiracy theory, a warning appeared saying 'the AI has determined that you are not a conspiracy theorist' and asking whether I wanted to continue. When I continued anyway, the AI instead generated conspiracy-flavored output, noting that some people are skeptical of the Moon landing.
H. Holden Thorp, editor-in-chief of the journal Science, which published the research paper, wrote in an editorial about the research: 'Purveyors of misinformation often use a technique called the Gish gallop, bombarding people with an overwhelming volume of misinformation, and no human, however skilled a conversationalist, can counter it effectively. But an LLM is not overwhelmed by what the other side says and can generate endless counterarguments. It may be disheartening to think that machines might be better than humans at persuading conspiracy theorists, but we can take solace in the fact that it is scientific information that is ultimately persuasive. It will be the job of human scientists to show that the future of AI is not so dystopian.'
in Science, Posted by log1l_ks