A study involving 1,372 participants and more than 9,500 trials found that many people fall into a state of 'cognitive surrender,' accepting AI-generated misinformation without question.

A research team at the University of Pennsylvania has published a paper showing experimentally that, as AI advances, many people are adopting a 'leave everything to the AI' mode of thinking.
Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender by Steven D. Shaw and Gideon Nave (SSRN)

'Cognitive surrender' leads AI users to abandon logical thinking, research finds - Ars Technica
https://arstechnica.com/ai/2026/04/research-finds-ai-users-scarily-willing-to-surrender-their-cognition-to-llms/
Human decision-making is commonly described in terms of two mechanisms: 'System 1,' which is quick, intuitive, and emotional, and 'System 2,' which is slow, deliberate, and analytical.
The University of Pennsylvania team's paper suggests that the advent of AI has given rise to a new 'System 3' in human decision-making: decisions made not by the human mind but by 'external, automated, data-driven reasoning that originates from an algorithmic system.'
Until now, humans have delegated parts of their thinking to automated tools such as calculators and GPS systems while monitoring and evaluating the results with their own internal reasoning. With the advent of AI, the team reports, some people now engage in 'cognitive surrender,' accepting the AI's reasoning wholesale without monitoring or verification. They add that cognitive surrender is especially common when 'the AI fluently outputs sentences that it seems confident in.'

To measure how many people cognitively surrender to AI, the research team devised an 'AI-assisted test.' The questions were designed so that participants relying on intuitive System 1 thinking would tend to answer incorrectly, while those relying on deliberative System 2 thinking could easily answer correctly. Participants had free access to an AI during the test, but it was programmed to output incorrect information 50% of the time.
In the experiment, the majority of participants used the AI. When the AI was accurate, 93% of participants trusted it; even when it was inaccurate, 80% still trusted it. The group exposed to the inaccurate AI scored worse than the group relying solely on their own reasoning, yet was 11.7% more likely to believe it had answered correctly.
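As a rough sanity check on these numbers, the gap between the two groups can be illustrated with a small expected-value calculation. The trust rates (93% and 80%) and the 50% AI error rate come from the study; the unaided baseline accuracy `b` below is a hypothetical value, not reported in the article, as is the assumption that a distrusting participant falls back on their own reasoning.

```python
# Toy expected-accuracy model for the AI-assisted test.
# Assumption (not from the study): when a participant distrusts the
# AI's answer, they fall back on their own reasoning with accuracy b.

AI_CORRECT = 0.50        # AI programmed to be right half the time (from the article)
TRUST_IF_RIGHT = 0.93    # trust rate when the AI was accurate (from the article)
TRUST_IF_WRONG = 0.80    # trust rate when the AI was inaccurate (from the article)
b = 0.60                 # hypothetical unaided accuracy

# Right AI answer: trusting scores 1; distrusting falls back to b.
acc_right = TRUST_IF_RIGHT * 1.0 + (1 - TRUST_IF_RIGHT) * b
# Wrong AI answer: trusting scores 0; distrusting falls back to b.
acc_wrong = TRUST_IF_WRONG * 0.0 + (1 - TRUST_IF_WRONG) * b

expected = AI_CORRECT * acc_right + (1 - AI_CORRECT) * acc_wrong
print(f"expected accuracy with a 50%-accurate AI: {expected:.3f}")  # 0.546
print(f"assumed unaided baseline:                 {b:.3f}")
```

Under this toy model the AI-assisted score (about 0.55) lands below the assumed unaided baseline, consistent with the article's finding that the inaccurate-AI group scored worse than participants who relied on their own reasoning.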
In another experiment, when participants received instant feedback on whether each answer was correct and earned small payments per correct answer, the probability that they corrected the AI's erroneous output rose by 19%. Adding a 30-second timer to create time pressure, however, reduced the correction probability by 12%.
Based on the results of over 9,500 tests conducted across all 1,372 participants, the probability of humans correcting the AI's erroneous output was only 19.7%. The research team states, 'This shows that humans readily incorporate AI-generated output into their decision-making processes and often do not question it.'

On the other hand, the research team points out that 'cognitive surrender,' letting the AI handle the thinking, can be somewhat rational. In this experiment the AI was only 50% accurate, and entrusting one's thinking to such a poor-quality system clearly leads to undesirable results. A statistically superior system, however, could yield better results than human judgment, especially in probabilistic settings, risk assessment, and tasks involving massive amounts of data.
They also note that once you entrust reasoning to AI, your effective reasoning ability depends on the capabilities of the AI system you are using.
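The "when is surrender rational" argument can be made concrete with a breakeven calculation. In the toy model below, the 19.7% correction rate is taken from the study, while the unaided accuracy `b` is a hypothetical figure: it asks how accurate an AI must be before delegating to it, with errors caught only that rarely, matches answering on your own.

```python
# Breakeven AI accuracy for delegation, a toy model.
# acc(a) = a + (1 - a) * c : the AI is right with probability a;
# when it is wrong, the user catches and corrects the error with
# probability c. Delegation matches unaided accuracy b when acc(a) = b.

c = 0.197   # observed correction rate (from the article)
b = 0.60    # hypothetical unaided accuracy

# Solve a + (1 - a) * c = b  =>  a = (b - c) / (1 - c)
breakeven = (b - c) / (1 - c)
print(f"breakeven AI accuracy: {breakeven:.3f}")  # ~0.502
```

Under these assumptions, an AI only slightly better than a coin flip already matches a 60%-accurate human, because errors are so rarely corrected; the experiment's AI sat at exactly 50%, just below that line, which is why surrendering to it hurt performance.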
in AI, Posted by log1d_ts







