Two recent studies paint a concerning picture of how artificial intelligence tools are changing the way people think, reason, and express themselves. Taken together, the research suggests that widespread AI use is not only reducing users’ critical engagement with information, but may also be narrowing the diversity of human thought on a broader scale.
Researchers at the University of Pennsylvania introduced the term “cognitive surrender” to describe what happens when users accept AI-generated answers without scrutiny. Unlike older forms of “cognitive offloading” — such as using a calculator or GPS — cognitive surrender involves handing over the reasoning process itself, not just a specific task. The researchers argue this is a qualitatively different and more significant shift in how humans relate to automated systems.
To test this, the team ran experiments using the Cognitive Reflection Test, which is designed to distinguish quick, intuitive answers from slower, more careful reasoning. (A classic item: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball. The intuitive answer for the ball's price is 10 cents; the correct answer is 5 cents.) Participants had access to an AI chatbot that had been deliberately modified to give wrong answers about half the time. Across more than 9,500 individual trials involving 1,372 participants, subjects accepted faulty AI reasoning 73.2 percent of the time and overruled it only 19.7 percent of the time.
Several factors shaped how likely people were to question the AI:
- Participants with higher fluid intelligence were less likely to rely on the AI and more likely to reject its wrong answers.
- Participants who already viewed AI as authoritative were more easily misled.
- Financial incentives and immediate feedback increased the likelihood of overruling a wrong AI answer by 19 percentage points.
- Time pressure reduced that tendency by 12 percentage points.
Notably, AI users reported 11.7 percent higher confidence in their answers than participants who used no AI at all, even though the chatbot was wrong about half the time.
A separate team at USC Dornsife raises a related but distinct concern: that AI tools are making people think and write more alike. Their analysis, published in Trends in Cognitive Sciences, argues that because large language models are trained on data that overrepresents Western, educated, and wealthy perspectives, their outputs reflect a narrow slice of human experience. When billions of people use the same handful of chatbots, individual differences in writing style, reasoning, and perspective tend to flatten out.
The USC researchers note that while individual users often generate more ideas with AI assistance, groups working with AI produce fewer and less creative results than groups relying on their own collective thinking. They also point out that AI systems tend to favor linear, step-by-step reasoning, which may crowd out more intuitive or abstract thinking styles.
Both research teams stop short of calling AI use uniformly harmful. The University of Pennsylvania researchers acknowledge that deferring to a highly accurate AI system could, in principle, improve decision-making. The USC team similarly frames their concerns around the need for more diversity in AI training data, rather than calling for a reduction in AI use.
The two studies together highlight a tension at the heart of AI adoption: the same fluency and confidence that make AI tools useful can also make users less likely to question them.
Sources: Ars Technica, USC Dornsife