Searching for information using AI may result in less knowledge being acquired than traditional web searches



Since the launch of ChatGPT in late 2022, chat AI has become increasingly popular, and with the introduction of Google's 'AI Mode' and 'AI Overviews,' many people now use AI to gather information for their work and daily lives. AI is extremely useful because it can compactly summarize a wide variety of information, but experiments have shown that this convenience comes at a cost.

Experimental evidence of the effects of large language models versus web search on depth of learning | PNAS Nexus | Oxford Academic

https://academic.oup.com/pnasnexus/article/4/10/pgaf316/8303888



Learning with AI falls short compared to old-fashioned web search
https://theconversation.com/learning-with-ai-falls-short-compared-to-old-fashioned-web-search-269760

To find information with a traditional web search, you first need to think of a search phrase that is likely to get hits, then select a page that seems likely to contain what you are looking for based on the titles and snippets in the search results, and finally read through that page to locate the information you need. Through this process, you build an overall picture of the subject and pick up clues that lead to additional information or deeper digging.

In contrast, AI-based searches instantly bring up the information you're looking for simply by asking a question in natural language, eliminating the need to open various websites and check their content. This is extremely convenient when you're pressed for time, but some people argue that relying on AI to instantly provide the information you want can result in only shallow knowledge.

To test this claim, Shiri Melumad, an associate professor of marketing at the University of Pennsylvania's Wharton School, and her colleagues conducted a series of experiments with a total of more than 10,000 participants.

In the experiments, participants were asked to research topics such as 'how to grow vegetables.' They were randomly assigned to do their research either with a traditional web search or by asking an AI chatbot like ChatGPT. There were no restrictions on how participants used the tools: they were free to Google as many times as they wanted, or to ask the chat AI as many questions as they wanted.

After completing the study, the participants were asked to write a piece of advice to a friend about the topic they had learned. The research team analyzed the content of this advice to assess how deeply the participants had learned about the topic.



The results showed that people who researched topics using AI learned less than those who used web search: they put less effort into writing their advice, and the advice they ultimately wrote was shorter, less factual, and more generic.

The research team then showed the advice written by the participants to readers who were unaware of the experiment and asked them to rate it. Even though these readers did not know how each piece of advice had been created, they perceived the advice written by participants who had used AI as less informative and less useful.

These results were replicated when the research team varied the conditions. For example, they used simulated Google search results and a simulated ChatGPT interface, adjusting the amount of information displayed so that it was the same in both. Even when participants used the simulated Google search results, which required them to manually open links, they reported deeper learning than those who used the simulated chatbot. The results were also the same when comparing a standard Google search with an AI summary integrated into Google Search.

Furthermore, in an experiment in which participants were given a link to a website alongside the AI summary, those who received the AI summary were less motivated to dig deeper into the source, resulting in a shallower knowledge base than participants who used a standard Google search.



Commenting on the results, Melumad said, 'One of the fundamental principles of skill development is that people learn most effectively when they are actively engaged with the learning material. When we learn about a topic through a Google search, we face a lot of friction: we have to navigate links to various websites, read sources, and then interpret and synthesize them ourselves. This friction, while difficult, leads to developing a deeper, more original mental representation of the topic. But with large language models, this entire process is done for the user, changing learning from an active to a passive process.'

Melumad acknowledges that AI can be beneficial in terms of speed and other respects, and does not believe we should avoid using it altogether. Rather, she argues that we need to understand in which situations AI is beneficial and use it more wisely and strategically.

'With this in mind, future research will explore generative AI tools that introduce healthy friction into learning tasks. Specifically, we will explore what types of safeguards or speed bumps, beyond simply providing synthesized answers, can motivate users to learn more actively,' Melumad said.

in AI, Science, Posted by log1h_ik