Report finds researchers cannot reliably distinguish paper abstracts written by the conversational AI "ChatGPT" from real ones
Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers | bioRxiv
https://doi.org/10.1101/2022.12.23.521610
Abstracts written by ChatGPT fool scientists
https://www.nature.com/articles/d41586-023-00056-7
The conversational AI "ChatGPT", announced by OpenAI, an organization that researches and develops AI, has attracted attention, with the number of users exceeding 1 million within a week of the release of its test version. The accuracy of its chats and discussions in particular has become a hot topic, and impressive examples of ChatGPT answers can be browsed in the database "LearnGPT".
'LearnGPT', a collection of impressive answer examples that shows off the accuracy of the conversational chat AI 'ChatGPT' - GIGAZINE
On the other hand, reactions to coding, another area where ChatGPT performs well, have been less welcoming: the knowledge community Stack Overflow banned answers written with ChatGPT on the grounds that "the rate of correct answers is low, yet the answers are easy to make look plausible at first glance," and the New York City Department of Education has blocked access to ChatGPT from school computers. In addition, the International Conference on Machine Learning (ICML), one of the major international conferences on machine learning, announced a policy of prohibiting the use of AI such as ChatGPT to write scientific papers, citing concerns such as "it is unclear who wrote text that was output by an AI" and "can scholars be held responsible for errors in fake papers?"
Writing scientific papers with AI such as ChatGPT is prohibited at an international conference, but editing and polishing your own text is OK - GIGAZINE
When it comes to producing research papers with ChatGPT, there is widespread concern among scientists and publishing experts that increasingly sophisticated AI chat will undermine the integrity and accuracy of research. A preprint posted to the bioRxiv server at the end of December 2022 points out that if ChatGPT writes a paper's abstract, even scientists may be unable to distinguish the genuine article from an AI-generated fake.
A research team led by Catherine Gao of Northwestern University in Chicago, Illinois, used ChatGPT to generate research-paper abstracts and tested whether scientists could tell them apart from the originals. The researchers selected 50 medical research papers published in the journals JAMA, The New England Journal of Medicine, The BMJ, The Lancet, and Nature Medicine, had ChatGPT generate abstracts for them, and then asked a group of medical researchers to review a mix of the original and generated abstracts.
First, when the abstracts created with ChatGPT were run through a plagiarism checker, the median originality score was 100% and no plagiarism was detected. Next, when the same abstracts were fed to an AI output detector, about 66% were flagged as ChatGPT output. Human reviewers fared worse: professional researchers misidentified 32% of the ChatGPT-generated abstracts as genuine, and mistook 14% of the genuine abstracts for AI-generated ones.
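As a rough illustration of what such an AI-output check might look like, the following Python sketch scores a piece of text with a publicly available RoBERTa-based GPT-2 output detector via the Hugging Face transformers pipeline. This is only a minimal sketch under assumed details, not the tooling actually used in the study: the model name, label strings, and placeholder abstract are assumptions.

```python
# Minimal sketch of running text through an AI-output detector, assuming the
# publicly available RoBERTa-based GPT-2 output detector on Hugging Face.
# This is NOT the study's actual pipeline; model name and labels are assumptions.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="roberta-base-openai-detector",  # assumed checkpoint; any detector model could be swapped in
)

# Placeholder abstract text (not from the study's dataset).
abstract = "Background: ... Methods: ... Results: ... Conclusions: ..."

result = detector(abstract, truncation=True)[0]
# `result` is a dict such as {"label": "Real", "score": 0.97}; the exact label
# strings depend on the detector model's configuration.
print(f"Predicted label: {result['label']} (confidence {result['score']:.2f})")
```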
In the paper, the research team concludes that "ChatGPT writes credible, convincing abstracts of scientific papers," while adding that "how far large language models can appropriately assist scientific writing, and where the boundaries of ethical and acceptable use lie, have yet to be determined."
"I'm very worried," said Sandra Wachter, who studies technology and regulation at the University of Oxford, adding that if experts can no longer determine whether research is true, society loses the intermediaries it desperately needs to guide it through complex topics. If scientists cannot judge the truth of the research they read, Wachter said, they could be led down flawed lines of investigation, and the damage would spread to society as a whole, for example through policy decisions based on bogus research findings.
Arvind Narayanan, a computer scientist at Princeton University in New Jersey, took a different view: "No serious scientist would use ChatGPT to generate abstracts for papers. In any case, the question is whether the tool can generate an abstract that is accurate and compelling, and since it can't, the upside of using ChatGPT is minimal and the downside is large."
The paper states that evaluators of scientific communication, such as research papers and conference proceedings, should put policies in place to discourage the use of AI-generated text, and that if such use is permitted, clear rules should be established for disclosing what was generated by AI and to what extent.
in Software, Posted by log1e_dh