ChatGPT fools scientists with AI-written abstracts

According to a preprint posted on the bioRxiv server in late December, the ChatGPT AI chatbot can write fake scientific abstracts so convincing that scientists often cannot tell they are machine-generated. Researchers disagree about what these findings mean for science.

According to Nature, Sandra Wachter of the University of Oxford, UK, who studies technology and regulation and was not involved in the research, is troubled by the result. If we reach a situation where experts cannot tell what is real and what is not, she warns, we lose the intermediaries we badly need to guide us through complex topics.

The ChatGPT chatbot creates realistic, intelligent-sounding text in response to user prompts. It is a large language model, a neural-network-based system that learns its task from huge amounts of human-generated text. OpenAI, a software company based in San Francisco, California, released the tool on 30 November, and it is free to use.

Since ChatGPT’s release, researchers have grappled with the ethical questions surrounding its use, because much of its output is difficult to distinguish from human-written text. Now a group led by Catherine Gao at Northwestern University in Chicago has used the chatbot to generate abstracts of artificial research papers, to see whether scientists can detect that they are not genuine.

The researchers asked the chatbot to write 50 medical research abstracts based on a selection of articles published in prestigious scientific journals including JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared the generated texts with the real abstracts using a plagiarism detector and an AI-output detector, and asked a group of medical researchers to identify the fabricated abstracts.

The result was surprising. The plagiarism checker found no plagiarism in the ChatGPT-generated texts, giving them an average originality score of 100%. The AI-output detector, by contrast, spotted 66% of the generated abstracts. Human reviewers did not do much better: they correctly identified 68% of the generated abstracts and 86% of the genuine ones. In other words, they mistook 32% of the generated abstracts for real and 14% of the real abstracts for fakes.

“ChatGPT writes believable scientific abstracts,” Gao and colleagues say in the preprint of their study. “The boundaries of the ethical and acceptable use of large language models to aid in the writing of scientific texts are still unclear.”

Wachter says that if scientists cannot determine whether research findings are genuine, there are likely to be “terrible consequences”. Beyond creating problems for researchers, she adds, the generated texts would have consequences for society as a whole, because scientific research plays such a large role in our societies. Policy decisions, for example, might end up being based on incorrect research.

“It’s unlikely that any serious scientist will use ChatGPT to generate abstracts,” says Arvind Narayanan, a computer scientist at Princeton University in New Jersey. Whether the generated abstracts can be detected is a “trivial” question, he adds; the real question is whether the tool can produce an abstract that is accurate and convincing. It cannot, he argues, so the benefit of using it is minimal.

Irene Solaiman, who researches the social impact of AI at the AI company Hugging Face, worries about any reliance on large language models for scientific thinking. These models are trained on past information, she notes, whereas social and scientific progress often comes from new ideas that differ from those of the past.
