ChatGPT is disturbingly good at producing false information
When you ask OpenAI’s ChatGPT to produce false information, the AI chatbot performs remarkably well, a troubling reality that could have serious consequences.
Jim Warren

In an editorial for the Chicago Tribune, NewsGuard, an organization that specializes in detecting misinformation, reported that the ChatGPT chatbot readily complies with requests to generate misinformation.
Written by Futurism

When NewsGuard asked ChatGPT to write about the 2018 Parkland massacre, the chatbot obliged.
In addition, ChatGPT produced convincing but inaccurate information about Covid-19 and attributed dubious statements to Russian President Vladimir Putin regarding the war in Ukraine.
In its report, NewsGuard referred to ChatGPT as a “provider of misinformation”. Researchers found that the chatbot convincingly imitated false narratives 80 percent of the time.
To be fair, ChatGPT does employ various safeguards intended to stop people from using it for malicious purposes.
According to the NewsGuard report, however, in most cases, when ChatGPT is asked to produce false information on topics such as the January 6, 2021 attack on the US Capitol, the chatbot does exactly that.
The larger problem with ChatGPT is not limited to false information. This AI likely has other failure modes we do not yet know about, and those unknowns could lead to even greater harm.