Meta security analysts warn about fake ChatGPT malware
Meta's security team has detected malware posing as a fake version of ChatGPT, designed to hijack user accounts and gain access to business accounts.
In its security report for the first quarter of 2023, Meta said that malware developers and spammers try to deceive their targets by exploiting the day's trends and hot topics. Because AI chatbots such as ChatGPT, the new Bing, and Google Bard currently dominate media headlines, cyber attackers have increasingly distributed fake versions of these tools to compromise users.
Meta's security analysts have identified about ten new malware strains since March, all posing as tools related to AI chatbots, including ChatGPT. Some of this malware is packaged as browser extensions and is even being distributed through official web stores. The Washington Post reported last month on how hackers are using fake versions of ChatGPT in Facebook ads to defraud users.
Some malicious tools misusing the ChatGPT name even incorporate artificial intelligence and, at first glance, behave like ordinary chatbots. Elsewhere in the report, Meta said it had identified more than a thousand unique links across its platforms that pushed users toward fake versions of ChatGPT, and that these links have now been blocked.
The social media giant also provided technical details on how fraudsters gain access to user accounts, including by stealing login credentials and maintaining persistent access; a similar process was seen in the recent hack of the popular YouTube channel Linus Tech Tips.
According to The Verge, Meta has provided a way for users and businesses whose Facebook accounts have been hacked to regain access.