AI-generated malware has found its way into YouTube videos

As the popularity of artificial intelligence has grown across various platforms, so has hackers' tendency to abuse the technology. Research firm CloudSEK has observed a 200-300% increase since November 2022 in YouTube videos containing links to popular infostealer malware such as RedLine, Vidar, and Raccoon.
These AI-produced videos, which carry links to the malware, are published on YouTube as tutorials for downloading cracked versions of popular and widely used software such as AutoCAD, 3ds Max, Premiere Pro, and Photoshop.
Attackers generate these clips with AI video platforms such as Synthesia and D-ID, usually giving them catchy titles to entice more users to watch. CloudSEK notes that AI-generated presenters are already a popular trend on social media, where they have been used for recruitment, training, and promotional content for some time.
The combination of these techniques makes it easy to fool users into clicking the malicious links in these videos, an action that triggers a malware download. Once the infected program is installed, private data such as passwords, credit card details, bank account numbers, and other confidential information is sent to the videos' creators.
For example, infostealer malware collects a wide range of data: internet browser information, cryptocurrency wallets, Telegram application files, text documents, and system information including the IP address.
Although security solutions such as antivirus programs constantly update their databases against new AI-generated malware, hackers work just as constantly to keep their malware ecosystems active. CloudSEK says the number of malware creators has increased significantly since the artificial intelligence boom that began in November 2022. For example, a wave of malware created with ChatGPT was detected within a few months of the chatbot's release, and by then it had already infected a large number of users' systems.
Digital Trends writes that malware developers share stolen data through illegal marketplaces, forums, and hacker channels, or look for collaborators to build new malware. They use artificial intelligence to produce fake websites, phishing emails, YouTube video tutorials, and social media posts that embed infected tools. Some also deceive users by promising access to the paid version of ChatGPT, collecting various personal information in the process.
Attackers also try to gain unauthorized access to well-known, popular channels and use them to spread their malware. By taking over channels with more than 100,000 subscribers and uploading five or six videos, they rack up thousands of views and hundreds of clicks on their infected links before the original channel owner regains control. Some viewers may report such videos to YouTube, eventually leading to their removal, but by then some users will already have fallen victim to these clips.
Hackers also use popular link-shortening services such as Bit.ly and cutt.ly to make the infected links in YouTube videos look more legitimate.
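Because shortened links hide their real destination, one simple defensive heuristic is to flag them before clicking. Below is a minimal sketch in Python; the list of shortener domains is an illustrative assumption, not an exhaustive or authoritative blocklist:

```python
from urllib.parse import urlparse

# Illustrative list only; real defenses rely on much larger,
# regularly updated feeds of shortener and redirect domains.
KNOWN_SHORTENERS = {"bit.ly", "cutt.ly", "tinyurl.com", "t.co"}

def uses_link_shortener(url: str) -> bool:
    """Return True if the URL's host is a known link-shortening service."""
    host = urlparse(url).netloc.lower()
    # Strip an optional "www." prefix so "www.bit.ly" still matches.
    if host.startswith("www."):
        host = host[4:]
    return host in KNOWN_SHORTENERS

print(uses_link_shortener("https://bit.ly/3xYzAbC"))     # True
print(uses_link_shortener("https://example.com/setup"))  # False
```

A flagged link is not necessarily malicious, since shorteners have many legitimate uses; the check simply signals that the true destination is hidden and worth verifying before visiting.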