
New Generative AI Tools for Cybercrime Surface on Underground Forums

A new class of virtual-assistant software has emerged on underground cybercrime forums, marketed specifically to "black hat" hackers involved in fraud. These tools, built on generative AI models similar to ChatGPT, are being sold to assist with cybercrime tasks such as creating malware, writing phishing emails, setting up attack sites, and scanning for vulnerabilities.

The first tool of this kind, called WormGPT, was discovered by security researchers in mid-July. It appears to be designed specifically for business email compromise (BEC) attacks. The model is capable of generating professional-looking emails even for non-native speakers of the target language.

Shortly after, another tool named FraudGPT appeared, offering a wider range of capabilities. It includes the creation of malicious code, internet scanning, hacking tool development, scam page creation, and training in the use of cybercrime tools.

Initially, these cybercrime tools were advertised on well-known forums that discuss black hat hacking, but they were banned there because of their explicitly criminal purpose. The creator then turned to Telegram to sell them.

The creator, known as “CanadianKingpin12,” has mentioned two more generative AI tools in development: DarkBART and DarkBERT. These tools are rumored to be derived from Google’s Bard AI and have been trained on dark web sites. Their integration with Google Lens would enable image input along with text. However, it is unclear if these tools will be released to the market.

The emergence of such readily available generative AI cybercrime tools poses a threat to cybersecurity. Advanced hackers can enhance their vulnerability research, malware development, and phishing campaigns, while amateur hackers are also empowered by these tools. The increased likelihood of sophisticated attacks requires organizations to strengthen their defenses. AI-powered security tools may provide some relief, but employee awareness of these enhanced capabilities and targeted training will be crucial.

Business email compromise is currently the most pressing concern, and defense measures include implementing multi-factor authentication, using complex and secure passwords, and setting alerts for logins from foreign countries. Employee training, including running simulated scenarios, will become increasingly important as these attacks grow in frequency and sophistication due to the assistance of generative AI cybercrime tools.
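One of the defenses mentioned above, alerting on logins from unexpected countries, can be sketched as a simple rule. This is a minimal illustration only; the function name, event fields, and the allowed-country list are hypothetical assumptions, and a production system would draw geolocation from real authentication logs.

```python
# Minimal sketch of a foreign-country login alert rule.
# Assumptions (not from the article): the organization operates only in
# the US and Canada, and each login event is a dict with a "country" key
# holding an ISO 3166-1 alpha-2 code (or None if geolocation failed).

ALLOWED_COUNTRIES = {"US", "CA"}

def should_alert(login_event: dict, allowed=ALLOWED_COUNTRIES) -> bool:
    """Return True when a login should trigger a foreign-country alert."""
    country = login_event.get("country")
    # Treat an unknown origin as suspicious rather than silently allowing it.
    if country is None:
        return True
    return country not in allowed

# Example usage: collect the users whose logins warrant an alert.
events = [
    {"user": "alice", "country": "US"},
    {"user": "bob", "country": "RU"},
    {"user": "carol", "country": None},
]
alerts = [e["user"] for e in events if should_alert(e)]
print(alerts)  # ['bob', 'carol']
```

In practice such a rule would feed a SIEM or alerting pipeline and be combined with the other measures listed above, such as multi-factor authentication, rather than used alone.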
