AI-generated cybercrime is expanding rapidly, fueled by the emergence of new tools on the dark web, according to a report from cybersecurity firm SlashNext. The report describes the launch of WormGPT and FraudGPT as only the beginning of a wave of artificial intelligence tools built for cybercriminals. FraudGPT in particular can create phishing scam pages, write malicious code, develop hacking tools, and craft scam letters.

SlashNext researchers engaged with CanadianKingpin12, the pseudonymous vendor behind FraudGPT, to assess the tool's capabilities. During the conversation, the vendor revealed that two new AI chatbots, DarkBART and DarkBERT, will soon be introduced. Both will have internet access and integration with Google Lens, Google's image recognition technology, enabling them to handle both text and images. DarkBERT was originally developed by the South Korean firm S2W as a legitimate tool for combating cybercrime, but it has since been repurposed by criminals. CanadianKingpin12 listed several of DarkBERT's functions, including assisting in advanced social engineering attacks, exploiting vulnerabilities in computer systems, and distributing malware such as ransomware. After being banned from public clear-net forums, the vendor was forced to move sales of FraudGPT access to encrypted messaging apps.

In response to AI-generated cybercrime, SlashNext advises companies to take a proactive approach to cybersecurity training and to implement enhanced email verification measures. A separate report from Immunefi finds that cybersecurity experts are struggling to use AI to combat cybercrime, citing limited accuracy and a lack of the specialized knowledge needed to identify exploits. The progression from WormGPT to FraudGPT and now DarkBERT within a single month underscores how quickly malicious AI is reshaping the cybersecurity landscape.
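As one illustration of the "enhanced email verification" SlashNext recommends, a receiving mail system can inspect the Authentication-Results header stamped on inbound messages and quarantine anything whose SPF, DKIM, or DMARC check did not pass. The sketch below is a minimal, hypothetical policy of this kind (the require-all-three rule and the function names are assumptions for illustration, not SlashNext's actual guidance):

```python
# Minimal sketch of an email-verification gate: parse the Authentication-Results
# header (RFC 8601) and quarantine messages that fail SPF, DKIM, or DMARC.
# The "all three must pass" policy is an assumed example threshold.
import re
from email import message_from_string
from email.message import Message

REQUIRED_CHECKS = ("spf", "dkim", "dmarc")  # assumed policy for this sketch

def auth_results(msg: Message) -> dict:
    """Extract mechanism -> verdict pairs from Authentication-Results headers."""
    results = {}
    for header in msg.get_all("Authentication-Results", []):
        for mech, verdict in re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header, re.I):
            results[mech.lower()] = verdict.lower()
    return results

def should_quarantine(raw_message: str) -> bool:
    """Quarantine unless every required mechanism reports 'pass'."""
    results = auth_results(message_from_string(raw_message))
    return any(results.get(check) != "pass" for check in REQUIRED_CHECKS)

legit = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=pass\n"
    "From: alice@example.com\n\nHello\n"
)
spoofed = (
    "Authentication-Results: mx.example.com; spf=fail; dkim=none; dmarc=fail\n"
    "From: ceo@example.com\n\nUrgent wire transfer\n"
)
print(should_quarantine(legit))    # False
print(should_quarantine(spoofed))  # True
```

A real deployment would rely on the mail server's own SPF/DKIM/DMARC evaluation rather than trusting a header an attacker could forge upstream, but the gating logic is the same.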