
Kaspersky uncovers talks on dark web about exploiting AI for cybercrime

In 2023, the Kaspersky Digital Footprint Intelligence service identified more than 3,000 dark web posts addressing the misuse of ChatGPT for illegal purposes and discussing AI-powered tools. Although discussions peaked in March, the dialogue persists.

According to Kaspersky, a cybersecurity solutions company, the conversations range from crafting malicious alternatives to ChatGPT to jailbreaking the original model. Stolen ChatGPT accounts and services offering automated mass account creation are also flooding dark web channels.

Threat actors are actively exploring diverse schemes for applying ChatGPT and AI, with topics ranging from developing malware to illicitly using language models to process stolen user data and parse files from infected devices. The integration of automated ChatGPT responses into cybercriminal forums is on the rise, and threat actors share jailbreaks through dark web channels to unlock additional functionality and devise ways to exploit legitimate tools for malicious purposes.


Beyond ChatGPT and general-purpose AI, attention is gravitating toward projects such as XXXGPT and FraudGPT. These alternative language models, advertised on the dark web, boast additional functionalities and claim freedom from the original models' limitations.

Automated defense

Adding to the concerns is the market for accounts of the paid version of ChatGPT. In 2023, a further 3,000 posts emerged across the dark web and shadow Telegram channels advertising ChatGPT accounts for sale. These posts either distribute stolen accounts or promote auto-registration services that create accounts on demand. Notably, certain posts were published repeatedly across multiple dark web channels.

While AI tools themselves are not inherently dangerous, cybercriminals are actively devising efficient ways to leverage language models, potentially lowering the barrier to entry into cybercrime and increasing the number of cyberattacks. But just as attacks can be automated, so can defenses, and staying informed about attackers' activities remains crucial to staying ahead in corporate cybersecurity, according to Kaspersky.
