
Secretary General of INTERPOL warns of ‘FraudGPT’ as AI fuels cybercrime boom

Srinagar, Jan 28: At the annual World Economic Forum in Davos, cyber security experts discussed the escalating challenges faced by law enforcement agencies, particularly in light of new technologies like Artificial Intelligence and deep fakes.

Secretary General of INTERPOL Jürgen Stock emphasised the crisis global law enforcement is confronting due to the surge in cybercrime.

Despite efforts to raise awareness about fraud, Stock noted that the number of fraud cases continues to rise, often with an international dimension.

During the discussion, attention was drawn to emerging technologies like FraudGPT, a malicious counterpart to the popular AI chatbot ChatGPT.

According to Stock, cybercriminals are forming organised networks based on expertise, using underground platforms.

He also highlighted a rating system among these bad actors that signals the reliability of their illicit services to prospective buyers.

FraudGPT, an AI chatbot leveraging generative models, generates realistic and coherent text in response to user prompts.

Exploiting its capabilities, cybercriminals employ FraudGPT for various malicious purposes.

FraudGPT can craft convincing phishing emails, text messages, or websites to deceive users into disclosing sensitive information like login credentials or financial details.
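On the defensive side, even simple heuristics can flag some of the tell-tale signs of such messages. The sketch below is a hypothetical, illustrative scorer (the phrase list, scoring weights, and function names are assumptions, not any real filter's logic); production email filters rely on far richer signals such as sender reputation, URL intelligence feeds, and machine-learning classifiers.

```python
import re

# Hypothetical phishing indicators for illustration only; a real filter
# would use much broader, continuously updated signals.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "confirm your password",
    "click the link below",
]

def phishing_score(email_text: str) -> int:
    """Count naive phishing indicators in an email body."""
    text = email_text.lower()
    # One point per suspicious phrase found in the message.
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links that use a raw IP address instead of a domain are a classic red flag.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    return score

sample = "Urgent action required: verify your account at http://192.168.0.1/login"
print(phishing_score(sample))  # prints 4 (two phrases + IP-based link)
```

A message scoring above some threshold would then be quarantined or flagged for review; the threshold itself is a policy choice, not part of this sketch.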

The chatbot mimics human conversation to build trust with users, leading them to unknowingly share sensitive information or perform harmful actions.

FraudGPT creates deceptive messages enticing users to click on malicious links or download harmful attachments, resulting in malware infections.

The AI-powered chatbot assists hackers in generating fraudulent documents, invoices, or payment requests, facilitating financial scams against individuals and businesses.

While AI has contributed to enhancing cyber security tools, it also introduces risks: AI can amplify brute force, Denial of Service (DoS), and social engineering attacks. Stock highlighted that even individuals with limited technological knowledge can now carry out DoS attacks, expanding the scope of cyber threats.
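One standard countermeasure against automated brute-force login attempts is throttling repeated failures. The sketch below is a minimal in-memory illustration (the function names, limits, and store are assumptions for this example); real deployments would use persistent, distributed counters and pair throttling with measures like multi-factor authentication.

```python
import time
from collections import defaultdict
from typing import Optional

MAX_ATTEMPTS = 5        # failures allowed per window (illustrative value)
WINDOW_SECONDS = 300    # sliding window length in seconds

# username -> timestamps of recent failed logins (in-memory for the sketch)
_attempts: dict[str, list[float]] = defaultdict(list)

def record_failure(username: str, now: Optional[float] = None) -> None:
    """Log one failed login attempt for this user."""
    _attempts[username].append(time.time() if now is None else now)

def allow_login_attempt(username: str, now: Optional[float] = None) -> bool:
    """Return False once a user exceeds MAX_ATTEMPTS failures in the window."""
    now = time.time() if now is None else now
    # Keep only failures that are still inside the sliding window.
    recent = [t for t in _attempts[username] if now - t < WINDOW_SECONDS]
    _attempts[username] = recent
    return len(recent) < MAX_ATTEMPTS

# After five rapid failures, the sixth attempt is blocked.
for _ in range(5):
    record_failure("alice", now=1000.0)
print(allow_login_attempt("alice", now=1001.0))  # prints False
```

The point is that a volume-based defence works regardless of whether the guesses come from a human or an AI-assisted tool, which is why throttling remains a baseline recommendation.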

The risks associated with AI to cyber security are expected to rise as AI tools become more affordable and accessible.

As the prevalence of AI chatbots grows, it is crucial to adopt proactive measures and remain vigilant against fraudulent activities.

Staying informed and implementing robust cyber security practices can strengthen defences against these emerging dangers, contributing to a safer digital environment for individuals and businesses alike.

