Artificial intelligence is giving everyone tools that make daily tasks easier. Unfortunately, hackers benefit from AI, too, using it for their schemes. For example, cybercriminals have been using ChatGPT to help them code viruses and malware. In response, online security conferences like the Cyber Security & Cloud Expo Europe increasingly discuss these challenges.
Fortunately, hackers aren’t the only ones who can wield artificial intelligence. You and your business can use AI to safeguard your assets and platforms. Of course, that means understanding the numerous tools and programs available to find the one that fits your needs. Thankfully, the Internet has many valuable resources, like this one!
This article will list some of the most pressing AI cybercrime methods at the time of writing. Later, I will share some of the available solutions, such as the Philippines’ latest “Zero Defects” mobile app monitoring program.
The 10 must-know AI cybercrime threats
- Spam
- Phishing emails
- AI-powered password cracking
- Evading security programs
- Deepfakes
- AI-powered reconnaissance
- Supply chain attacks
- Ransomware attacks
- Business Email Compromise (BEC)
- Payment gateway fraud
1. Spam

Spam is one of the oldest online crimes, predating generative artificial intelligence. Yet it stands to gain significant upgrades from this emerging technology.
Cybersecurity publication CSO shared related insights from Fernando Montenegro, an analyst at the research firm Omdia. Ironically, spam filters themselves could facilitate this AI cybercrime.
The latest spam detection tools share reasons why they block specific messages. In response, an AI program could study those comments to adjust its messages.
Eventually, the AI tool could learn how to bypass filters. “If you submit stuff often enough, you could reconstruct what the model was, and then you can fine-tune your attack to bypass this model,” the analyst explained.
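The probing loop Montenegro describes can be made concrete with a minimal, hypothetical sketch: a toy keyword-scoring "filter" that explains why it blocks a message, and a script that reads that feedback and rewrites the message until it passes. The blocklist, messages, and rewrite rule are all invented for illustration; real filters and real attacks are far more sophisticated.

```python
# Hypothetical illustration: a toy keyword-based spam filter that, like the
# tools described above, explains why it blocks a message. An attacker
# script can read those explanations and iteratively remove the flagged
# words until the message slips through.

BLOCKLIST = {"winner", "prize", "urgent", "free"}  # invented filter rules

def spam_filter(message: str):
    """Return (blocked, reasons): which blocklisted words appear in the message."""
    words = set(message.lower().split())
    reasons = sorted(words & BLOCKLIST)
    return (len(reasons) > 0, reasons)

def probe_filter(message: str, max_attempts: int = 10) -> str:
    """Rewrite the message using the filter's own feedback until it is accepted."""
    for _ in range(max_attempts):
        blocked, reasons = spam_filter(message)
        if not blocked:
            return message  # message now bypasses the filter
        # Drop every word the filter complained about and try again.
        message = " ".join(w for w in message.split()
                           if w.lower() not in reasons)
    return message

evasive = probe_filter("urgent winner claim your free prize today")
print(evasive)               # the surviving, filter-approved wording
print(spam_filter(evasive))  # (False, [])
```

The point of the sketch is the feedback loop itself: any filter that discloses its reasons hands an automated attacker a training signal.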
2. Phishing emails
Phishing emails are another conventional online crime. They impersonate reputable companies by mimicking their writing style and email layout.
Then, they use that brand recognition and trust to scam recipients into giving their money and sensitive information. Generative AI can make phishing significantly faster and easier by writing messages on a hacker’s behalf.
CSO says artificial intelligence enables attackers to customize phishing messages so that they don’t read like bulk emails. Moreover, AI can generate convincing photos and social media profiles to make scams seem more legitimate.
3. AI-powered password cracking
Hackers don’t frantically mash their keyboards to break into online accounts, despite what most movies and TV shows depict. Instead, they use programs that guess passwords.
For example, a brute-force password attack tries every possible alphanumeric combination of a password. Previously, that took significant time because the program was limited by the hacker’s hardware and connection speed.

Nowadays, generative AI gives it unprecedented speed. Adam Malone, a partner at EY Technology Consulting, said hackers use machine learning “so they can make fewer attempts and guess better passwords and increase the chances that they’ll successfully gain access to a system.”
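A minimal sketch of the exhaustive search described above, scaled down to a toy 3-character password checked against a local hash. Everything here is illustrative: real attacks run the same loop at a vastly larger scale, and the ML-guided tools Malone describes shrink the search space by trying likely passwords first instead of every combination.

```python
# Toy brute-force search over a 3-character alphanumeric password space,
# checked against a local SHA-256 hash. Illustrative only: real password
# spaces are enormously larger, and modern tools prioritize likely guesses.
import hashlib
import itertools
import string

ALPHABET = string.ascii_lowercase + string.digits    # 36 symbols
TARGET = hashlib.sha256(b"ab1").hexdigest()          # hash of the "unknown" password

def brute_force(target_hash: str, length: int = 3):
    """Try every alphanumeric combination of the given length."""
    for combo in itertools.product(ALPHABET, repeat=length):
        guess = "".join(combo)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(brute_force(TARGET))   # recovers "ab1" after at most 36**3 = 46,656 tries
```

Even this tiny example shows why speed matters: the search space grows exponentially with password length, which is exactly where smarter, ML-ranked guessing pays off for attackers.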
4. Evading security programs
Most people think of installing the latest antivirus software to defend against AI cybercrime and similar threats. However, these programs use AI that hackers can also manipulate.
After all, everyone can install and use these technologies, including hackers. “Anything available online, especially open source, could be leveraged by the bad guys,” said Murat Kantarcioglu, a University of Texas computer science professor.
Attackers can pinpoint flaws in antivirus software so that their artificial intelligence systems can adjust malware to avoid detection. “You might be able to change them by changing features of your attack, like how many packets you send or which resources you’re attacking,” the professor stated.
5. Deepfakes

Artificial intelligence enables anyone to fake almost anything: images, videos, voices, and entire personas. As a result, criminals could use it to impersonate people and companies, destroying reputations or scamming clients.
Believe it or not, deepfakes have become such a pressing issue that it was the first topic in the first-ever AI hearing at the US Senate. In May 2023, Senator Blumenthal opened the proceedings by playing an AI-generated recording of his voice discussing ChatGPT.
The senator used the artificial recording to emphasize generative AI’s capabilities. It was a fitting introduction to a historic hearing that included OpenAI CEO Sam Altman.
6. AI-powered reconnaissance

Reconnaissance involves monitoring a target to pinpoint digital traffic patterns, defenses, and vulnerabilities. Artificial intelligence is adept at detecting patterns, so it could drastically facilitate this activity.
It is possible but not easy for the average online criminal. Professor Kantarcioglu noted that using AI this way requires real expertise, so he believes advanced state actors will be the first to adopt these techniques.
CSO says AI recon tools could become more widely accessible once they are sold as an underground service. It may also happen “if a nation-state threat actor developed a particular tool kit that used machine learning and released it to the criminal community,” explained analyst Allie Mellen.
7. Supply chain attacks
Exchanging goods worldwide has become highly complicated, with multiple transportation methods carrying various goods to several locations. That is why modern supply chains rely on digital systems.
Artificial intelligence could significantly boost their efficiency, but these systems may also become AI cybercrime targets. For example, hackers could disrupt a country’s major distribution port.

That is what happened to the Port of Nagoya, Japan’s largest trading port. A cyberattack disabled the port on July 4, 2023, before authorities restored operations on July 6.
8. Ransomware attacks
Ransomware encrypts important business databases and demands payment for decryption. In other words, it locks company documents and platforms and charges money to restore access.
In June 2021, hackers proved the destructive power of a ransomware attack by disrupting the US Colonial Pipeline. As a result, they caused one of the biggest gas shortages in the United States.
That attack happened before the rise of generative AI tools like ChatGPT. Nowadays, those programs can automate and streamline ransomware distribution by targeting valuable assets automatically.
9. Business Email Compromise (BEC)
A business email compromise impersonates high-level executives to deceive employees into performing unauthorized actions. For example, a hacker could fool them into leaking sensitive information.
It may seem impossible as executives and business partners use high-end tools to secure their online accounts. Yet, generative AI makes it easier and faster than ever.
For example, a hacker could feed an executive’s messages to ChatGPT so that it learns the writing style. Then, they could ask the bot to produce a scam message with the executive’s tone and word choices.
10. Payment gateway fraud
Online wallets and payment portals have become a staple of the modern age. That is why we must choose our apps wisely; otherwise, we could fall victim to AI cybercrime.

Online criminals could use artificial intelligence to mimic authentic human transactions. For example, they could use AI tools to analyze how a platform verifies genuine human exchanges.
Next, they could generate realistic synthetic identities to fool online wallet KYC (Know Your Customer) protocols. They may also scam the people checking these transactions via phishing attacks with AI-generated content.
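As a defensive counterpoint, here is a minimal sketch of the kind of statistical check a payment platform might run to catch fake transactions: flag any payment far outside a customer's usual spending pattern. The data, threshold, and rule are all hypothetical; production fraud systems combine many such signals with trained models.

```python
# Hypothetical sketch of a statistical fraud check: flag any transaction
# more than 3 standard deviations from the customer's historical mean.
# The threshold and data are illustrative, not a production rule.
from statistics import mean, stdev

def is_suspicious(history: list, amount: float, z_cutoff: float = 3.0) -> bool:
    """Return True if `amount` is a statistical outlier versus `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                       # flat history: any change is notable
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

history = [20.0, 35.0, 25.0, 30.0, 40.0]   # typical past purchases
print(is_suspicious(history, 32.0))    # False: in line with past spending
print(is_suspicious(history, 900.0))   # True: far outside the pattern
```

The design choice here is the core of most fraud detection: model what "normal" looks like for each account, then treat large deviations as signals for review rather than automatic proof of crime.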
How do you defend against AI cybercrime?
Artificial intelligence is not only for harming people; others use it to defend themselves and others against online attacks. Here are some examples:
- Detection of malicious activities: AI can analyze networks to establish telltale signs of malicious AI activity. Consequently, it could prevent such attacks before they worsen.
- Malware detection: AI can examine factors like code patterns and file characteristics to determine if a file is safe or harmful. As a result, this technology can improve antivirus software.
- Threat intelligence: Artificial intelligence can compile AI cybercrime patterns to defend against future attacks.
- Threat management: This technology can improve cybersecurity analysts’ efficiency. AI tools could filter out false positives so that digital security personnel can focus on verified threats.
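The malware-detection idea in the list above can be sketched as a simple feature-based scorer. Every feature name, weight, and threshold below is invented for illustration; real antivirus engines rely on models trained over thousands of file characteristics and large labeled datasets.

```python
# Toy illustration of feature-based malware scoring. The features, weights,
# and threshold are invented for illustration only.

# Hypothetical weights for suspicious file characteristics.
WEIGHTS = {
    "packed": 0.4,            # executable is compressed/obfuscated
    "no_signature": 0.3,      # binary is not code-signed
    "writes_registry": 0.2,   # modifies autorun registry keys
    "network_beacon": 0.5,    # phones home on launch
}
THRESHOLD = 0.6               # score above this => flag as harmful

def score_file(features: dict) -> float:
    """Sum the weights of every suspicious characteristic present."""
    return sum(WEIGHTS[name] for name, present in features.items() if present)

def classify(features: dict) -> str:
    return "harmful" if score_file(features) > THRESHOLD else "safe"

clean = {"packed": False, "no_signature": True,
         "writes_registry": False, "network_beacon": False}
shady = {"packed": True, "no_signature": True,
         "writes_registry": True, "network_beacon": True}
print(classify(clean))   # safe    (score 0.3)
print(classify(shady))   # harmful (score 1.4)
```

This also illustrates the cat-and-mouse dynamic from the "evading security programs" section: if attackers learn which features carry weight, they can reshape their malware to stay under the threshold, which is why real engines keep their models opaque and retrain them continuously.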
It can also help fight zero-day attacks, which exploit security flaws that developers have not yet discovered or patched.

Because no fix or detection signature exists yet, a hacker exploiting such a flaw can infiltrate a company without raising suspicion; the vulnerable program treats the intruder’s access as legitimate. Fortunately, AI-based anomaly detection could spot such breaches so teams can plug them immediately.
AI cybercrime will inevitably become more widespread as the world adopts artificial intelligence further. Fortunately, there are tools that can help you fight against it.
If you need further digital security for your business, speak with companies like IBM and Microsoft. Meanwhile, smaller businesses may try Alibaba Cloud for more affordable options.
This article does not condone using these hacking methods. However, everyone should know that these threats exist so they can defend themselves. Learn more about important digital tips and trends at Inquirer Tech.
Frequently asked questions about AI cybercrime
Is AI cybercrime real?
It might be hard to believe that artificial intelligence helps hackers worldwide, but it is no longer just a sci-fi concept. Criminals already use ChatGPT to help them code malware and viruses. They can also make the same generative AI program mass-produce scam messages that closely mimic a reputable company or person.
What are the risks of AI cybercrime?
AI programs make it easier to steal information by faking an online presence. For example, artificial intelligence can impersonate a celebrity to fool people into giving up their money and sensitive information. Moreover, criminals could ruin a company’s reputation by posting damaging content while impersonating it.
Are there AI cybercrime solutions?
The Philippines is a great example of a country with effective cybercrime solutions. Department of Information and Communications Technology Secretary Ivan John Uy launched the Consumer Application Monitoring Systems (CAMS) “to identify the performance and the problems with government applications.”