
AI and cybersecurity – the threats and the defensive strategies


The artificial intelligence (AI) market is growing rapidly. Between 2023 and 2030, Grand View Research projects that it will grow at a compound annual growth rate of 37.3%, having already reached a value of $136.55 billion in 2022. It is little surprise, then, that both cybercriminals and those fighting them are racing to embrace the potential of AI.

When it comes to the use of AI in fraud management solutions, the figures are similarly eye-watering: Future Market Insights expects the overall market to reach around $39.5 billion by 2031. But how exactly do all these dollars translate into fighting AI-enabled cybercrime? And what precisely are they fighting?

The use of artificial intelligence in cybercrime
Cybercriminals are certainly embracing the potential of AI. They are manipulating AI models and algorithms for a range of purposes, including creating fake identities and false information, conducting fraudulent transactions and generating well-worded, grammatically correct phishing emails. Cybercriminals have also been playing with AI content generation tools to write malicious code – with varying degrees of success (functional but buggy seems to be the standard thus far). They are also using AI tools to attempt to break CAPTCHAs and to clone voices.

As the use of AI continues to evolve, so too will cybercriminals’ use of it. This means that businesses have an ever-growing task on their hands when it comes to cybersecurity.

The effect of AI on cybersecurity
The nature of AI in the cybersecurity sphere is double-edged – criminals aren’t the only ones who can benefit from it. In the right hands, artificial intelligence can strengthen security defences significantly.

Businesses need to be on the lookout for fraud and cybercrime from all angles. Recently tabled amendments to the Economic Crime and Corporate Transparency Bill, for example, focused on fraud prevention in relation to employees, agents and subsidiaries. This is in addition to the vigilance businesses need when combatting fraud and cybercrime carried out by customers and external bad actors.

Thankfully, AI-powered machine learning algorithms have done much to enhance businesses’ approach to cybersecurity. In fraud detection solutions, for example, AI helps to spot patterns, trends and anomalies in transaction log data and customer onboarding processes, while user onboarding software helps organisations fine-tune their approach to weeding out those who would do them harm. AI has become a fundamental element of many data analysis tools, as sketched below.
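To make the anomaly-spotting idea concrete, here is a minimal sketch using an isolation forest, one common unsupervised approach, over synthetic transaction features. The feature set, figures and thresholds are illustrative assumptions for this article, not a description of any particular vendor’s fraud solution.

```python
# Minimal sketch: unsupervised anomaly detection on transaction features.
# All data and features are fabricated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: [amount, hour_of_day, txns_in_last_24h]
normal = np.column_stack([
    rng.normal(60, 20, 1000),   # typical purchase amounts
    rng.normal(14, 3, 1000),    # mostly daytime activity
    rng.poisson(3, 1000),       # a few transactions per day
])

# Two suspicious-looking transactions: large amounts, 2-3am, burst of activity
suspicious = np.array([
    [4800, 3, 40],
    [2500, 2, 25],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns -1 for anomalies, 1 for inliers
print(model.predict(suspicious))            # expected: [-1 -1] on this toy data
print(model.decision_function(suspicious))  # lower score = more anomalous
```

In practice such a model would run over millions of logged events, with flagged transactions routed to human analysts rather than blocked outright.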

Cybersecurity firms are also using machine reasoning in their efforts to fight off attempted cybercrime, for example by parsing suspicious language in phishing emails and by using natural language processing to identify instances of adverse media coverage on social media sites.
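The phishing-language idea can be sketched with a simple bag-of-words text classifier. The tiny training set below is fabricated for illustration; real systems train on large labelled corpora and combine many more signals (headers, links, sender reputation).

```python
# Minimal sketch: flagging phishing-style language with a text classifier.
# The training emails and labels are fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is overdue, click this link to pay immediately",
    "You have won a prize, confirm your bank details to claim",
    "Minutes from Tuesday's project meeting attached",
    "Lunch menu for the canteen this week",
    "Reminder: quarterly review scheduled for Friday",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

test = ["Please verify your bank account details urgently"]
print(clf.predict(test))        # expected: [1] (phishing-style) on this toy data
print(clf.predict_proba(test))  # class probabilities for triage thresholds
```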

Other use cases of AI in cybersecurity include predicting breach risks, detecting malware and preventing its spread, filtering spam and identifying bots. AI can also help bolster security measures, for example by authenticating users and aiding in password protection mechanisms. As AI tools can analyse millions of events incredibly fast, they are a powerful weapon in the fight against cybercrime when used well.

Human versus machine
While AI can do much to identify and prevent attempted cybercrime, it does not negate the need to train employee teams in fundamental cybersecurity awareness. Employees still need to apply common sense when it comes to fighting phishing, smishing and the like.

Businesses also need to remember that AI should always be used in line with equality law, to ensure that people with protected characteristics aren’t treated less favourably due to bias built into AI algorithms. This is still an emerging area of work, so companies need to regularly update their systems and learn lessons as they do.

AI and cybersecurity: real-world examples
According to AAG, 39% of UK businesses reported suffering a cyberattack in 2022. The figure highlights the sheer scale of the issue. Cybersecurity breaches in which AI plays a role are now on every security-conscious business’s radar.

The AI tools themselves are, of course, not immune from cybersecurity incidents. OpenAI reported a data breach in March 2023 that resulted in it taking ChatGPT offline while it worked on a patch. The breach was due to a vulnerability in the Redis open-source library, which enabled users to see other active users’ chat history. OpenAI also discovered that the vulnerability could expose users’ payment information, including names, physical and email addresses, credit card expiry dates and the last four digits of users’ card numbers.

In addition to AI firms being the victims of data breaches, many have found their tools being utilised for malicious purposes. Back in 2019, for example, cybercriminals used an AI voice-altering tool to impersonate the voice of the boss of a senior executive at an unnamed UK energy firm, conning the executive out of £200,000.

Time to get defensive
The global regulatory landscape looks sparse indeed when it comes to artificial intelligence. The European Parliament is the closest at present to establishing rules on AI. In May 2023, it endorsed new rules in relation to transparency and risk management for AI systems, with the work being taken forward by the Internal Market Committee and the Civil Liberties Committee. The committees’ work has focused on areas such as banning biometric surveillance, emotion recognition and predictive policing AI systems.

While the high-level debate rumbles on, businesses in Europe and around the globe are left to deal with the day-to-day reality of a growing number of AI-enabled cybercrime threats. The National Crime Agency reports that cybercrime “continues to rise in scale and complexity”, costing the UK “billions of pounds”. This means that businesses need to be proactive in their approach to defence, implementing the latest AI cybersecurity tools to guard against fraud, data breaches and other attacks. They also need to roll out robust, comprehensive training programmes that regularly teach staff about cybersecurity threats: how to spot them, what to do about them and how attacks are evolving as attackers’ use of artificial intelligence becomes increasingly sophisticated.
