Ransomware attacks, the misuse of AI, and the rise of cybercrime-as-a-service were the key trends in the cybersecurity space in the first half of 2023.
While LockBit was the most used ransomware, accounting for 30.3% of observed ransomware cases, cybercriminals were also found using new variants, including Akira and Luna Moth.
Ransomware strains like LockBit were also updated to target more operating systems, including Linux and macOS. These updated variants, spotted in the wild, broadened the scope of attacks, a report from cybersecurity company Arete said.
Q2 2023 also witnessed the emergence of Akira, a new ransomware group. Threat actors are expected to update the ransomware to counter a flaw that allowed encrypted files to be recovered with a freely available decryptor.
The first half of 2023 also saw growing attacks on the professional services sector, which rose 12% compared to the second half of 2022.
Interestingly, despite the increase in the number of ransomware attacks, a ransom was paid in only 19% of cases in the first half of 2023.
While the Ransomware-as-a-Service (RaaS) model has dominated the cybercrime industry over the past few years, Cybercrime-as-a-Service grew in parallel to it in H1 2023. Cybercrime-as-a-Service has lowered the barrier to entry into cybercrime, giving threat actors access to resources that allow them to work their way through the attack lifecycle effectively, the report said.
AI continues to be misused
While AI tools have filters intended to prevent them from being used for harmful content, threat actors have discovered workarounds and methods to bypass these filters and leverage AI to launch cyberattacks.
Threat actors were also found to be using AI tools like ChatGPT to identify vulnerabilities, reverse-engineer shellcode, and even generate code for malware. Cybercriminals were also found discussing the use of ChatGPT in creating and sharing malware on hacking forums, the report said.
Earlier in July, WormGPT, a blackhat version of the popular AI chatbot ChatGPT, was found being used to generate malicious content, including phishing emails, malware code, fake news, and social media posts.
The tool is viewed as a severe threat because it lacks the ethical boundaries and safety mechanisms that prevent it from responding to harmful or illegal requests, and it was allegedly trained on data sources that include malware-related information.
The report is based on Arete's incident response cases.