
The Clock is Ticking to Protect Vulnerable Groups from AI-Driven Cybercrime

When ChatGPT was launched at the end of 2022, many had their first direct encounter with AI. But for those of us with an AdTech background, it was just the latest iteration of a technology we’ve been relying on for years.

Over the past decade, advertising has been the test bed for cutting-edge AI advancements well before they reached consumers or research agencies, and that includes large language models.

Today, AI-based behavioral and geographic targeting is the special sauce that makes digital advertising work at every stage, from data collection and analysis to audience segmentation and real-time bidding. Since 2011, it has even powered content and product recommendations across eCommerce and entertainment sites.
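As a rough illustration of how such recommendation engines work, here is a minimal sketch of interest-vector matching – every user profile, catalog item, and weight below is a hypothetical stand-in for the behavioral data real systems collect:

```python
# Minimal sketch of content recommendation via interest-vector matching.
# All user data, item tags, and weights here are hypothetical illustrations.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse interest vectors (dicts)."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_interests, catalog, top_n=2):
    """Rank catalog items by similarity to a user's inferred interests."""
    scored = [(cosine(user_interests, tags), item) for item, tags in catalog.items()]
    scored.sort(reverse=True)
    return [item for score, item in scored[:top_n] if score > 0]

user = {"gardening": 0.9, "diy": 0.6, "pets": 0.2}
catalog = {
    "drill-kit": {"diy": 1.0, "tools": 0.8},
    "seed-pack": {"gardening": 1.0, "outdoors": 0.5},
    "cat-tree":  {"pets": 1.0, "furniture": 0.3},
}
print(recommend(user, catalog))  # ['seed-pack', 'drill-kit']
```

Production systems layer far richer signals and learned models on top, but the core idea – score content against an inferred profile, then serve the best match – is the same.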

But that doesn’t mean the new wave of Generative AI isn’t exciting for advertisers. On the contrary, it opens up a whole new world of possibilities. Imagine creating personalized content for individual users in real-time on their device – not just personalized ads but entire articles complete with AI-generated images crafted to pique a user’s interest based on their real-life conversations, frequented locations, and online engagement.

This is the stuff advertisers dream about. Unfortunately, it’s the stuff digital criminals and malicious actors dream about, too. Worse yet, they don’t have to dream anymore. As we barrel towards a brave new world of AI-facilitated connections, we have to think about the ways it will be weaponized against the most vulnerable.

The Shocking Effectiveness of AI Content

Recently, Meta announced plans to combat AI-generated content spreading across Facebook. This content includes emotionally manipulative but entirely fake images designed to farm engagement from an audience composed largely of older users.

In one of these images, a man kneels next to an intricate wood carving of a bulldog – he has three hands. In another, a child holds up a drawing of a kitten – but the child is six feet tall. Despite the blatant errors, the subject matter is sufficiently tailored for its target audience that most users don’t notice.

Even if Facebook solves this problem within its walled garden, who will fix the wider open Web, where by some estimates half of the content is already AI-generated?

If this is what we’re up against with generative AI in its infancy, it’s easy to imagine how much worse the problem will become as the underlying technology improves. But criminals who use the Web to find victims aren’t waiting to find out.

How AI Content is Being Weaponized

For the past year, online criminals have been using AI tools to automate the most time-consuming aspect of a phishing attack: crafting messages that can convince users to click on malicious links or divulge sensitive information without setting off spam or security filters.

According to one report, the number of phishing attacks has skyrocketed by 1,265% since the end of 2022. At the same time, financial losses to digital fraud are at an all-time high, with further increases expected over the coming year.

With the help of AI, phishers are not only able to craft more messages in less time, but they are able to write in perfect English without the assistance of an English speaker. Worse yet, they are able to create messages tailored for niche audiences and even specific individuals.

Only one thing stands between malicious actors and a net-negative relationship between consumers and the internet: technology that can deliver their perfectly crafted messages to individual victims at scale. That technology is already here, and it’s only getting better.

Advancements in AdTech

We owe a lot to AdTech – without it, the modern Web would not be possible. It enables companies to offer free content and services to users while monetizing their attention. At the same time, it gets a lot of flak for being invasive and downright annoying.

This invasiveness is largely the result of two technologies working in tandem with AI to deliver the right messages to the right users:

  1. Data Collection – as third-party cookies disappear, advertisers are leaning more on other methods to track users across the Web, including first-party data, device fingerprinting and anonymized advertising IDs. At the same time, they are partnering with third-party brokers who gather that data in more creative ways (yes, your smart devices really do listen in on your conversations).
  2. User Targeting – it has never been easier to target users with advertisements based on granular, real-time data, segmenting them into micro-audiences with shared characteristics such as location, education, financial status and personal interests. In many cases, advertisers have enough resolution to target a single user directly with push notifications, programmatic ads, or SMS.
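The segmentation step described above can be sketched in a few lines – this is a hypothetical, rule-based bucketing of user records into micro-audiences, standing in for the far more granular learned segmentation real platforms use:

```python
# Hypothetical sketch of micro-audience segmentation: user records
# (collected elsewhere) are bucketed by shared attributes.
from collections import defaultdict

def segment(users):
    """Group user records into micro-audiences keyed by shared traits."""
    audiences = defaultdict(list)
    for user in users:
        key = (user["location"], user["age_band"], user["top_interest"])
        audiences[key].append(user["id"])
    return dict(audiences)

users = [
    {"id": "u1", "location": "Denver", "age_band": "65+", "top_interest": "crafts"},
    {"id": "u2", "location": "Denver", "age_band": "65+", "top_interest": "crafts"},
    {"id": "u3", "location": "Austin", "age_band": "18-24", "top_interest": "gaming"},
]
print(segment(users))
# u1 and u2 land in the same micro-audience; u3 is a segment of one –
# exactly the single-user resolution the text describes.
```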

Next Up: Automated Messaging

Now, advertisers are pioneering automated messaging, which uses AI to generate personalized advertising messages for a target audience.

At the moment, this mainly comprises hundreds of small variations on human-created advertising material (including images and text) – next up are articles, stories, and posts complete with unique images and emotional hooks tailored for a micro-audience, or even specific individuals.
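The current, pre-generative form of this – many small variations on one human-written template – can be sketched as follows; all of the ad copy below is invented for the example:

```python
# Illustrative sketch: generating many small variations of one
# human-written ad template. All copy here is invented for the example.
from itertools import product

TEMPLATE = "{greeting}! {hook} {cta}"
SLOTS = {
    "greeting": ["Hi", "Hey there"],
    "hook": ["Love woodworking?", "Crafting this weekend?"],
    "cta": ["See today's deals.", "Browse new arrivals."],
}

def variations(template, slots):
    """Yield every combination of slot fillers as a finished message."""
    keys = list(slots)
    for combo in product(*(slots[k] for k in keys)):
        yield template.format(**dict(zip(keys, combo)))

msgs = list(variations(TEMPLATE, SLOTS))
print(len(msgs))  # 2 * 2 * 2 = 8 variants from one template
```

Swap the fixed slot lists for a generative model and the combinatorics become unbounded – which is precisely the shift the article is describing.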

This AI-generated content will follow users around digital platforms, across media, news and entertainment sites, social platforms, and (eventually) the metaverse, attempting to sell products, services, and causes. But inevitably, it will also be used by criminals to scam and spy on their victims, install malware on their devices, and peddle fake stories.

Defending the Most Vulnerable

Some doubt that AI-based attacks will ever pose a real threat to them. They are wary of everything they read – they don’t click on ads, and they don’t open emails from anyone they don’t know. But if they really are invulnerable, they aren’t the targets that digital criminals prey on anyway.

As we’ve seen in other domains, digital adversaries prefer to tailor their attacks for vulnerable groups with a weakness they can exploit. This may include a lack of digital literacy (children and seniors), desperation (addicts searching for drugs), or illness (a cancer patient scouring the web for cures).

If this is what’s already happening, we can be sure that fully automated content combined with even more advanced data collection and targeting algorithms will lead to exponentially more victims in the coming years. Talk is cheap – the time to act is now.

Digital Trust & Safety teams at big tech and digital media companies are working hard to protect their corners of the internet, but they can’t protect everyone without tactical help. There is still a window in which we can take a whole-of-society approach to containing AI-generated harm. By acting proactively to identify sources of malicious content and eliminate them from the Internet, those in a position to protect vulnerable groups can make a difference – the clock is ticking.

By Chris Olson

