Having scoured the internet’s seedy underbelly, DarkBERT suggests that, in the future, AI may play an even bigger role in online policing
The other day, I received a text from my boss—or, at least, someone claiming to be my boss. “Good morning Camille, are you available at the moment? I am at a meeting and limited to calls, but I am good to go with texts if that works,” the message read. “I need you to handle a short but urgent task.”
Something about the tone of the request—and the spoofed phone number—set off alarm bells, and I sent him a screenshot. “Definitely not me,” he replied, confirming my suspicions.
Just days before, I’d been warned that the release of sophisticated AI models like GPT-4 would usher in a new era of cybercrime, including ever-more-personalized phishing attempts. The warning has proven true: hackers are increasingly using AI-generated lures to spread malware, and the results sound far more convincing than the Nigerian prince scams of yesteryear. Generative models can even create audio or video that seems convincingly real, spoofing the voices of bosses in order to dupe employees, which has prompted cybercrime experts to seek equally advanced defenses. Enter: DarkBERT, an AI model trained on the dark web.
Accessible through the use of specialized software, the dark web has long served as a hotbed for criminal activity: from the sale of drugs and firearms to stolen data and illicit images. To this point, its heavily coded landscape has posed a challenge for cybersecurity experts, many of whom hoped to harness large language models to better police online crime. But without data from the internet’s seedy underbelly, conventional machine learning models have struggled to understand the unique language of cybercrime.
DarkBERT, by contrast, was built on data its creators gathered by trawling the dark web with Tor, an anonymizing browser. The crawled text was filtered to remove sensitive information that could be exploited by bad actors, then used to further pretrain RoBERTa, a refined variant of BERT, the self-supervised natural language processing framework Google released in 2018.
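The researchers’ actual preprocessing pipeline isn’t public, but the filtering step described above can be sketched in miniature: before text reaches a model, identifying details like email addresses, URLs, and IP addresses are swapped for placeholder tokens. The patterns and labels below are illustrative assumptions, not the paper’s implementation.

```python
import re

# Hypothetical sketch of sensitive-data masking before pretraining.
# The real DarkBERT pipeline is not public; patterns here are examples only.
MASK_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "URL": re.compile(r"https?://\S+|\b\w+\.onion\b"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace emails, URLs/onion addresses, and IPs with placeholder tokens."""
    for label, pattern in MASK_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_sensitive("Contact admin@market.onion at 10.0.0.1"))
# → Contact [EMAIL] at [IP]
```

Masking rather than deleting preserves the sentence structure the model learns from, while keeping the identifying specifics out of its training data.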
As outlined in a preprint paper, the South Korean researchers behind DarkBERT found it more effective at gleaning insights from the coded language of the dark web than its general-purpose counterparts, suggesting a future in which highly specialized AI models are trained with specific applications in mind.
DarkBERT was created for research purposes and is not currently available to the public. It is likely, however, that specialized AI models will eventually be utilized by law enforcement, raising questions about the ethics of predictive policing when the flaws of AI algorithms are equally predictable: from systemic bias against people of color to their tendency to create self-reinforcing feedback loops. For instance, because Black people are more likely to be reported for a crime, predictive policing has historically been used to justify increased surveillance of Black neighborhoods, which in turn drives up the number of arrests, further exacerbating social inequities.
While its application in online spaces may differ, using AI to fight cybercrime could create similar issues. As Matthew Guariglia put it, “Technology cannot predict crime, it can only weaponize a person’s proximity to police action.”
While the dark web has a sordid reputation, there are legitimate reasons everyday people might access the non-indexed net. It can serve, for instance, as a resource for those living under authoritarian regimes who are attempting to dodge government censorship. When COVID-19 took root in Wuhan, China, information about the outbreak was initially suppressed and doctors were prevented from speaking out about the threat, leading Chinese netizens to post updates about the virus to the dark web, where they knew it would be harder for authorities to trace.
When I asked ChatGPT itself about the dangers of using AI to police cybercrime, it named issues like bias and discrimination, false positives, privacy concerns, and adversarial attacks—in which cybercriminals could intentionally manipulate input data, tricking algorithms into making incorrect judgments. In other words, there are cons to fighting fire with fire—and given the problematic history of predictive policing, it’s worth interrogating who’s likely to be impacted by these risks, and who will be charged with mitigating them.
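The adversarial-attack concern is easiest to see with a toy example. The keyword “detector” below is a deliberate strawman, not any real system: it only illustrates how an attacker can evade pattern-based screening by swapping Latin letters for visually identical Cyrillic ones, leaving the scam unchanged to a human reader.

```python
# Toy illustration only: real cybercrime classifiers are far more
# sophisticated, but the evasion principle is the same.
SUSPICIOUS_KEYWORDS = {"urgent", "wire transfer", "gift cards"}

def naive_phish_score(message: str) -> int:
    """Count suspicious keywords in a message (a deliberately naive detector)."""
    text = message.lower()
    return sum(1 for kw in SUSPICIOUS_KEYWORDS if kw in text)

plain = "Urgent: buy gift cards and send a wire transfer today"
# An attacker swaps 'e' and 'a' for Cyrillic look-alikes that render identically.
evasive = plain.replace("e", "\u0435").replace("a", "\u0430")

print(naive_phish_score(plain))    # → 3 (all three keywords flagged)
print(naive_phish_score(evasive))  # → 0 (the same scam sails through)
```

More robust models are harder to fool this crudely, but the underlying dynamic persists: any algorithm trained to recognize criminal language invites criminals to learn what it recognizes.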