
Media Law Review Raises Thorny Issues around Freedom of Expression




New Zealand has been trying to update its media laws for the modern digital environment for over a decade. Experts warn there are no simple solutions, so has the Government now bitten off more than it can chew?

Anjum Rahman knows more than most about the harmful effects of media content.

Over the years, Rahman – an accountant by trade, who also founded the Inclusive Aotearoa Collective Tāhono and acts as spokesperson for the Islamic Women’s Council – has been a leading voice from the Muslim community speaking about the harms caused by online extremism.

She’s received a good deal of pushback for this, ranging from fairly civil to downright abusive.

The end of our phone conversation about the Government’s plan to update New Zealand’s media laws for the digital age was tinged with resignation – the product of experience.

“You’ll publish your thing and I’ll get the angry email because I spoke about it,” she said.

“Even to speak up on these matters is a risk for communities that are targeted.”

But Rahman also seemed cautiously optimistic about the broad review of our media laws currently underway, which is being led by the Department of Internal Affairs.

It’s been a long time coming: various ministers have tried and failed to get the ball rolling on a review since 2008.

When the review was announced last year, Internal Affairs Minister Jan Tinetti said New Zealand’s current regulatory system was designed in the early 1990s for traditional broadcast and print media – not the internet.

“It is not fit for purpose … That’s why the Government is designing a new modern, flexible and coherent regulatory framework; to minimise the likelihood of unintentionally coming across harmful or illegal content – regardless of the way that content is delivered.”

In other words, it doesn’t matter whether it’s a radio broadcast, a blog post or a Facebook comment – all publicly available media content will be subject to whatever regulatory framework comes out of the Government’s review.

If that sounds vague, that’s because it is – the review is still in the early days of stakeholder consultation, with wider public consultation planned for later this year.

But in a nutshell: the Government has decided its laws regulating media content are stuck in the 1990s, and aren’t adequately protecting people from the potential harms of content in a contemporary media context.

The University of Canterbury’s Dean of Law, Ursula Cheer, said the review’s inclusion of online media content means it is much broader than anything that’s been carried out in the past.

Longstanding questions around streamlining media regulation have prompted various reports and reviews over the years, Cheer said.

“Then something gets cherry picked out of the various reviews, and so what we’ve had is the general regulatory system, which is very diverse, has been fiddled around with on the edges. So the existing system is quite complicated.”

However, what’s being proposed in the Internal Affairs review goes well beyond streamlining the rules that govern the ‘media’ in a traditional sense, and into the territory of regulating all information that exists online, including social media.

In fact, it’s the traditional media that the Government seems least worried about with this review.

Cabinet briefings state “many of the harms the current system is unable to respond to are coming from contemporary digital media content, for example social media” and that the broadcasting sector “poses a very low level of risk of harm to New Zealanders” in comparison.

Instead, the review appears to be mainly focused on harmful content that spreads more readily online, such as mis- and disinformation, violent extremist content, child sexual abuse material (CSAM), hate speech, racism and other discriminatory content.

“They’re taking on quite a task here,” Cheer said.

“What they’re contemplating will be regulating forms of speech or forms of publication, and that’s always a really tricky, difficult area.”

Thomas Beagle of the Council for Civil Liberties also had reservations about the Government’s review having such a broad scope – some of which he outlined in a blog post at the end of last year.

“In terms of the actual content of [the review], I was a little bit disturbed by it really,” he said.

“It’s a very, very huge topic. I worry they’ve bitten off more than they can chew.”

Beagle said one of his main concerns with the review was its focus on ‘harm minimisation’ – a nebulous concept not explicitly defined in the Cabinet paper.

He said this approach assumed all harm from speech was wrong, which he considers misguided.

“I think that some speech harms – only some – are actually a natural consequence of freedom of expression, and of the fact that speech is powerful,” he said.

“If I find someone who is corrupt and is doing corrupt activities [and] if I then publish proof of that online, there’s no doubt that I am going to harm that person. They might lose their job. They might go to jail. But at the same time, that speech of revealing corruption is obviously highly important and should be protected.”

Tom Barraclough and Curtis Barnes, co-directors of the think tank Brainbox Institute, agree there are some problems with the review being framed around harm.

They know a thing or two about the subject, having written a 94-page research report examining the intricacies of governments moderating objectionable content on social media.

While government documents make all the right noises about needing to balance harm minimisation with preserving freedom of expression, Barraclough worried there was still an implicit assumption that the Government’s aim should be to completely protect people from harm.

“In reality, people will be harmed online, and depending on your definition of harm, some kinds of political debate and disagreement require harm,” he said.

“There should be extreme caution in trying to avoid harm completely.”

Barnes was also concerned about the imprecision of the term ‘harm’, warning it could encompass objectively abhorrent content – such as CSAM – but could also include vaguely defined content governments simply disagreed with, as has happened in Tanzania and Singapore, among other examples.

He said another major limitation of framing the review entirely around harm was that it implied unjust censorship was not itself a harm.

“If a state oversteps the mark and censors a person without a sound human rights basis to do it, you could very easily say that person has suffered a significant harm,” he said.

“They’ve not been allowed to speak, they’ve not been allowed to hold an opinion, they’ve not been allowed to associate, whatever it might be. So it’s not harms versus speech, it’s harms versus the harms of intervention.”

But Anjum Rahman pointed out that a silencing effect happens already in online environments that are particularly hostile towards certain groups, such as ethnic and religious minorities.

“You put something up and there’s a whole barrage of partially negative comments, all of which are designed to silence people.”

These negative comments can range from death threats and rape threats to ‘doxxing’, which might involve someone’s employer’s name or photographs of their home being put online, she said.

“When you are a person of a vulnerable community or a marginalised group … you are more vulnerable to those kinds of coordinated attacks. So therefore your freedom of speech is being taken away. And that’s very rarely recognised in free speech discussions.”

However, Rahman appreciated the risks of government overreach when it comes to online content moderation policy, and said it was critical that any actions prompted by the review were statutorily independent and protected from state interference.

Elsewhere, she has argued that whenever new legislation is considered, it needs to be assessed considering the worst case scenario: “How might a hostile government misuse this legislation, and what checks and balances are in place to prevent that misuse?”

But she also made the point that new regulation can only do so much, particularly in areas such as content moderation.

“Obviously we know that you can’t solve everything by content moderation alone, and just taking things down doesn’t solve the problem,” she said.

Indeed, this speaks to a wider point that many human rights advocates worldwide are making: there are no straightforward technical solutions in this space.

As Harvard Law School’s Evelyn Douek puts it, regulators should be wary of building content moderation policy on the assumption that techies at online media platforms can simply “nerd harder” and stop the spread of all harmful content, without any trade-offs, if they just put their mind to it.

International human rights advocates are also warning about the trickiness of introducing legislation that puts responsibilities on online media platforms to crack down on content that goes beyond the bounds of being outright illegal.

For example, the UK’s Online Safety Bill has been widely criticised for its plan to require large platforms like Facebook and Twitter to protect users from lawful content that is nevertheless deemed “harmful”, such as content encouraging self-harm, or disinformation.

Brainbox Institute’s Barraclough and Barnes are worried that the New Zealand Government risks going down the same path as the UK by considering both illegal content and legal-but-harmful content in its media regulation review.

“I think there is a huge risk to rolling various kinds of harmful content into one big bucket, because the harms are quite different. The victims are different,” Barraclough said.

“I think the primary reason that we roll them together is because some people have this sense that there’s a lot of bad stuff on the internet, and we need a way to talk about that. So we just start listing things, like there’s misinformation, there’s disinformation, there’s CSAM, there’s terrorist content.

“There’s no doubt all of these things deserve some sort of scrutiny and careful thinking about what we should do about them.”

But each type needs to be carefully defined by law, and while that is the case already for content such as CSAM and terrorist content, standards and definitions around things like mis- and disinformation are much looser, Barraclough said.

“The task should be to find illegal content and deal with it because it’s illegal, not because it breaches some other standard that we are trying to come up with.”

Barraclough and Barnes also suggest the Government could compel platforms like Twitter and Facebook to be more transparent about their content moderation processes.

While some of the big platforms do provide information about these processes, inconsistent data-keeping practices and differing standards of transparency between platforms make it difficult for researchers and policymakers to properly understand the impacts of harmful online content.

Barraclough said there isn’t yet a strong evidence base proving direct links between certain kinds of content and adverse outcomes, and this void is often filled instead by speculation, anecdote and rumour.

If platforms provide researchers with more data, then policymakers will be better informed about the problems they’re trying to solve when it comes to harmful online content, he said.

Anjum Rahman agrees that requiring more transparency from platforms is crucial. She’s even written a draft paper on how platforms can be audited to measure and document the impact of harmful content.

“At the moment with platforms, there isn’t that uniform access. And you can create that access while maintaining confidentiality,” she said.

“I think it’s absolutely doable. It’s just a matter of the will of getting it done.”

To the Government’s credit, the fact that it’s finally doing something that’s been floated as an idea since way back in 2008 shows that there is the will to address the impacts of harmful online media content.

And it’s not just new legislation on the table: Cabinet papers show the Government is considering a wide spectrum of interventions, including public education and industry self-regulation.

The review is certainly a tall task. But in Rahman’s eyes, this shouldn’t be a reason for it to ever get relegated to the back-burner again.

“They can’t not do it just because it feels too hard.”
