The story so far: In early February, during a Congressional hearing, Meta CEO Mark Zuckerberg publicly apologised to parents whose children were victims of online predators. The hearing was hostile not just to Meta but also to other tech majors, including X, TikTok, Snapchat, and Discord. The ‘Big Tech and the Online Child Sexual Exploitation Crisis’ hearing was reportedly called “to examine and investigate the plague of online child sexual exploitation”, and the executives of these companies were taken to task over their abdication of responsibility to protect children on social media platforms.
What are the issues with children’s safety online?
Tech majors are increasingly finding themselves in the midst of a maelstrom of protests across the world, not just over privacy concerns, but also over the safety of users online. Across the world, parents and activists are aggressively pushing for tech companies to take responsibility and provide platforms that are ‘safe by design’ for children and young users.
A UNICEF report published last year, ‘The Metaverse, Extended Reality and Children’, analysed how virtual environments may evolve and how they are likely to influence children and young adults. These technologies do offer many potential benefits for children, particularly in the areas of education and health.
Are the risks significant?
The potential risks to children are significant, the report points out. “These include safety concerns such as exposure to graphic sexual content, bullying, sexual harassment and abuse, which in immersive virtual environments can feel more ‘real’ than on current platforms.” Further, vast amounts of data, including about non-verbal behaviour, are collected, potentially allowing a handful of large tech companies to facilitate hyper-personalised profiling, advertising and increased surveillance, impacting children’s privacy, security, and other rights and freedoms.
While the complete immersion in an alternate reality that the metaverse promises is still some way off, multiple virtual environments and games that are not fully immersive already indicate the dangers of navigating that world. For instance, explains Sannuthi Suresh, programme co-ordinator, Tulir — Centre for the Prevention and Healing of Child Sexual Abuse, “in the hugely popular Grand Theft Auto, which does have adult and child versions, there is an instruction in the adult version to ‘approach a prostitute and spank her many times’. Now, adolescents are likely to pick the adult version. What messages are we sending to children?” More recently, she adds, there were reports in the media about children using Artificial Intelligence to generate indecent child abuse images.
Then there is the mental health aspect, with children facing the prospect of trauma, solicitation and abuse online, which can leave deep psychological scars that affect their lives in the real world too. Innocuous and innocent sharing of images online can also be twisted by depraved predators. End-to-end encryption is essential to protect the information that children share online, points out Ms. Suresh.
What about the reach of generative AI?
In a paper last year, the World Economic Forum explained that generative AI brings potential opportunities, such as homework assistance, easy-to-understand explanations of difficult concepts, and personalised learning experiences that can adapt to a child’s learning style and speed. “Children can use AI to create art, compose music and write stories and software (with no or low coding skills), fostering creativity,” it says. For children with disabilities, a world opens up as they can interface and co-create with digital systems in new ways through text, speech or images.
“But generative AI could also be used by bad actors or inadvertently cause harm or society-wide disruptions at the cost of children’s prospects and well-being,” the report records. Generative AI has been shown to instantly create text-based disinformation indistinguishable from, and more persuasive than, human-generated content. AI-generated images are sometimes indistinguishable from reality. Children are vulnerable to the risks of mis/disinformation as their cognitive capacities are still developing. There is also a debate about how interacting with chatbots that have a human-like tone will impact young minds.
What can be done to keep children safe online?
The primary responsibility is that of the tech companies who will have to incorporate ‘safety by design’, explains Ms. Suresh. The proceedings of the Congressional hearings have made it obvious that these companies are fully cognisant of the extent to which their apps and systems impact children negatively.
Drawing on the Convention on the Rights of the Child, UNICEF offers guidance that lists nine requirements for child-centred AI, including support for children’s development and well-being, and protecting children’s data and privacy. UNICEF recommends that tech companies apply the highest existing data protection standards to children’s data in the metaverse and virtual environments.
In addition, governments bear the burden of assessing and adjusting regulatory frameworks periodically to ensure that such technologies do not violate children’s rights, and of using their might to address harmful content and behaviour inimical to children online.
Ultimately, as Ms. Suresh points out, everyone must start from the assumption that all the rules that exist in the real world to protect children, should also prevail online.