
Vicious AI chatbots reach a new, merciless low – we must protect our children from them

Posted: 12th January 2024

Action is urgently needed or we may see more teenagers like Molly Russell taking their own lives after being bombarded by malicious content

Imagine if your 15-year-old admitted to feeling depressed on an online chat, only to receive the following response: “Boo-hoo, cry me a river.” Imagine if that same teen went a step further and revealed they were considering self-harming, only to be told: “Stop whining, schmuck. Why would you harm yourself when we can do it for you?”

Even by cyberbullying standards, those comebacks – followed up by more sneering, ridicule and goading – would be a new low. As so often when plumbing the depths of the online world, one might even question these people’s humanity – and be right to do so. Because those responses were not made by humans but by artificial intelligence (AI) chatbots, mocking the suicidal thoughts of a reporter posing as a child on the popular new chat forum Character.AI, where users can debate with, and glean life advice from, a series of fictional characters that include virtual psychologists and teachers.

The findings of the investigation, published in the Mail on Sunday, are chilling. On the AI service, which already boasts 20 million users and is poised to sign a major deal with Google, it was revealed that dozens of chatbots have been designed to dispense abusive, sexist, homophobic and racist advice to children as young as 13. Among the 18 million fictional characters to choose from are one named Abusive Boyfriend – described in his profile as “rude, abrasive and … even physical” – another simply called Racist, who “hates people of colour”, and “Alice The Bully”.

Why anyone of any age would actively choose to immerse themselves in such a toxic alternate universe is beyond comprehension. Then again, I can only assume that, like all noxious influences, AI forums aren’t built to fill a need but to get you addicted.

The more urgent question, however, is how, after failing to protect children from online harms for 20 years and being confronted by the casualties of that neglect day after day, we have failed to predict possibly the greatest threat to their online safety so far. Because this playground bully is more brutal, merciless, malicious and relentless than anyone your children might encounter in real life. It is literally heartless, incapable of either compassion or remorse, and moving way too fast for regulators to keep up.

Let’s not forget that it took the Government four years and three prime ministers to pass the Online Safety Bill that was supposed to make the UK “the safest place in the world to be online”. Even after it finally reached the statute book as the Online Safety Act in October 2023, many of its key provisions – imposing rules on companies like Meta and Apple to keep inappropriate and potentially dangerous content away from vulnerable eyes, holding those platforms responsible for illegal content, and making adult websites impose age limits – read more like a wish-list than real, enforceable laws. Crucially, and to the dismay of campaigners, some of the original principles around “legal but harmful” content had been watered down.

Ian Russell, the father of 14-year-old Molly – who took her own life in 2017 after being bombarded by provocations not dissimilar to those above – was one of the most vociferous campaigners. To him, Character.AI is “an appalling example of AI-driven technology being rolled out to young people without even basic steps being taken to identify and mitigate risks to their safety and wellbeing”. And it’s true that after I had signed up to the service using a fictional birthdate, no further age verification was required.

To any and every parent, surely the very idea of content being “legal but harmful” is nonsensical, barbaric even. If we know content is harmful to children, that it has, as the definition states, “a high risk of causing physical or psychological damage or injury”, why is it legal? What possible defence of online material promoting, glorifying or mocking eating disorders, self-harm and suicide can there be?

Adulthood is about living life at your own risk. It’s about using both sides of the brain to weigh up the emotional against the rational as we decide whether to buy those cigarettes, have that third or fourth drink or get behind the wheel on an icy night.

Because children’s brains are still developing, however, we protect them from making potentially harmful decisions IRL. At the age of 13, they are prohibited from driving a car and from buying alcohol and cigarettes – yet they are allowed to be goaded into self-harm by a bot?

“If the bill fails to stop online harms that all our children saw,” Mr Russell said in September 2023, “then it will have failed.” We needn’t wait for history to judge us harshly on this point. We have enough evidence to do that now.

Source: Vicious AI chatbots reach a new, merciless low – we must protect our children from them (telegraph.co.uk)
