Facebook unveils new Instagram safeguarding measures for children

Posted: 11th October 2021

But critics say the company has acted only after pressure from outside.

Facebook has announced several new features to help safeguard young people following claims that its platforms harm children.

New features include prompting teenagers to take a break from its photo-sharing app Instagram, and “nudging” them if they are repeatedly looking at the same content that may not be good for their wellbeing.

Critics have also said the plans lack detail, and they are sceptical about the effectiveness of the new features.

Sir Nick Clegg, Facebook’s vice-president of global affairs, told CNN’s State Of The Union programme: “We are constantly iterating in order to improve our products.

“We cannot, with a wave of the wand, make everyone’s life perfect. What we can do is improve our products, so that our products are as safe and as enjoyable to use as possible.”

Sir Nick said Facebook has invested 13 billion dollars (£9.5 billion) over the past few years in trying to help keep the platform safe and that the company has 40,000 people working on these issues.

The claims were made by whistleblower Frances Haugen, whose accusations were accompanied by tens of thousands of pages of internal research documents she secretly copied before leaving her job in the company’s civic integrity unit.

Ian Russell, who set up the suicide prevention charity the Molly Rose Foundation after his 14-year-old daughter Molly took her own life in 2017 having viewed disturbing content on Instagram, branded the announcement “a crisis comms initiative”.

He said: “Since Molly’s death, nearly four years ago, I have grown used to hearing the response of the social platforms when they tragically fall short and their practices are inevitably challenged.

“Their usual PR reaction is to announce a small change to their guidelines or processes; speak fine words indicating their understanding and empathy; and then to go about their business as close to usual as possible.

“Every positive change that makes the platforms safer is of course welcome but, as time goes by, I am left asking, ‘Why do the platforms wait before reacting? Why don’t they use their unique influence and extraordinary wealth to innovate and lead?’”

Andy Burrows, head of child safety online policy at the NSPCC, said: “It is no surprise that yet again Facebook has committed to introducing child safety measures only in response to media pressure, as opposed to making sure their sites are safe by design in the first place.

“In this instance, this announcement has come a week after damning evidence revealed that Facebook has been fully aware of the ways it causes harm to children and has failed to act.

“It is time for tech firms to stop treating children’s safety as a public relations exercise, which is why we need to see an Online Safety Bill that forces companies to finally treat children as a priority.”
