Social media moderators are being forced to check up to 1,700 videos a day, whistleblowers have revealed, making it impossible for them to remove all the sexual or violent content that children could see.
Speaking anonymously for fear of losing their jobs, the moderators said they had to watch as many as four speeded-up videos simultaneously in order to meet their targets of checking 1,300 to 1,700 a day for explicit content so it could be removed from social media sites.
The whistleblowers said they not only “inevitably missed” dangerous content as they had to squeeze 40 hours of viewing into an eight-hour shift, but were also overwhelmed by the amount of “extreme” material that the tech firms’ business models permitted, failed to remove or incentivised users to post.
The moderators told Revealing Reality, a research group which works with Ofcom and the BBC, that the “volume of content they were expected to get through each day was unmanageable”, raising questions over the platforms’ commitment to removing dangerous content.
Monitoring videos showing murder and suicide
The disclosures came as Parliament faces an eight-month deadline to pass new duty of care laws by the autumn, which would give the communications regulator powers to fine companies that fail to protect children from harmful content.
The moderators said that users were encouraged to share or post ever more violent and sexual material because it generated more attention for their social media accounts.
According to one moderator: “I saw people hanging themselves. I saw a girl shooting her head off.”
One platform, TikTok, even rewarded people with financial bonuses for viral posts that often contained the most graphic content.
“People just want to get viral and they want to get views. So it’s disgusting…” another moderator said. “People want to be noticed. So they’re willing to do anything just to have views, just so that people can share their content.”
Moderators said they were also told not to “overkill” content which was proving popular with users. This meant “questionable” material remained online, they said.
“Do not ‘overkill’ it,” said one of the moderators. “That was the name they gave it. They didn’t want to overkill content. So if you take [down] too much, then you’re going to over-regulate… I think it was for them to have, like, a lot of content on the platform.”
Even when they took down content, it quickly reappeared either on their own platform or on other social media sites as users shared it. This meant that the removal was only ever a “temporary fix”, said the moderators.
“I’ve seen a few contents that were a bit disturbing… would say, murder-related. I’ve seen that it was some content that was taken from one platform and put on [social media platform A] out of nowhere. And someone filmed that on [social media platform B]… so it doesn’t take long,” said the moderator.
The researchers said that social media firms should prevent such material from being posted in the first place rather than removing it afterwards, as removal alone allowed it to be reposted, potentially harming users.
The moderators also complained that the tech firms set the threshold for removing content too high. One cited how she was told she could not remove videos of a kidnapping and a case of animal cruelty resulting in the pet’s death because she could not prove they were real rather than staged.
Moderators also exposed lax age verification, saying they were told they needed at least four videos to “prove” a child was under 13 before the child could be barred from the site. One cited the case of an eight-year-old who had posted only three selfie videos, which meant the child could not be excluded from the site.
All the moderators said they suffered stress and anxiety from their jobs, with one forced to take five months’ sick leave, during which he was hospitalised three times with mental ill health.
The report by Revealing Reality concluded: “These moderators not only felt let down by the platform that employed them. They felt a bleak pessimism about the possibility for potentially harmful content, experiences and behaviour to be effectively reduced using moderation.”