Online Safety Act not fit for purpose, study finds

Bereaved parents call for Online Safety Act to be strengthened | Alamy


Social media platforms are still failing to remove harmful content nearly a year after the Online Safety Act became law, a study has found.

Research by the Molly Rose Foundation (MRF) revealed that, of more than 12 million content moderation decisions made by six of the biggest platforms, more than 95 per cent of the harmful content was detected and removed by just two sites – Pinterest and TikTok.

Meanwhile, Instagram and Facebook accounted for just one per cent of all the dangerous content detected by the major platforms.

X, formerly known as Twitter, was behind just one in every 700 content decisions. Snapchat was the sixth platform featured in the report.

The MRF, which was set up after 14-year-old Molly Russell took her own life in 2017 after viewing harmful content online, has now urged the UK Government to strengthen regulation and “finish the job”.

Ian Russell, MRF chair, said: “Almost seven years after Molly died, it’s shocking to see most major tech companies continue to sit on their hands and choose inaction over saving young lives.

“As the last few weeks have shown, it’s abundantly clear that much more ambitious regulation is required. That’s why it’s time for the new government to finish the job and commit to a strengthened Online Safety Act.

“Parents across the country will be rightly appalled that the likes of Instagram and Facebook promise warm words but continue to expose children to inherently preventable harm. No ifs, no buts, it’s clear that assertive action is required.”

The MRF has also warned Ofcom’s proposed regulation, published earlier this year, does not go “far enough” and lacks the “much-needed ambition” to tackle the “complexity” of harmful content.

The charity also claimed social media platforms’ measures were “inconsistent, uneven and unfit for purpose” after finding TikTok had detected almost three million items of suicide and self-harm content but had only suspended two accounts.

It also accused sites of failing to detect harmful content on the “highest-risk” parts of their services, after its research showed only one in 50 suicide and self-harm posts detected by Instagram were videos, despite its short-form video feature, Reels, accounting for half of all time spent on the app.

A Meta spokesperson told Holyrood that the MRF study did not reflect the platform’s “efforts” to remove harmful content.

They said: “Content that encourages suicide and self-injury breaks our rules. We don’t believe the statistics in this report reflect our efforts. In the last year alone, we removed 50.6m pieces of this kind of content on Facebook and Instagram globally, and 99 per cent was actioned before it was reported to us. However, in the EU we aren’t currently able to deploy all of our measures that run in the UK and the rest of the world.” 

Meanwhile, a Snapchat spokesperson said “safety and wellbeing” remained a “top priority” for the platform.

They added: “Snapchat was designed to be different to other platforms, with no open newsfeed of unvetted content, and content moderation prior to public distribution. 

“We strictly prohibit content that promotes or encourages self-harm or suicide, and if we identify this, or it is reported to us, we remove it swiftly and take appropriate action. We also share self-harm prevention and support resources when we become aware of a member of our community in distress, and can notify emergency services when appropriate.

“We also continue to work closely with Ofcom on implementing the Online Safety Act, including the protections for children against these types of harm.”

This is not the first time online safety laws have been flagged as inadequate. In May, a group of bereaved parents sent a joint letter to then-prime minister Rishi Sunak and Labour leader Keir Starmer urging them to do more on child online safety.

In it, the parents said that “much more needs to be done” and that they had “so far been disappointed” by the “lack of ambition” around safety laws.

The letter continued: “We collectively fear that Ofcom’s proposed approach may be insufficient to tackle the growing risks of grooming, sexual abuse, content that promotes or facilitates acts of serious violence, and the active incitement of acts of suicide and self-harm among young people.”

And, last month, Labour ministers vowed to do more to toughen online safety laws after Ofcom was accused of being too soft on technology companies.

The research also comes on the back of platforms coming under fire for allowing malicious content to spread during the riots that took place across the UK earlier this month.

First Minister John Swinney wrote to X, Meta and TikTok asking them to outline what steps they were taking to halt the spread of misinformation and address racist and hateful speech across the platforms.

He added: “Given the seriousness of the situation, action needs to be immediate and decisive.

“Police Scotland has specifically raised with me concerns about the time it takes for problematic posts to be removed when these are identified by law enforcement agencies. This increases the risk of spread of malicious content. I would wish to understand the steps you are taking to address this, particularly for content that police identify as illegal or harmful.”

Holyrood has also contacted X, TikTok and Pinterest for comment.

 
