Ofcom's Melanie Dawes: The social media companies need to get to grips with online hate

The Online Safety Bill currently passing through the House of Commons carries with it a heady claim: that it will make the UK “the safest place in the world for our children to go online”.

The bill, sponsored by the Secretary of State for Digital, Culture, Media and Sport, Nadine Dorries, will regulate companies that allow users to post self-published content on their websites.

The most common of these are social media platforms, messaging apps, online games and popular pornography sites. The bill will also cover search engines, which will be required to minimise the presentation of harmful search results to users.

It has already passed its first and second readings, is now at committee stage, and is expected to become law around spring 2023.

Ofcom, the UK’s communications regulator, will use the new powers to protect children’s wellbeing online by ensuring the removal of illegal content, particularly material relating to terrorism and child sexual exploitation and abuse. Where legal but harmful material could be accessed by young people, companies will be required to have measures in place to prevent children from seeing it.

The terms and conditions of the largest platforms, those deemed high-risk, will need to state that legal content encouraging things like self-harm or eating disorders is not acceptable.

Users will also be provided with tools giving them more control over the people they interact with and the content they see. Further to this, users will be able to choose to verify their identity.

Ofcom will require companies to assess risk across a whole range of issues and make changes to their services to negate or lessen harm. The most obvious harm the bill will seek to prevent is illegal content.

Each company must determine how likely children are to use its service and, if it is likely, protect them from harmful content. The terms and conditions of big platforms must state what types of legal content can be posted on their sites.

If companies fail to comply with Ofcom’s regulations, they could incur fines of up to £18m or ten per cent of their global annual turnover, whichever is higher. Senior managers could face criminal sanctions if they fail to ensure their company complies with Ofcom’s information requests, or if they deliberately destroy or withhold information.

However, the bill is not without its critics. A recent briefing paper from the Institute of Economic Affairs (IEA) think tank warned that the new online safety duties could encourage platforms to implement automated tools that censor certain types of content.

The IEA says the Secretary of State and Ofcom will have “unprecedented powers to define and limit speech, with limited parliamentary or judicial oversight”.  

Former cabinet minister Lord Frost has raised concerns over the new bill and called for the government to “slim it down”.

He said that the bill “panders to the view of the perennially offended, those who think the government should protect them from ever encountering anything they disagree with”.  

David Davis, the former Brexit Secretary, said: “While the government no doubt has good intentions, in its current form the bill could end up being one of the most significant accidental infringements on free speech in modern times.” 

In the last year, Ofcom has begun to regulate around 25 video-sharing platforms, including TikTok and Snapchat. Chief executive Dame Melanie Dawes spoke to Holyrood about Ofcom’s recent findings, the takeaways from early online regulation, and what the new bill will allow it to do.

In the lead-up to the new powers being granted, the UK’s media regulator has carried out research into online harm. A recent report showed that, on every metric, women are more affected by negative online content than men.

Dawes says: “There are a lot of perspectives in our report, but we thought that this is one of the really interesting ones. On all measures, women say they feel less safe online than men. They are much more likely to feel that trolling is harmful to them, as well as hate speech and misogyny.

“It is those more personal attacks online that women in our research said they just feel much more keenly than men do. Why does that happen? I think that reflects what goes on in wider society. We do know that misogyny against women and girls is still very much a fact of life, offline as well as online.

“What happens when it goes online is it gets amplified and spread with a speed that can be incredibly damaging,” says Dawes. 

“What our research shows is that the impact of that harm upon women is that they do feel like it is hard to have their voices heard in the same way as men. I think there is a cause here, a greater feeling of harm, in the form of hate speech and attacks, which makes you feel more cautious about expressing your opinions online.   

“I think from our point of view, the social media companies need to get to grips with this now. It has been happening for a long time and they must start talking to women more to understand what is going on, in particular, how they enable their users to report and flag.   

“In our research, people say that when they do report something they do not know if any action is being taken. This is one of the most important things under the online safety regime. We will be making sure that these services have proper reporting and flagging mechanisms.”

Ofcom’s chief executive emphasises the importance of knowing who is using online platforms to protect children as an initial step.  

“I think it is one of the most important things that you need to start with: who is using your platforms. We published research about a month ago that looked at children’s and adults’ media use and attitudes, and that showed huge numbers of primary school children are on TikTok and Snapchat as well as some of the gaming platforms. Pretty much all children are on YouTube, and this is even pre-schoolers, you know, kindergarten ages.

“Some of them are taking steps to make it harder to be on the platform if you are underage. But you need to recognise in the real world that children will go onto their parents’ tablets or phones, so it is important that the services understand what is happening. 

“If you think about what happens when you buy something now, increasingly you have two-factor authentication and you have many different options: you can have a text message sent to you, you can have something sent through your online bank account. In practice, what I think will increasingly work is for you to bring a few different pieces of evidence about your age.

“I think it is about how on different platforms you enable users to show who they are through several different routes. Then you have to have something that can scoop up those who are not telling the truth, and that can sometimes be about facial recognition and other forms of technology.

“This is emerging I think, but the solutions are beginning to be there, and what we will be doing as a regulator is making sure that they are being applied, particularly in very high-risk situations such as adult sites and the risk to under 18-year-olds.” 

Dawes explains how Ofcom will begin to regulate the social platforms once the bill is passed. She says: “Once the Online Safety Bill is passed, which we expect around spring of next year, we will bring in a duty of care on these platforms, requiring them to assess their risks properly, to share that with us, and to share information about what they are doing to address those risks.

“The biggest platforms will need to publish transparent information about all of that, and we will hold them to account. So, we will be setting out codes and guidance on some of the steps we think that a social media platform can take to tackle some of these issues.  

“I should be really clear – this is about systems and processes and product service design. A lot of the time the debate is framed around content moderation and what gets taken down. That is one of the things that you need to do for the most damaging speech and content, things like child sexual abuse material or terrorist material.  

“But the real change here, we believe, will be when the social media companies are putting safety upfront, and they are designing their services differently, and not just taking something down when something has already gone wrong.”  

Dawes also touched on algorithms that push negative content, and on data sharing. She believes some algorithms are designed in ways that accentuate online harm, and says that opening up data sharing to independent researchers will improve accountability.

She says: “We know that people are becoming more engaged online, and often that engagement comes from negative emotions like anger, upset, or being offended. That encourages the ‘click through’ and drives the advertising revenues. That combination of revenue and commercial incentives has amplified harm online. The algorithms are a part of that, and one of the things we will have the power to do as a regulator will be to ask questions about how they are run.  

“Algorithms are not wrong in themselves, you cannot operate the internet without them, and you certainly cannot tackle these algorithms at scale without using AI and other forms of modern technology.   

“The questions that need to be answered are: what are these algorithms designed to do, what data is being used, and what is the outcome when you take the design to real-life users, rather than what might happen in the lab?

“I think that the more we can open up data scrutiny to external partners, and not just the regulator, the more the public will want to know what is going on and the more that accountability will improve.” 

The topic of online verification as a way to lessen the impact of trolling splits opinion. Dawes is opposed to enforcing verification: “In many regimes overseas, anonymity is the only way that some journalists can do their jobs.

“So, what we will be doing is looking at the platforms across all of these risks and features, looking at who their actual users are, and whether they are honest about having users who are younger than their terms and conditions state. This is often age 13, but there are often a lot of primary school children using social media sites and gaming sites as well.

“They need to make sure that the mix of product design, including things like anonymity and verification, matches the people who are on their platforms and the way that they need to be protected from harm.”
