4 ways Ofcom wants to change the internet for children

Ofcom says social media platforms must take action to stop their algorithms recommending harmful content to children. (Getty)

Ofcom has published its proposals on how social media platforms will be expected to protect children in future.

As part of the Online Safety Act, the regulator says algorithms must stop recommending harmful content to children. Robust age-checking measures should also be put in place, Ofcom said. This will, it says, mean a reduction in pornography and other harmful content being accessed by children. It will also mean potential changes to how children can use group chat apps like WhatsApp and Snapchat.

Social media platforms face a series of new legal responsibilities under the act. Sites that can be accessed by children will be required to assess the risks their platforms pose to younger users and then put measures in place to mitigate those risks. Large fines are among the possible penalties for those found to be in breach.

Ofcom, the new regulator for the sector, has set out how platforms should handle different types of content ahead of the new rules coming into full force, which is expected towards the end of this year. The latest codes include more than 40 practical measures which Ofcom says will demand a step change from tech firms, compelling safer design and operating practices from the biggest sites.

Here are four ways Ofcom wants the internet to change for children:

Ofcom is the new regulator for the social media sector. (PA)

Ofcom has outlined proposals to make it more difficult for children to access pornography. Currently, a quick search on some social media platforms can throw up pictures and videos without the need for the user to provide details of their age.

Under Ofcom’s plans, robust age checks would be expected, so that services know which of their users are children. These could include a rigorous ID check to determine the exact age of a user.

Any harmful content that is not banned outright should also be subject to “highly effective age checks” to prevent children from seeing it.

Algorithms that provide personalised or ‘recommended’ content should also be adjusted so that any potentially harmful content would be filtered out of the feed of any child user, Ofcom said.

Under moderation guidelines, safe search settings that child users cannot turn off would also be expected. These steps should reduce harmful content, which includes not only pornography but also content that encourages, promotes or provides instructions for suicide, self-harm or eating disorders. Content that incites hatred, bullying, violence or dangerous stunts would also fall under this category.

More rigorous age verification checks should be implemented, Ofcom says. (Getty)

Apps like WhatsApp and Snapchat allow young people to join group chats with friends, or with anyone else who may be added. However, Ofcom says that in future children should have to give their consent before being added to a chat thread, rather than being added automatically.

Platforms should give users the option not only to decline group invites but also to block and mute user accounts or disable comments on their own posts. Ofcom says this is intended to prevent online bullying, which can often spill over into the real world.

Many platforms already have reporting and complaint functions available to all users, but Ofcom says its research suggests that children do not find these easy to use or transparent. As a result, children are discouraged from making complaints about content they may have seen.

New measures put forward by Ofcom include making forms more accessible, to offer “clear, straightforward, and accessible complaints procedures”. Platforms will have to respond to complaints promptly and provide information about the resolution, assuring children that their complaints are confidential and are seen and responded to.

Consent would be needed to be added to group chats on apps like WhatsApp. (Getty)

The Ofcom chief executive, Dame Melanie Dawes, says the regulator’s measures “go way beyond current industry standards”. She said Ofcom “won’t hesitate to use our full range of enforcement powers to hold platforms to account”.

However, while charities such as the NSPCC have cautiously welcomed the proposals, there has been some criticism that they essentially mean big tech companies are being “allowed to mark their own homework”. Speaking on BBC Radio 4’s Today programme, Dawes denied this was the case, insisting Ofcom would do the marking itself, transparently.

Child online safety campaigner Ian Russell, the father of 14-year-old Molly Russell, who took her own life in November 2017 after viewing harmful material on social media, said more still needed to be done to protect young people from online harms. In his role as chair of the online safety charity the Molly Rose Foundation, Mr Russell said: “Ofcom’s task was to seize the moment and propose bold and decisive measures that can protect children from widespread but inherently preventable harm.

Ofcom chief executive Dame Melanie Dawes says the measures ‘go way beyond current industry standards’. (PA)

“The regulator has proposed some important and welcome measures, but its overall set of proposals need to be more ambitious to prevent children encountering harmful content that cost Molly’s life.”

He added that the next prime minister should commit to “finish the job and strengthen the Online Safety Act to give children and families the protection they deserve”.

Lisa Kenevan, who believes her 13-year-old son Isaac died after taking part in a dangerous TikTok challenge, told BBC Breakfast that Ofcom still does not have the power to look at individual complaints made about social media platforms.

Hollie Dance, whose son Archie Battersbee died at the age of 12 in 2022 during a prank that she believes was part of a TikTok challenge, questioned whether Ofcom could ensure platforms were imposing rigorous age checks. Dawes admitted that the platforms “are not doing enough” but insisted Ofcom would hold to account any that do not follow age restriction rules.