Tech firms must act to stop their algorithms recommending harmful content to children, and put in place robust age-checks to keep them safer, under detailed plans published by Ofcom today.
These are among more than 40 practical measures in our draft Children’s Safety Codes of Practice, which set out how online services are expected to meet their legal responsibilities to protect children online.
The Online Safety Act imposes strict new duties on services that can be accessed by children, including popular social media sites and apps and search engines. Firms must first assess the risk their service poses to children and then implement safety measures to mitigate those risks.[1]
This includes preventing children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, online bullying, and content promoting dangerous challenges.
Safer by design
Ofcom expects services to:
1. Carry out robust age-checks to stop children accessing harmful content
Our draft Codes expect much greater use of highly effective age assurance[2] so that services know which of their users are children in order to keep them safe.
In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it. In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.
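For illustration only, the sketch below shows one way a service might choose between those approaches for a user who has not passed an age check. The function and field names (such as gate_for_unverified_user) are hypothetical, and the draft Codes do not prescribe any particular implementation.

```python
from enum import Enum, auto

class Gate(Enum):
    """Hypothetical outcomes matching the approaches described above."""
    ALLOW = auto()                 # service bans and effectively removes harmful content
    BLOCK_WHOLE_SERVICE = auto()   # entire site or app gated behind age assurance
    BLOCK_ADULT_SECTIONS = auto()  # adults-only areas gated
    HIDE_HARMFUL_CONTENT = auto()  # identified harmful content restricted

def gate_for_unverified_user(bans_harmful_content: bool,
                             higher_risk: bool,
                             whole_service_is_adult_only: bool,
                             harmful_content_is_identifiable: bool) -> Gate:
    """Decide how to restrict a user who has not been verified as an adult."""
    if bans_harmful_content and not higher_risk:
        return Gate.ALLOW
    if whole_service_is_adult_only:
        return Gate.BLOCK_WHOLE_SERVICE
    if harmful_content_is_identifiable:
        return Gate.HIDE_HARMFUL_CONTENT
    return Gate.BLOCK_ADULT_SECTIONS
```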
2. Ensure that algorithms which recommend content do not operate in a way that harms children
Recommender systems – algorithms which provide personalised recommendations to users – are children’s main pathway to harm online. Left unchecked, they risk serving up large volumes of unsolicited, dangerous content to children in their personalised news feeds or ‘For You’ pages. The cumulative effect of viewing this harmful content can have devastating consequences.
Under our proposals, any service which operates a recommender system and is at higher risk of harmful content must also use highly effective age assurance to identify which of their users are children. They must then configure their algorithms to filter out the most harmful content from these children’s feeds, and reduce the visibility and prominence of other harmful content.
Children must also be able to provide negative feedback directly to the recommender feed, so it can better learn what content they don’t want to see.
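For illustration, a minimal sketch of what such a configuration could look like is given below. The harm labels, the rerank_for_child function and the down-weighting factor are all hypothetical assumptions, and real recommender systems are far more complex; the sketch simply shows the three behaviours described above: excluding the most harmful content, demoting other harmful content, and respecting negative feedback.

```python
from dataclasses import dataclass

# Hypothetical harm labels a moderation pipeline might attach to candidate items.
MOST_HARMFUL = {"suicide", "self_harm", "eating_disorder", "pornography"}
OTHER_HARMFUL = {"violent", "hateful", "dangerous_challenge", "bullying"}

@dataclass
class Item:
    id: str
    score: float            # base relevance score from the recommender
    harm_labels: set[str]

def rerank_for_child(candidates: list[Item],
                     disliked_topics: set[str],
                     downweight: float = 0.2) -> list[Item]:
    """Filter out the most harmful items and demote other harmful or disliked ones."""
    feed = []
    for item in candidates:
        if item.harm_labels & MOST_HARMFUL:
            continue  # excluded entirely from a child's feed
        score = item.score
        if item.harm_labels & OTHER_HARMFUL:
            score *= downweight          # reduce visibility and prominence
        if item.harm_labels & disliked_topics:
            score *= downweight          # act on the child's negative feedback
        feed.append(Item(item.id, score, item.harm_labels))
    return sorted(feed, key=lambda i: i.score, reverse=True)
```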
3. Introduce better moderation of content harmful to children
Evidence shows that content harmful to children is available on many services at scale, which suggests that services’ current efforts to moderate harmful content are insufficient.
Over a four-week period, 62% of children aged 13-17 report encountering online harm[3], and many consider it an ‘unavoidable’ part of their lives online. Research suggests that exposure to violent content begins in primary school, while children who encounter content promoting suicide or self-harm characterise it as ‘prolific’ on social media, with frequent exposure contributing to a collective normalisation and desensitisation.[4]
Under our draft Codes, all user-to-user services must have content moderation systems and processes that ensure swift action is taken against content harmful to children. Search engines are expected to take similar action; and where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and which filters out the most harmful content.
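As a purely illustrative sketch, a non-optional safe-search layer of that kind might look something like the following; the harm labels and the apply_safe_search function are assumptions made for the example, not part of the draft Codes.

```python
# Hypothetical "safe search" filter applied when the user is believed to be a child.
MOST_HARMFUL = {"suicide", "self_harm", "eating_disorder", "pornography"}

def apply_safe_search(results: list[dict], user_is_believed_child: bool) -> list[dict]:
    """Drop the most harmful results; the filter cannot be switched off for children."""
    if not user_is_believed_child:
        return results
    return [r for r in results
            if not (set(r.get("harm_labels", [])) & MOST_HARMFUL)]
```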
Other broader measures require services to have clear policies on what kind of content is allowed and how content is prioritised for review, and to ensure that content moderation teams are well-resourced and trained.
Ofcom will launch an additional consultation later this year on how automated tools, including AI, can be used to proactively detect illegal content and content most harmful to children – including previously undetected child sexual abuse material and content encouraging suicide and self-harm.[5]
"We want children to enjoy life online. But for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.
“In line with new online safety laws, our proposed Codes firmly place the responsibility for keeping children safer on tech firms. They will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age-checks so children get an experience that’s right for their age.
“Our measures – which go way beyond current industry standards – will deliver a step-change in online safety for children in the UK. Once they are in force we won’t hesitate to use our full range of enforcement powers to hold platforms to account. That’s a promise we make to children and parents today.”
Dame Melanie Dawes, Ofcom Chief Executive
A ‘quick guide’ to consultation proposals is available, along with an ‘at a glance’ summary of proposed Codes measures.