UK introduces rules to hold big tech accountable for child safety

Alice Johnson, IBA Multimedia Journalist
Thursday 25 September 2025

During the summer, child safety rules introduced as part of the UK’s Online Safety Act 2023 came into force. To comply, online platforms, including social media companies and search services, must implement age verification checks to block people under 18 from accessing ‘harmful content’ including pornography and material that might promote eating disorders, self-harm and suicide. 

‘The Online Safety Act starts with the idea of regulating systems and processes,’ says Lorna Woods, a professor at the University of Essex whose research contributed significantly to the legislation. ‘Back in 2017, people were becoming aware that the way a platform is designed does impact the way people behave.’

Other requirements for online platforms introduced by the Act include duties to take down illegal content and to empower users to report problems online and filter the content they can see. Failure to comply can result in companies facing fines of up to £18m or ten percent of their global revenue, whichever is greater.

‘It’s a huge relief now that companies will be legally obliged to keep young people safe online,’ say members of the Young People’s Board for Change at the UK charity the National Society for the Prevention of Cruelty to Children (NSPCC).

The introduction of laws that aim to make the internet safer, especially for children and vulnerable users, is an area of increased focus, with similar laws introduced in the EU, France and Australia. Legislators have been under increasing pressure to introduce stricter regulation of trillion-dollar tech companies as concern has grown about the mental health impacts of internet and social media use. The Center for Countering Digital Hate says the Online Safety Act is a ‘vital, evidence-based law’ that has ‘set a global standard for protecting children and communities online’.

It is important to keep technical ways to ensure the internet is one worldwide platform where oppressed people…are able to be heard

Raphaël Dana
Officer, IBA Technology Law Committee

The new rules have faced pushback from campaigners concerned about the implications for data privacy and freedom of expression on the internet. According to Ofcom, the UK’s communications regulator, sites or social media platforms with adult or harmful content are required to implement ‘highly effective’ age assurance checks, which can include requiring users to upload photo identification, grant access to banking information or undergo facial age estimation. The Act requires online platforms to ensure that their age verification systems – whether in-house or run by a third-party provider – comply with UK data protection laws, collect only the information necessary and retain it only for as long as it is needed.

Woods says that while data protection is a legal requirement in the UK, international data protection standards specifically for age verification systems should be introduced to ensure providers have greater clarity on what is expected of them and reduce the risk of inappropriate data use. ‘There is a question of data protection, but I don’t know automatically that all industry actors are equally problematic,’ she says.

Experts have also raised concerns about the vulnerability of age verification systems to cyber attacks and data breaches, with recent high-profile incidents underlining the importance of robust data security. In July, Tea, a US-based dating safety app for women, suffered a significant data breach that exposed the identification photos and private conversations of its users. In the UK, meanwhile, a High Court judge lifted a superinjunction to reveal a catastrophic leak by the Ministry of Defence of the identities of over 18,000 Afghans who had applied for relocation to the UK following the Taliban takeover of Afghanistan in 2021. ‘All the legal safeguards that can be put in place will never prevent a breach or a hack,’ says Raphaël Dana, a member of the Paris bar and an officer of the IBA’s Technology Law Committee. ‘One day you will open the wrong attachment or insert a USB that is contaminated’.

Dana says that online platforms requiring users to upload their identification documents may create a chilling effect on freedom of speech because of the fear of repercussions in the event of a leak. ‘It is important to keep technical ways to ensure the internet is the one worldwide platform where oppressed people for reasons based on their race, religion, philosophical beliefs, sexual orientation, are able to be heard,’ he says.

The UK Online Safety Act and France’s Sécuriser et Réguler l’Espace Numérique (SREN) law do not give regulators the power to determine what people see online; instead, they place duties on platforms to protect freedom of expression and political debate, with special protections for journalism and content of ‘democratic importance’.

Another challenge related to the online safety rules is people using virtual private networks (VPNs) – which can conceal a user’s IP address and location – to circumvent the safeguards and access age-gated content. In July, after websites such as Reddit and X began rolling out age verification, VPN apps became the most downloaded on Apple’s App Store. ‘Circumvention is perhaps inevitable but maybe the law is like putting speed bumps in the road,’ says Adam Rose, a data protection partner at Mishcon de Reya in London and an officer of the IBA’s Technology Law Committee. ‘It slows enough people down and that makes the roads safer’.
 

Image credit: FAMILY STOCK/AdobeStock.com