Australia enforces social media ban for under-16s as other jurisdictions consider similar legislation
Stephen Cousins
Friday 9 January 2026
In December, Australia became the first country to ban people under the age of 16 from using designated social media platforms. The ban is aimed at protecting youngsters from harmful content and other risks, says the Australian government – and several other jurisdictions are now considering the adoption of similar rules.
Australia’s Online Safety Amendment (Social Media Minimum Age) Act requires ‘age-restricted social media platforms’ to ensure that users aged 15 years and under can’t hold accounts.
The law's compliance standard is flexible: it doesn't prescribe specific technologies to verify age, such as ID verification or biometrics. Instead, it places the onus on platforms to justify how their chosen methods constitute ‘reasonable steps' for age verification. Platforms face fines of up to AUD 49.5m (£25m) for serious or repeated breaches. As a result, the social media platforms designated under the legislation have been taking steps to achieve compliance, for example by deactivating accounts and setting out the ways in which users can verify their age.
Other jurisdictions are planning similar clampdowns. Malaysia is formulating a policy to ban under-16s from having social media accounts, potentially from 2026. Denmark has announced a measure to ban under-15s from having access to social media, although reportedly there will be a provision to allow parents to grant their teenage children access.
In November, the European Parliament voted in favour of a ban on social media for children under 16 – unless their parents decide otherwise. If the ban is ultimately not adopted at EU level, France's President, Emmanuel Macron, has said his administration will implement similar legislation, affecting under-15s, at a national level.
‘This is a worldwide move, and when it happens as a wave through many different jurisdictions it is difficult for tech companies to deny the need to do something about the issue,’ says Simone Lahorgue Nunes, a Member of the IBA Technology Law Committee Advisory Board at the time of writing.
Supporters of the ban argue it’ll protect children at risk of being exposed to uncontrollable social pressures, bullying and predators. Daisy Greenwell is a co-founder of the UK-based campaign group Smartphone Free Childhood. She says that Australia’s policy on social media use recognises ‘that children should not be the testing ground for technologies designed to maximise engagement rather than wellbeing.’
Social media platforms say they're taking the risks seriously and have implemented various safety features to mitigate potential harm to youngsters. Depending on the platform, these include offering specific types of accounts for young people, enabling individuals to block or mute other users, rolling out parental controls and establishing default privacy settings.
More generally, opponents of the ban have suggested it will push some children towards less regulated corners of the internet and have expressed concern about its potential impact on freedom of expression. While all platforms covered by the Australian legislation have said they'll comply, some have suggested that alternatives to the ban could protect minors more effectively – for example, the introduction of new rules requiring app stores to verify the age of users at the point of download.
Platforms face challenges in balancing user satisfaction with compliance. The law is tech-neutral and Australia's eSafety Commissioner will ultimately decide whether platforms have met the required standard of taking ‘reasonable steps' for age verification. This creates interpretive risk – measures considered too lenient may attract closer regulatory scrutiny or penalties, while those considered overly strict may be viewed as intrusive and could likewise result in fines.
Concerns have also been raised about the technical effectiveness of age verification technologies and their implications for data privacy and security. The law allows platforms to offer government ID verification as an option, but other methods must also be made available.
An Australian government-funded, industry-run trial found that ‘age assurance can be done in Australia privately, efficiently and effectively’ but concluded that of the verification methods available, there wasn’t a ‘single ubiquitous solution that would suit all use cases,’ nor did the study find ‘solutions that were guaranteed to be effective in all deployments.’ Further, while the systems tested were found to be fairly robust in respect of cybersecurity, ‘the rapidly evolving threat environment means that these systems […] cannot be considered infallible.’
‘Every single form of age verifying technology has an error rate, which means when it gets down to the granular level – are you 16 or are you 15? – there are going to be a lot of errors,’ says Molly Buckley, an activist at the Electronic Frontier Foundation, which campaigns against the use of age verification systems.
Young people could also bypass verification measures by using virtual private networks (VPNs), as has been the case with access to other age-restricted websites.
According to Buckley, age verification ‘upends longstanding norms around online safety’ with children – rather than being discouraged from sharing their private personal information – required instead to supply biometric data, identity documents or other sensitive information, ‘creating honey pots of data for identity thieves, hackers or other bad actors.’
Greenwell counters this argument by highlighting that the new Australian law builds in privacy protections and ‘pushes platforms to design proportionate systems that confirm age without creating permanent identity databases or expanding surveillance.’ A ‘ringfence’ protocol within the legislation requires platforms to segregate data collected for age assurance from the rest of their business, including from the platform’s advertising algorithms.
The global regulatory community will now scrutinise the impact of Australia’s new age assurance regime. Comparisons will be made to less restrictive options such as enhanced age verification, stronger content moderation, design codes or further options for parental control.
Lahorgue Nunes, who's also Founding Partner at Lahorgue Advogadas Associadas in Brazil, highlights her country's Digital Statute of the Child and Adolescent, which was enacted in September. The Statute doesn't ban access but sets out comprehensive rules designed to protect children and adolescents online.
‘The law includes many important principles, such as the requirement to link accounts to legal guardians, regulatory oversight by the autonomous Brazilian Data Protection Authority, design and risk mitigation duties on platforms, safety by design obligations, and age verification duties across the digital chain, not just the provider,' says Lahorgue Nunes. She adds that under the Statute, whenever platforms receive a notification of harmful content – whether from a victim, their representatives, the public prosecutor's office or a children's rights organisation – ‘they have to take [the content] down, even without a court order.'