IBA Global Insight - tech harms
Tuesday 7 April 2026
In March, a US jury ruled that Meta and Google had intentionally built addictive social media platforms that harmed the mental health of a woman who used them as a child. Though both companies have said they intend to appeal, this landmark ruling appears to open the door for others to hold tech platforms to account. Below, we’ve selected some of the best of Global Insight's continuing coverage of these and related issues.
News analysis: Landmark US case ‘could launch wave’ of addiction litigation
A Californian court has found Meta and Google liable for intentionally building addictive social media platforms. Both companies have said they intend to appeal. Philip Yannella, a partner at Blank Rome, says the case ‘is potentially the most impactful […] in the US right now.’
Read here...
News analysis: Australia enforces social media ban for under-16s
In 2025, Australia became the first country to ban under-16s from using designated social media platforms. The ban is aimed at protecting youngsters from harmful content, says the Australian government – and several other jurisdictions are now considering the adoption of similar rules.
Read here...
Podcast: Taming the tech giants
The scale at which tech companies operate and their innovative use of technology make it difficult to keep their power in check. This podcast assesses the ways in which governments, lawyers and the courts – as well as the tech companies themselves – are attempting to do so.
Listen here...
Feature: Improving (anti-)social media behaviour
Legislators have intensified their efforts to drive social media platforms towards ‘good behaviour’ in response to concerns about the harms they cause. Tech companies have responded by rolling out accounts designed for young people and enabling users to set daily time limits, for example.
Read here...
News analysis: UK looks to hold big tech accountable for child safety
In 2025, child safety rules introduced as part of the UK’s Online Safety Act 2023 came into force. Online platforms, including social media companies, must implement age verification checks to block under-18s from accessing ‘harmful content’ and material that might promote self-harm and suicide.
Read here...
Column: You can’t outsource responsibility
Content moderators are vital for tech platforms, but critics claim these workers aren’t getting the care they deserve, given that their roles involve reviewing graphic and distressing material. In Africa, the courts have taken notice and are beginning to hold tech companies to account.
Read here...
News analysis: Countries react to evolving online violence against women
Evolving online harms and tech-enabled violence against women and girls have been highlighted by a UK National Police Chiefs’ Council report as a ‘high harm and high-volume threat area’. Criminal offences in this area include stalking, harassment, cyberflashing and the sharing of ‘deepfakes’.
Read here...
Column: The fight against harmful content
Attempts to protect against the obvious dangers of some online content are long overdue. This column assesses the UK’s moves to toughen its stance in this area, including through the Online Safety Act, which introduced a statutory duty of care for platforms that curate user-generated content.
Read here...
News analysis: Lawmakers urged to focus on harms of synthetic media
Hyper-realistic deepfakes – recordings that replace an individual’s face or voice with someone else’s, while appearing genuine – have traditionally been time-consuming and expensive to create. The arrival of more accessible generative AI tools could change this, raising concerns about misuse.
Read here...