Social media and free expression
The censoring of Donald Trump by the biggest social media platforms has prompted pressing questions about the respective roles of rights, algorithms and people in dealing with such cases.
The last days of the Trump administration will be remembered for many things – the outgoing United States president refusing to concede defeat to the victor Joe Biden, the tragic storming of the Capitol by his supporters and, by those following trends in technology and freedom of expression, the removal of his accounts by Twitter, Facebook, YouTube and Instagram.
In the six months since, American attitudes about whether the takedowns were the right thing to do in light of an individual's right to freedom of expression have shifted. Immediately after the ban, for instance, Pew Research Center found that about 60 per cent of Americans agreed that the social media companies had done the right thing. By May, the public was split roughly 50-50, broadly along party-political lines.
But the debate on when and how companies should take down content is only just getting going. Its outcome could determine how the world's data is curated and who has a voice in political and social debates – which is why all eyes should remain on the fallout from the Trump cases.
In the US, freedom of speech is enshrined in the First Amendment to the Constitution. But it is not explicitly enshrined in the governance of the internet. Instead, as tech platforms have grown, governments have put pressure on them to moderate what users see and read – effectively passing some of the responsibility for upholding those values on behalf of society to privately owned businesses.
The Polish Prime Minister Mateusz Morawiecki, for one, has said that such a system risks recreating his country's communist past. ‘Censorship of free speech, which is the domain of totalitarian and authoritarian regimes, is now returning in the form of a new, commercial mechanism to combat those who think differently,’ he says. Poland is planning a law that would prevent social media companies from taking similar action there in the future.
Critics such as Morawiecki argue that there is an inherent conflict of interest in handing social responsibility to privately owned tech companies, because of the way content circulates on social media. Algorithmic curation generally favours posts that are more contentious. Trump's often abrasive political style was well suited to this sort of medium, with individual posts going viral among both followers and detractors. The advertising generated around such content fuels the platforms' profits.
That conflict of interest may be better managed if there were a change in law. Danielle Citron, a distinguished law professor at the University of Virginia and author of Hate Crimes in Cyberspace, believes that Section 230 of the 1996 Communications Decency Act should be amended so that businesses become open to lawsuits if they fail to beef up their content moderation practices. At the moment, the act protects social media platforms from liability for their users' posts.
While it is unlikely that the law – which in internet terms is virtually prehistoric – will be repealed, it could be changed. At a congressional hearing this March, none other than Facebook's chief executive officer Mark Zuckerberg said that he supported such a move: ‘230 broadly is important so I wouldn't repeal the whole thing,’ he said. Instead, he suggested, sections should be amended to make platforms report how much harmful content they find and to hold them accountable for the material they host.
Human intervention
On the face of it, this seems to be asking for the Santa Clara Principles on Transparency and Accountability in Content Moderation, which guide social media companies' take-down rules, to be enshrined in law – so, potentially, not much of a change. Established by academics and civil society activists in 2018, the Principles aim to set minimum standards of transparency and accountability for removing content from social platforms. They say that companies should publish the number of accounts suspended in a given period, give affected users the reasons and offer a mechanism of appeal overseen by humans – such as Facebook's independent moderation board.
In a best-case scenario, this means that take-downs involving issues such as free speech are reviewed by humans after the algorithms have done their initial job. Facebook, for instance, has an independent board to assess the validity of such decisions. That board recently said the company had been right to act in light of the insurrection at the Capitol, because Trump's comments could have been interpreted as an incitement to violence. But it has also asked that the case be reconsidered before Christmas 2021. It said that Facebook was trying to ‘avoid its responsibilities’ by giving Trump ‘the indeterminate and standard-less penalty of indefinite suspension’, rather than making a definitive decision about whether to reinstate him, suspend him for a finite period or bar him permanently.
More algorithms
The move towards machine-human moderation of the internet could be a welcome development. But it will still require private businesses to uphold values such as those enshrined in the First Amendment.
Is that likely? If trends during the current pandemic are anything to go by, the signs are not promising. During Covid-19, the moderation of contentious content has become both more important – for example, to fight fake news about the virus and vaccines – and more difficult. The human workers who review and question the decisions algorithms make have found it increasingly difficult to get into their workplaces and do their jobs. As a result, most platforms have come to rely more heavily on automated take-downs. Twitter and Facebook have both said that suspensions of accounts during this time will be temporary and open to appeal. But if the history of technology tells us anything, it is that once a technology has been introduced, it is seldom withdrawn.
While most people would accept that it is impossible for humans to moderate all of the content on social media platforms, the difficult issues – free speech included – depend on assessing contextual information that the available algorithms cannot yet decipher. If governments continue to devolve responsibility for upholding social values and legal rights to private companies, it is likely that more decisions on such issues will be taken by machines rather than humans. A high-profile case like Donald Trump's will then have been exceptional merely because real people had a say in its outcome, not because it touched on fundamental human rights.
Arthur Piper is a freelance journalist and can be contacted at hello@arthurpiper.co.uk