Facebook, oversight and the future of social media regulation

Arthur Piper, IBA Technology Correspondent

Wednesday 23 September 2020

The tech giant has responded to criticism that it spreads fake news and misinformation by appointing the great and the good to its Oversight Board. Global Insight assesses the implications and chances of success.

The United States presidential elections are in full swing and nobody is happy. Incumbent President Donald Trump is unhappy because social media giants are stifling conservatives' right to free speech, he says. Democratic challenger Joe Biden is unhappy because Facebook refuses to take down fake news ads about him. And Mark Zuckerberg – Co-founder, Chairman and Chief Executive Officer of Facebook – is unhappy because he worries the regulations threatened by both of these powerful men could hit profits from political advertising.

All three have something else in common: they are spending millions of dollars on public relations to get their messages heard, including on social media platforms. The amount of cash spent on digital political campaigning has rocketed in the US. This year it is expected to reach $1.3bn – three times the amount spent in the run-up to the 2016 US poll, according to one study.

Charm offensive

Facebook has launched its own charm offensive to persuade those in power that it is doing everything it can to balance free speech with addressing the rampant spread of lies – also known as fake news and disinformation. Most recently, it created a panel of people, the Oversight Board, to help self-regulate the content published on the site. The Board is meant to act as a sort of uber-moderator – able to override the platform's current decision-making processes. It can consider direct appeals from those who believe they have been treated unfairly.

Currently staffed by 20 ‘supporters and critics’, the Board includes a number of legal academics, among them Jamal Greene, Dwight Professor of Law at Columbia Law School, who specialises in constitutional law, and Professor Nicolas Suzor of Queensland University of Technology’s Law School, who is also Chief Investigator of its Digital Media Research Centre. Alan Rusbridger, former Editor-in-chief of The Guardian, and Julie Owono, a digital rights activist and Executive Director of Internet Sans Frontières (Internet Without Borders), have also accepted the call to arms. The final board is expected to comprise about 40 people.

But, given the short time between the creation of the board this summer and the US presidential elections, it is unlikely to make much of an impression by the time of the poll in November. Rusbridger warned as much in an interview with the BBC in July.

The platform is also taking steps to curb the spread of lies during the forthcoming campaign, as it has prior to recent major global elections. Facebook claims that the team working on this kind of security has tripled in size to 35,000 since the last US election – coincidentally mirroring the increase in money political parties are spending on their digital campaigns. The measures in place this year range from removing abusive posts to setting up a US Voting Information Center to encourage people to get out and vote.

As though to demonstrate the seriousness of these intentions, the company flagged a post from President Trump within hours of announcing the initiative. He had suggested that postal voting could lead to a corrupt election. Not so, said Facebook. It labelled his post: ‘Voting by mail has a long history of trustworthiness in the US and the same is predicted this year.’ This aspect of Facebook's approach is similar to that taken by Twitter, which in May updated its misleading information policy in light of fake news about Covid-19 and introduced a misinformation labelling and tagging scheme. Facebook has pledged to warn users about content containing lies, yet it still has only nine fact-checkers in the US, making the task a challenging one to say the least.

Late to the game

One obvious question such moves pose is why they have been implemented so close to polling day. Facebook has form in this area. In last year’s European Union elections, for instance, it launched a 40-strong operations centre in Dublin on 29 April 2019. The polls opened across the region on 23 May 2019, giving it less than a month to influence the traffic on its site. While there’s more time in the run-up to this year's presidential election, if Facebook is serious about clamping down on the misleading content posted on its platform, why not start six or nine months ahead of the poll? In fact, a recent study found that fake political news on the site topped an estimated 158 million views in the year before the election campaign began – enough to reach every registered voter in the US.

This tardiness gives some credence to those critics who claim that the company's efforts to self-regulate are likely to be ineffective and that, therefore, regulation would be a better alternative. Joe Biden, for example, has suggested that section 230 of the Communications Decency Act be revoked. The provision shields online platforms from liability for the things their users post – although exceptions exist.

‘[The New York Times] can’t write something you know to be false and be exempt from being sued,’ Biden said at meeting with the paper's editorial board in December 2019. ‘But [Zuckerberg] can. The idea that it’s a tech company is that section 230 should be revoked, immediately should be revoked, number one.’

Syntax aside, the meaning is clear – Facebook should be treated as a publisher and be subject to civil liability claims. That would not only affect Facebook, but it would change the complexion of large public platform internet companies completely.

Facebook CEO Mark Zuckerberg under media scrutiny at a joint US Senate Judiciary and Commerce Committees hearing, Washington, DC, 2018. REUTERS/Leah Millis


Is this, then, the real role the Oversight Board will play? Will it mark a transition from the old internet to the new? It does not seem so. The Board will instead ensure that Mark Zuckerberg does not have the final say over the company's ethical stance on every issue. That feels like a missed opportunity because it sidesteps the more fundamental question that needs to be answered – what role do we want social media to play in our democracies? Such a well-qualified board could have something interesting to say on the issue and would be well placed to help real change take place. That would also justify its half-in, half-out style of independence.

There is little doubt that change is needed. The short history of social media has put under immense strain one of the central ideologies of the information age: that the wisdom of the crowd can be relied on to arrive at the most rational way forward. An analysis of social media behaviour shows that it can serve that role. But it also shows how extreme views and lies are easily disseminated and amplified, and may, in their totality, influence the very governance of countries. Whether Facebook is willing to, or capable of, curbing those excesses in the forthcoming poll could shape the direction, influence and regulation of social media platforms well into the future.

Arthur Piper is a freelance journalist. He can be contacted at arthurpiper@mac.com

Header pic: Facebook CEO Mark Zuckerberg at a joint US Senate Judiciary and Commerce Committees hearing, Washington, DC, 2018. REUTERS/Leah Millis