Social platforms, and Facebook in particular, are increasingly criticised for their opaque algorithms and insufficient efforts to fight online abuse and misinformation.
At the same time, Facebook often acts as an amplifier for suppressed voices in authoritarian countries, and in many cases activists have few other ways to get their message out.
So the answer is simple, we often hear: we need to regulate Facebook. Things get more complicated when we try to define who "we" are and what "to regulate" means.
To shed some light on Facebook’s own efforts to become more transparent, the House of Lords Communications and Digital Committee quizzed former Guardian editor-in-chief Alan Rusbridger at a virtual meeting yesterday.
Rusbridger, who now sits on Facebook’s independent Oversight Board, spoke about the role of this recently appointed body in helping the company govern speech on its platform. He was joined by Dr Kate Klonick, assistant professor of law at St John's University School of Law, who studies Facebook’s Oversight Board and the rules around its decision-making.
Who should make decisions?
Since it was created in 2004, Facebook has marketed itself as an oasis of free speech on the internet. The downside is that misinformation and conspiracies are rife, and the platform’s critics warn that, left unregulated, Facebook can become a threat to democracy and citizens.
In 2019, the company responded to this pressure by creating an independent Oversight Board, made up of around 40 members, whose job is to arbitrate individual disputes between Facebook or Instagram and their users, as well as to advise on policies. Members' expertise spans free speech, governance, human rights, technology and other fields.
Klonick has dedicated her research to the Oversight Board, seeking to uncover what rules Facebook uses to moderate its content, how they are enforced and how they influence public policy.
Because of Facebook’s global reach, its rules are international: no single country can turn them off. This makes the platform both a powerful tool for freedom of expression and a hotbed for harmful content. So how do you balance a valuable contribution to society against harm? How do you decide which trade-offs are worth it, and who should make these decisions?
"There is no one lever that will solve the problem of online speech," says Klonick, adding that users are often frustrated because they do not understand which rule has been broken and how not to re-offend. The board is trying to build some of that transparency back.
It does so by choosing cases that are representative of the most serious problems. So far, the focus has been on content that was taken down, but in the year ahead the board will pay more attention to content that should be kept up.
"One thing that strikes me as meaningful is how much we came to rely on these platforms," says Klonick, adding that when content and accounts are taken down, this can have a profound impact on individuals and organisations.
Expectations vs. reality
Taking down misinformation is possible, said Rusbridger, but that hardly stops it. A blanket ban may simply force it underground, which in the digital world mostly means encrypted and private messaging channels that are much harder to monitor.
The way to counter bad information is to create good information, he added, and he is hopeful that Facebook can play a role in disseminating useful and verified content.
Another hurdle in formulating global policies is settling cultural differences. A typical example is nudity: countries differ widely on what constitutes nudity, where the bar should be set, and what to do about it.
Dealing with uncomfortable content ultimately boils down to the difference between the right not to be offended and the right not to be harmed. "This will never be settled," says Rusbridger, adding that while there is a consensus about the need to moderate harmful content, it is much harder to decide what to do with offensive words and images.
It is easy to arbitrate on hypothetical scenarios, added Klonick, but deciding concrete cases is much trickier. In Facebook's case, it might almost be better to take potentially harmful content down by default and let the board make the decision, rather than leaving arbitration to the platform.
The metrics are part of the problem too. On the one hand, there is the argument that problematic and salacious content is what people want. But other metrics show that it is precisely what people do not want in their feeds.
"Everyone seems to want the platform to read their mind and this may be an impossible standard to enforce," she says.
Robots are still not smart enough
AI-powered moderation is increasingly criticised because machines struggle with the nuances of human speech, such as satire or metaphor.
Bad moderation is very dangerous for free speech, said Rusbridger, and we are only beginning to learn how to make it work for good.
Klonick pointed out that one of the reasons we started talking about problems with moderation is that they happened to powerful people. One of the most prominent cases was the “napalm girl”, an iconic Vietnam war photograph that was taken down because one of the fleeing children pictured is naked.
However, poor AI-powered content moderation has been going on for years. Sometimes users outsmart the machines, which then play catch-up. This was the case with the Christchurch mosque shooting, where the terrorist live-streamed the attack. Although the platform promptly took the stream down, users around the internet managed to keep the video circulating for at least 48 hours by, for example, playing it in reverse or changing the background sound to escape detection.
Finally, the committee addressed the question of giving the UK broadcasting regulator Ofcom new powers to enforce transparency. The main sticking point was penalties for non-compliance.
Klonick said she was in favour of fines, which she described as "very incentivising" for Facebook. Rusbridger, however, pointed out that while 'big tech' can afford to pay huge fines, such penalties would stifle innovation and smaller, emerging platforms.