Fact-checking organisation Full Fact has today urged the UK government to empower Ofcom to require social media companies to disclose how their algorithms work, in order to tackle online mis- and disinformation.
The Communications and Digital Committee discussed how to fact-check the online space, the role of social media platforms in moderating misleading content, and the implications for freedom of speech.
Full Fact CEO Will Moy said that the scale of demand for fact-checking on the internet, and specifically on social media, has grown far beyond human capability, making technological solutions necessary.
However, the problem is that Facebook, in particular, is very guarded when it comes to internal decision-making. This is despite working with fact-checking organisations like Full Fact, meeting 'public criteria' on who it works with, and having an open appeals process.
"For its human moderation processes or to train their machine learning models that report false information automatically, there's no accountability there at all," Moy told the select committee.
"If a mistake is somewhere in that pile, it would not be visible. So that's why we think it's so important that this committee recommends a strong power requirement for Ofcom."
He added that any claims from companies like Facebook or Twitter that current algorithmic models can stop the spread of false information should be treated with scepticism. That is because the "holy grail of auto-checking" - where computers can correctly interpret a claim, consult relevant databases or articles, and take the necessary action automatically - is still a distant dream, according to Lucas Graves, associate professor at the University of Wisconsin-Madison.
This is only conceivable for very narrow statistical datasets, and even then it often fails. Machines simply struggle with semantics: the City of London Police, for instance, had a Facebook post marked as false because it used the word 'scam' to warn citizens about one. And this is to say nothing of the challenges posed by false images and videos.
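To make the semantics problem concrete, here is a deliberately naive sketch of keyword-based flagging in Python. It is illustrative only: the trigger words and matching logic are hypothetical and do not reflect any platform's actual moderation system, but they show how a purely lexical check can mark a genuine warning as suspect.

```python
# Illustrative toy example: a keyword-based flagger with no grasp of context.
# The trigger words and logic are hypothetical, not any platform's real rules.

TRIGGER_WORDS = {"scam", "hoax", "fake"}

def naive_flag(post_text: str) -> bool:
    """Flag a post if it contains any trigger word, regardless of how the word is used."""
    words = {w.strip(".,!?:;\"'()").lower() for w in post_text.split()}
    return bool(words & TRIGGER_WORDS)

# A genuine public-safety warning gets flagged because it mentions 'scam',
# echoing the City of London Police example above.
warning = "Beware: fraudsters are running a new phone scam targeting residents."
print(naive_flag(warning))  # True - the warning itself is marked as suspect
```

The failure is built in: the check sees the word, not the claim, which is precisely the gap between keyword matching and the claim interpretation the "holy grail" would require.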
The models may be fast but they are limited and error-prone, said Moy. He compared them to a driver who can handle speed well but has poor hazard awareness.
The real "black box" is how platforms downrank users posts, Grave added. On a scale of how punitively a platform responds to fact-checking, this sits somewhere in the middle: harsher than linking to verified information and labelling posts as false or untrue but not quite as extreme as banning accounts and removing content.
He said what fact-checkers need is real-time data to understand whether their efforts are reaching the right people, which is still unclear. This could reveal just how far a false claim has spread, allowing fact-checkers or news outlets to target corrections quickly at those who need them. The problem is that platforms are happy to simply sit on their data.
"Ironically, one of the least transparent environments in this respect is Facebook," says Grave.
"The area where fact-checkers get the least feedback about what the effects of their work have been perversely is on social media platforms, where that data is abundantly available, it's just not being presented to fact-checkers."
It presents a grim choice for parliament, said Moy: "Would you prefer to write cruder laws or would you prefer to have error-prone enforcement of laws?"
He added that there is no single law that will fix all these issues, and that what is needed is a body of legislation "cautiously developed" over time.
The widely anticipated Online Safety Bill is expected to be ready this year. It would give Ofcom wide-reaching powers and establish a new 'duty of care' on companies "to ensure they have robust systems in place to keep their users safe."
The role of the government in tackling harmful mis- and disinformation is discussed in section 34. In addition to the requirements under the "duty of care," the legislation would call for "further provisions" to address the threat of disinformation and misinformation. This would include specific transparency requirements and the establishment of an expert working group that would seek to understand and drive action to tackle these issues.
It goes on to say: "Where disinformation and misinformation presents a significant threat to public safety, public health or national security, the regulator will have the power to act."