Credit: Liza Pooor on Unsplash

The pandemic and the US election have reminded us of the danger misinformation poses to our health and our democracy.

For the past five months, Delphine Halgand-Mishra, executive director of The Signals Network, led a team of rapporteurs that took a long, hard look at the social platforms and their role in the 'infodemic'. She was joined by 15 committee members, including Rappler's CEO Maria Ressa, ICFJ's global research director Julie Posetti, and data scientist Christopher Wylie, best known as the Cambridge Analytica whistleblower. The working group was part of the Forum on Information and Democracy, an initiative launched in 2019 by 11 non-governmental organisations.

The committee came up with 250 recommendations on what the platforms need to do to fight misinformation.

"It's time to end self-regulation and start public regulations," says Halgand-Mishra. "It will change the way platforms are operating and it will be a start of democratic oversight."

The committee's central call is for a new co-regulation model that would see platforms, governments and civil society working together to moderate social content. This plurality could help the new body gain public trust rather than being perceived as an arrogant or authoritarian decision-maker.

But this may take years of discussions and negotiations. What can platforms do right now to limit the spread of misinformation?

One of the report's recommendations is that platforms could impose a 'cooling-off' period for people who follow a particular type of content, to de-focus the algorithm.

In practice, this would mean that, say, cat lovers would see less and less cat-related content and be progressively more exposed to other topics. The authors explain that this 'mandatory noise level', where platforms would intentionally expose users to a certain quantity of new content, would help to break down echo chambers.
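To make the idea concrete, here is a minimal, illustrative sketch of one way a 'mandatory noise level' could work in a feed ranker: a fixed share of a user's engagement-ranked feed is reserved for items outside their usual interests. This is an assumption about one possible mechanism, not taken from the report, and all names (diversify_feed, noise_level and so on) are hypothetical.

```python
import random

def diversify_feed(ranked_items, interest_topics, noise_level=0.2, seed=None):
    """Blend a fixed share of off-interest items ('mandatory noise')
    into an engagement-ranked feed to weaken echo chambers.
    Purely illustrative -- not how any real platform ranks content."""
    rng = random.Random(seed)
    in_interest = [i for i in ranked_items if i["topic"] in interest_topics]
    off_interest = [i for i in ranked_items if i["topic"] not in interest_topics]
    rng.shuffle(off_interest)

    feed = []
    for position, item in enumerate(in_interest):
        # Roughly every 1/noise_level slots, surface an off-interest item.
        if off_interest and position % max(1, round(1 / noise_level)) == 0:
            feed.append(off_interest.pop())
        feed.append(item)
    feed.extend(off_interest)  # Any leftover off-interest items go at the end.
    return feed

# Example: a cat lover's feed keeps a minimum share of non-cat content.
items = [{"id": n, "topic": "cats" if n % 4 else "local-news"} for n in range(12)]
print([i["topic"] for i in diversify_feed(items, {"cats"}, noise_level=0.25)])
```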

Solutions like these are not without pitfalls. Someone will ultimately have to decide what new content to expose users to, how to measure its quality and what would truly help to address echo chambers. But, Halgand-Mishra said, platforms need to start putting user safety first and make sure that considerations such as the impact of harmful content on human rights are added to the product testing process.

Perhaps the biggest obstacle to change is that misinformation is at the heart of the business model - the platforms are selling our attention and that is how they make money. Changing the algorithm could lead to users spending less time on the platforms, which may not go down well with advertisers.

Despite that, even Facebook now recognises that a new approach to content moderation is needed to stop the spread of misinformation on the platform. According to Monika Bickert, Facebook's VP of content policy, the company has finally acknowledged that it makes editorial decisions to some degree, such as amplifying or downgrading content, and in that sense it needs transparent rules around moderation.

But who decides what is misinformation?

"Today, it's the platforms," says Halgand-Mishra. "That's why we need a new model that includes civil society and governments, as well as the tech companies."

When moderating harmful content, the report argues, one of the requirements should be provenance: providing more context about where the information comes from.

But it goes further - the authors recommend that the CEOs of social media companies should be personally and legally responsible for transparency and disclosure around algorithms and revenue. The platforms should also introduce serious human rights impact assessments and test the implications of potentially harmful content for minorities and for users' mental health.

"There should be massive fines to make sure it’s taken seriously," says Halgand-Mishra, adding that fines should be on the scale; the biggest platforms would pay the largest fines so this would not disproportionally impact smaller startups.

Such changes will require an army of content moderators who need to be as diverse as possible. Platforms will need people who speak regional dialects to be able to assess the risk to minorities, or health experts to make sensible decisions about how a piece of information can impact users' physical and mental wellbeing.

But, as with anything that is heavily regulated, the danger is that bad actors would simply move under the radar. In the case of social media, this would mostly mean moving to private messaging groups. Trying to moderate that space comes with a host of new headaches around balancing privacy and protecting encryption while also addressing the impact of misinformation on users.

Despite that, "we need to do something because of what’s happening right now publicly," concludes Halgand-Mishra.

"You can still add friction and provide more information in a way that protects human rights and democracy."

You can read the full report by the Forum on Information and Democracy here.
