
Yasir Khan (Thomson Reuters Foundation), Jeff Allen (Integrity Institute), Ritu Kapur (Quint Digital) and Claire Leibowitz (Partnership on AI) speaking at Trust Conference 2024, the Thomson Reuters Foundation's annual flagship forum


False information spreads fast online and can damage democracy - that much is clear. But what can journalists do about it, especially in a year of major elections around the world? Yesterday (22 October 2024), a panel at the Trust Conference, organised by the Thomson Reuters Foundation, tried to answer that question. Here are the main points.

People spread lies, not AI

Ritu Kapur, CEO of Quint Digital, made an important point: we do not need AI to create election lies. Most false information starts with people, especially politicians. We see this when politicians claim election fraud without proof, share inflated numbers about their achievements, or when campaign teams spread rumours about opponents. These false statements do not just confuse people – they can lead to real violence in communities.

Tech companies are less prepared

Jeff Allen, co-founder and chief research officer of the Integrity Institute, shared some worrying news about how tech companies handle false information today. Back in 2016, Facebook had integrity teams for most products to check for lies. These teams worked to catch fake accounts and stop false stories from spreading during elections. But things have changed: many companies - not just Facebook/Meta but other tech firms too - have laid off these teams, leaving fewer people to fight false information. Lies can now spread more easily than before because there is no one left to even ask questions about integrity, harm and truth.

When real videos are called fake

Claire Leibowitz, head of AI and media integrity at Partnership on AI, warned about a growing problem: people claiming that real videos or photos are fake. As an example, she mentioned Trump's claim that the crowds greeting Kamala Harris at an airport were AI-generated. This tactic is dangerous because it makes people doubt real evidence. Leibowitz said it hurts democracy more than actual fake content because it breaks trust in real news.

Problems with different apps

Social media platforms each face unique challenges with false information. X (formerly Twitter) has become a particular problem because users can now create and share fake videos with Grok, which lacks many of the safety checks used by other AI platforms. Recently, a fake video showed Taylor Swift supporting Trump. In other cases, fake videos have shown celebrities endorsing products they never agreed to, or politicians saying things they never actually said, even if only as parody.

How we can fix this

Experts shared several promising solutions to fight false information:

Working with influencers

In India, journalists found a creative way to spread the truth: they got popular people to help. Online influencers, such as cricket players, shared fact-checked news with their followers. Food bloggers helped fight lies about health. Local celebrities joined the fight against Covid-19 misinformation. This approach helped reach people who might never see traditional news.

More competition among AI companies

More competition among AI companies - such as OpenAI, Google (with Gemini) and Perplexity - could lead to better results. When these companies compete, they might focus on accuracy and not just on getting attention, creating a race to build better tools for spotting lies.

Making truth easy to find

Truth needs to be as quick and engaging as lies. Short videos under 60 seconds work well because they match how people use apps like TikTok. When fact-checkers use popular hashtags and eye-catching images, they reach more people. The key is to use the same SEO keywords as those who spread misinformation, so the truth surfaces before the lies travel too far.

Community help

Online communities can be powerful allies in fighting false information. Reddit users often work together to spot fake news quickly. Wikipedia editors check facts as a team. But this system works best when verification marks are earned through trust rather than bought with money, as with blue ticks on X.

Making things better

Being open about how social media works builds trust. People should know how news feeds choose what they see and how fact-checkers do their work. Understanding where information comes from helps everyone make better choices about what to believe.

False information is not just a social media problem. It spreads through private messaging apps, email chains, gaming chats, and even shopping websites with fake reviews. To fight lies effectively, we need to look at all these places.
