A social media "lie detector" is being developed to help journalists verify rumours and other information online.
The three-year project, named Pheme, is a European Union-funded collaboration between an international group of researchers led by the University of Sheffield.
Lead researcher Dr Kalina Bontcheva told Journalism.co.uk that the idea for the project had come about following the circulation of rumours within tweets during the London riots in 2011, such as the false claim that animals had been set free from London Zoo.
"The problem with [verification] is that it can take quite a lot of people's time and effort, which isn't always tenable when we're talking about responding to events unfolding in real-time, which is what normally happens in a newsroom," she explained.
The idea of Pheme, she said, is to automate certain verification processes to make it easier, and faster, for journalists to use the social web effectively in a breaking news situation, when such platforms are often flooded with information.
Pheme will attempt to sort online rumours into four categories: speculation, controversy, misinformation (where false information is circulated unknowingly) and disinformation (where something is spread maliciously).
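As a rough illustration of that categorisation (the four labels are Pheme's, but the code below is only a sketch and not the project's actual data model), the rumour types could be represented like this:

```python
from enum import Enum

class RumourType(Enum):
    """The four rumour categories Pheme aims to distinguish (illustrative sketch)."""
    SPECULATION = "speculation"          # unconfirmed claims framed as possibility
    CONTROVERSY = "controversy"          # claims attracting substantial dispute
    MISINFORMATION = "misinformation"    # false information circulated unknowingly
    DISINFORMATION = "disinformation"    # false information spread maliciously
```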
To do this, it will automatically assess the authority of sources, such as news outlets, individual reporters, potential eyewitnesses or automated ‘bots’.
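How that source typing is implemented has not been detailed; the following is purely a hypothetical heuristic, with keywords and thresholds invented for illustration rather than taken from Pheme:

```python
from enum import Enum

class SourceType(Enum):
    NEWS_OUTLET = "news outlet"
    REPORTER = "individual reporter"
    EYEWITNESS = "potential eyewitness"
    BOT = "automated bot"

def classify_source(profile: dict) -> SourceType:
    """Very naive source typing from public profile metadata (illustrative only)."""
    description = profile.get("description", "").lower()
    if profile.get("tweets_per_day", 0) > 200:   # implausibly high posting volume
        return SourceType.BOT
    if any(word in description for word in ("news", "media", "press")):
        return SourceType.NEWS_OUTLET
    if any(word in description for word in ("journalist", "reporter", "correspondent")):
        return SourceType.REPORTER
    return SourceType.EYEWITNESS                 # default: treat as a possible eyewitness
```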
It will also look at the text of the tweet itself. As Bontcheva explained: "Is it emotionally loaded with swearwords or shouting - words in all caps? What kind of verbs are used? Is there any critical language? What are the emotions - are they angry?"
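The kind of surface cues Bontcheva describes, such as shouting in all caps, swearing and angry wording, could in principle be extracted with something as simple as the sketch below; the word lists are placeholders, not the project's lexicons:

```python
import re

SWEARWORDS = {"damn", "hell"}                          # placeholder list
ANGER_WORDS = {"furious", "outraged", "disgusting"}    # placeholder emotion lexicon

def text_cues(tweet: str) -> dict:
    """Extract simple surface cues of emotional loading from a tweet (illustrative)."""
    words = re.findall(r"[A-Za-z']+", tweet)
    shouting = [w for w in words if len(w) > 2 and w.isupper()]   # all-caps "shouting"
    return {
        "shouting_ratio": len(shouting) / max(len(words), 1),
        "has_swearing": any(w.lower() in SWEARWORDS for w in words),
        "anger_score": sum(w.lower() in ANGER_WORDS for w in words),
    }

print(text_cues("This is OUTRAGEOUS, the zoo animals are LOOSE!"))
```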
Pheme will also analyse what Bontcheva called the "propagation factor" to examine the "conversations and dialogues" around the tweet to identify any suspicion that the information is controversial or untrue.
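Again, Pheme's actual method is not public; one naive way to approximate such a "propagation" signal would be to measure how often replies to a tweet express doubt or denial, as in this illustrative sketch (the marker list is invented):

```python
DOUBT_MARKERS = ("fake", "hoax", "not true", "debunked", "unconfirmed", "really?")

def doubt_in_replies(replies: list[str]) -> float:
    """Fraction of replies containing a simple doubt or denial marker (illustrative)."""
    if not replies:
        return 0.0
    doubting = sum(any(m in reply.lower() for m in DOUBT_MARKERS) for reply in replies)
    return doubting / len(replies)
```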
The results will be displayed in a visual dashboard which shows the dynamics of a developing rumour and makes it easier for a reporter to "sift through" information.
The Swiss Broadcasting Corporation's swissinfo.ch will test the initial platform when it becomes available "within the next 18 months", although the project is still in the very early stages of development.
However, while the platform aims to enable speedier verification of content from social media, there are some elements, said Bontcheva, that remain "a human job".
For example, the platform will only attempt to verify text, not images, which Bontcheva said will "only be analysed in the context of the text that they appear in".
"With the verification process not everything can be done by a machine, still a lot of the work has to be done by a human."