When talking about artificial intelligence in the newsroom, there is too much focus on the technology and not enough on what it actually does. We want to help journalists, whether technophiles or technophobes, explore this topic in an accessible way. So we are launching a second series of "I am a journalist, what can AI do for me?", which brings you stories from peers who work with editorial robots.
For many publishers, the comments section is a mixed blessing. It is a fantastic space for readers to contribute their thoughts, experiences, and diverse perspectives. But it can also become a breeding ground for negativity, with arguments and offensive language driving valuable voices away.
This was the challenge that Lucy Warwick-Ching, community editor at the Financial Times, faced when she took on her role two years ago. The FT had a dedicated team of human moderators but, with hundreds of stories published every week, they could not keep comments open on all of them and had to close some threads. The team also closed some comment threads over the weekend, when moderating capacity was limited, to head off toxic discussions.
Hiring more moderators was too expensive, and building a moderation system in-house from scratch looked like more trouble than it was worth. It would also have meant diverting time and resources from the FT's other experiments with AI, so that was a no-go.
Warwick-Ching’s team settled on an off-the-shelf moderation tool from Utopia Analytics, which was then trained on 200,000 real reader comments so it could learn what the FT means by polite, on-topic, non-argumentative discussion.
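Utopia Analytics has not published how its tool works, so purely as an illustration of the "learn from labelled reader comments" idea, here is a toy sketch in Python. The library choice, the example comments and the approve/reject labels are all invented for this sketch and bear no relation to the FT's or Utopia's actual system:

```python
# Illustrative only: a toy comment classifier trained on labelled examples.
# The real moderation tool is proprietary; this just shows the general
# pattern of learning a publisher's standards from labelled comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the labelled reader comments a publisher would supply.
comments = [
    "Great analysis, this matches what I saw working in the sector.",
    "You people are all idiots and should be banned.",
    "Slightly off-topic, but has anyone read the related piece on pensions?",
    "Typical rubbish from this writer, as always.",
]
labels = ["approve", "reject", "approve", "reject"]

# TF-IDF features plus a linear classifier: a common baseline for text moderation.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

print(model.predict(["Thanks for a thoughtful piece, very informative."]))
```

In practice the training set would be far larger (the FT supplied 200,000 comments) and the model far more sophisticated, but the principle is the same: the tool infers the house standard from examples rather than from hand-written rules.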
"We’re very lucky to have readers who offer valuable comments that enhance our journalism,” she says, adding that she was also very keen on increasing the diversity of voices in the comment sections and making it more inclusive for women, minorities and people from different backgrounds.
Research showed that these groups are often reluctant to join online conversations, either because they do not see themselves represented in them or because they fear getting drawn into a toxic debate.
Training the AI
When the tool first went live, the human moderators did not simply accept every decision the machine made: they checked each one manually. It took a couple of months to get the moderation decisions right; the machine now catches most sexist and racist comments, despite the sophisticated language some FT readers use to get around it.
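The workflow described here is a classic human-in-the-loop pattern: moderators review every machine decision, and disagreements become labelled examples for the next round of training. As a minimal sketch, assuming a hypothetical correction-gathering step (all names and data below are invented), it might look like this:

```python
# Illustrative only: the human-in-the-loop review pattern described above,
# where moderators check every machine decision and their disagreements
# become new training data. All names and data here are hypothetical.

def collect_corrections(machine_decisions, human_label):
    """machine_decisions: iterable of (comment, predicted_label) pairs.
    human_label: callable returning the moderator's verdict for a comment."""
    corrections = []
    for comment, predicted in machine_decisions:
        verdict = human_label(comment)
        if verdict != predicted:
            # Each disagreement is a labelled example for the next retraining run.
            corrections.append((comment, verdict))
    return corrections

# Toy usage: the moderator overrides one of the machine's two decisions.
decisions = [("Lovely piece!", "approve"), ("You lot are clueless.", "approve")]
print(collect_corrections(decisions, lambda c: "reject" if "clueless" in c else "approve"))
# -> [('You lot are clueless.', 'reject')]
```

This is why the first months were labour-intensive: the value of the period was not the machine's early decisions but the corrections it accumulated.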
"It is not perfect and it is still learning," Warwick-Ching says after six months.
However, its impact has been significant. Previously, moderators spent a large portion of their time filtering out negativity. Now, AI takes care of a lot of the heavy lifting, freeing them up to focus on community-building. Readers often share valuable insights, personal stories, and even story leads within the comments. Moderators can now dedicate their time to finding these gems and bringing them to the attention of journalists, enriching FT's content.
The benefits are not just about efficiency. Moderating online comments takes an emotional toll. AI now absorbs most of that negativity, protecting humans from the worst abuse.
Most importantly, Warwick-Ching says no moderator has been made redundant.
"From the very beginning, when we pitched automated moderation to the editors, we made it clear that this wasn't about reducing headcount, but about creating a space for moderators to do more fulfilling work," she says.
The results so far have been positive. Readers have emailed in to say they noticed the improved civility in the comments section, and there have not been any significant complaints about comments being unfairly rejected.
The impact on the newsroom has been significant. Journalists are seeing the benefits of a thriving comments section, with valuable reader contributions enriching their stories. They have also seen that using AI does not mean sacrificing the human touch in moderation – it allows moderators to focus on what they do best: fostering a welcoming and informative space for everyone.
Update: An earlier version of this article incorrectly stated that FT closed most comment sections for the weekend. In fact, it only closed some as it had a limited capacity to moderate.
This series is supported by Utopia Analytics, a Finnish company that provides AI-automated moderation of reader comments in any language and cuts down the publishing delay. Inappropriate behaviour, bullying, hate speech, discrimination, sexual harassment and spam are filtered out 24/7, so teams can focus on moderation policy management and engaging with readers. Utopia Analytics has no editorial input into the series.