Human oversight is now well established as an integral part of using AI in journalism, providing an essential ethical counterbalance to the biases inherent in the time-saving work that machines can do.
But if we could make machines behave responsibly without us, could we let them loose to produce engaging journalism, keeping audiences as happy as they are with their human reporters?
A new study from European and American universities, which looked at how audience perceptions of news videos change according to how much automation was used to create them, suggests not.
It is a question with clear relevance for many working journalists: the likes of the BBC, Reuters, and The Economist are already using video automation services such as Wibbitz, Wochit, and Synthesia.
"Our research shows that, on average, news consumers liked short-form, automated news videos as much as manually made ones, as long as the automation process involved human supervision," says Neil Thurman, Senior Honorary Research Fellow in the Department of Journalism at City, University of London, and Professor at Ludwig-Maximilians-University in Munich.
Along with Dr Sally Stares, from the London School of Economics, and Dr Michael Koliska, of Georgetown University in Washington DC, Professor Thurman surveyed the reactions of thousands of UK news consumers to videos that were human-made, highly automated, or partly automated, the last combining machine and human effort.
And the results showed that while there were no significant differences in how much news audiences liked the human-made and partly automated videos, viewers still need that human touch: highly automated videos were liked significantly less.
What is automated video journalism?
Previous audience perception studies had focussed on the full automation of text: programming code that takes data and turns it into a narrative with no further human intervention. For the purposes of this study, that definition did not work: the researchers needed to account for inputs other than numeric data, for a level of human post-editing, and, of course, for the fact that they were looking at video, not text.
So, adapting previous definitions of text-based automated journalism, they settled on the following definition: "Algorithmic processes that convert numerical data, images, or text into written or audiovisual news items with various levels of human intervention beyond the initial programming."
Once they had this, they could look at the three types of videos they wanted to present: human made, partly automated and fully automated.
For their study, they used human-made videos from the PA news agency. The automated videos were made using Wibbitz, taking the captions from the PA videos as input prompts.
The platform automatically tried to find video clips and still images that matched the captions by searching media databases. The resulting highly automated videos were included in the study as they were, while the partly automated versions were created by the researchers post-editing them to replace any still images and video clips that did not match the captions.
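For readers who want a concrete picture of that workflow, the sketch below shows the general shape of such a caption-to-clip pipeline in Python. It is not Wibbitz's actual (proprietary) API: the media library, the keyword-overlap matching rule, and every function name here are illustrative assumptions, included only to make the "highly automated" and post-edited steps described above easier to follow.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Asset:
    """A stock still image or video clip with descriptive tags."""
    asset_id: str
    tags: set[str]


# Toy stand-in for a searchable stock-media database.
MEDIA_LIBRARY = [
    Asset("clip_wimbledon_01", {"wimbledon", "tennis", "court"}),
    Asset("still_trump_02", {"donald", "trump", "podium"}),
    Asset("clip_generic_city", {"city", "street", "crowd"}),
]


def match_asset(caption: str, threshold: int = 1) -> Asset | None:
    """Pick the asset whose tags overlap most with the words in the caption."""
    words = set(caption.lower().split())
    best = max(MEDIA_LIBRARY, key=lambda asset: len(asset.tags & words))
    return best if len(best.tags & words) >= threshold else None


def build_video(captions: list[str]) -> list[tuple[str, Asset | None]]:
    """'Highly automated' pass: pair every caption with its best-matching asset."""
    return [(caption, match_asset(caption)) for caption in captions]


def post_edit(timeline, replacements):
    """'Partly automated' pass: a human editor swaps out assets that do not fit."""
    return [(caption, replacements.get(caption, asset)) for caption, asset in timeline]


if __name__ == "__main__":
    captions = ["Trump speaks at a podium", "Rain delays play at Wimbledon"]
    automated = build_video(captions)      # machine-only output
    supervised = post_edit(automated, {})  # human supervision step (no swaps here)
    print(supervised)
```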
The big questions
When showing the videos to news consumers, the researchers were asking the question: "what, if any, differences exist in UK online news consumers’ evaluations of short-form online news videos made with various levels of automation, and none?"
To find out, they prepared videos covering 14 different stories, including some on news figures like Donald Trump and Elon Musk and others on sports topics including the Wimbledon tennis championships.
For each of these stories, they used the three types of video: human-made, partly automated and highly automated. Each version was shown to 100 people, so with 14 stories and three versions apiece, the study reached a total of 4,200 news consumers.
As they did so they were also asking themselves a second question: "How, if at all, do any differences found in the audience’s perception vary across the 14 story topics included in the experiment?"
What the results showed
The outcome, as you might expect, showed that the human-made videos were the most popular with audiences overall. However, while there was a significant gap between how much audiences liked the partly automated and highly automated videos, the human-made content was only slightly, and not significantly, ahead of the partly automated versions.
Notably, the gaps were small for criteria relating to characteristics of the videos that the researchers had edited: for example, how well the images related to the captions, and the variety of stills and video clips used.
On the other hand, there were clearly wider gaps in perception for criteria relating to characteristics the researchers did not change in post-editing: for example, the pace of the video, the caption transitions, and the audio (the automated videos had music overlaid, while the human-made videos often had authentic background noise).
The researchers also found that there were significant differences in evaluation between the different stories that the videos covered. For example, there were large differences in audiences’ relative perceptions of automated and human-made videos about Donald Trump, but this was not the case for those about Elon Musk and 5G technology. However, they were unable to identify why different stories resulted in such changes in perception.
Finally, the researchers looked at one more question: whether differences in audience perception of automated content varied according to the demographics of the audience themselves.
On this, they did not find any strong systematic patterns, although one or two interesting nuances emerged. For example, their male respondents tended to rate human-made and automated videos as being similarly dry or emotive, while women tended to rate human-made videos as more emotive than the automated content.
The need for a human touch
The takeaways from this study are highly relevant to companies already employing this technology, and those who will adopt it in the coming years.
The study shows that while there are undoubted gains to be made from employing AI, humans clearly still have their role to play.
As Dr Koliska explains: "One key takeaway of the study is that video automation output may be best when it comes in a hybrid form, meaning a human-machine collaboration.
"Such hybridity involves more human supervision, ensuring that automated video production maintains quality standards while taking advantage of computers’ strengths, such as speed and scale."
Audiences like their videos to have a human touch, and a combination of new AI technology and editorial oversight looks like it could bring the best of both worlds.
The study, 'Audience evaluations of news videos made with various levels of automation: A population-based survey experiment', is published open access in the international peer-reviewed journal Journalism.
Joseph Hook is the founder of Subbed News, a news automation and AI consultancy. He has a background in data analysis and local and national journalism, and while managing the RADAR local news service, part of the PA Media newswire, he led a small editorial team in the production of thousands of AI-driven stories every week.