
Beth Ashton, chief growth officer at Bright Sites, interviewed Laura Ellis, head of technology forecasting at the BBC.

Ellis has worked on news teams in radio, TV, and online and established the BBC’s first end-to-end digital newsroom. In her current role, she focuses on ensuring the BBC is best placed to take advantage of emerging technology.


How does the BBC use AI?

We have been using AI to do a number of things for years: translation, transcription, and object recognition via computer vision, for example. These are exactly the sorts of things you might expect from a broadcaster. We have a programme called Springwatch, which is part of our series of ‘Watches’ programmes about nature, and AI would spot a bird or an animal, having been trained on the sort of wildlife to expect. That saves producers from having to go through hours of footage to find where the duck comes into shot. It is really clever because it can also distinguish between species and between individual animals as well.

And then, of course, when generative AI arrived, we started to look at how that might change the landscape.

How did generative AI change the landscape?

So the first thing we realised is that there were quite a lot of additional risks. There are three key problems we have to solve, I think, before moving forward comfortably with generative AI.

The first is that because generative AI is much more democratised, everybody could suddenly use it: you can speak to machines in your own language. That meant there was a danger that people could put company data into a system, and if you do not have the right safeguards, that data can go anywhere. You do not know where it is going to wash up, so you have to be very careful.

The second worry was that we do not know how to make these models work without 'hallucinating', or coming up with things that are not true. They have been designed to be pleasing and to give you an answer. There is not the reasoning in there that says, 'Oh, I don't know that; I'll say I don't know it.' They can just come up with things which, as we know, can be harmful, damaging, and unhelpful, so that is a real issue.

And then there is the third issue, which is common to the whole of the media industry and beyond: to what extent are we comfortable, whether legally or ethically, with the fact that these models have been trained on vast amounts of data, much of which is copyright material?

How difficult is it to police an AI policy at an organisation as large as the BBC?

We are used to having editorial guidelines at the BBC that everyone is expected to adhere to. So we do have a history of knowing that there are specific things we have agreed editorially that we will and will not do.

Now, we have moved into a new era where we are looking at guidelines with a lot more references to technology in them. So in a way, it has been quite interesting adding to and updating those. I think it is also really important to supplement them with human training. I did a course this morning as part of the Co-Pilot series that we are running, for example.

It is also really important, I think, for people to be able to ask questions, to say, "Well, hang on a minute, how does that work? Well, how would I access that? Are you sure that we've made the right decision on that?"

We also disclose to audiences what we have done. It is very important to say to our audiences that we have used AI in the creation of this particular piece of content and to convey that information in a way that is not intrusive but is instructive and useful. That is a really big challenge.

Do you work with other media organisations when it comes to AI?

It used to be the case that you would have a lot of rivalry between organisations, and you probably would not share much about what you were doing. But generative AI hit so hard and fast that we have wanted to share findings, and I think that has been a really positive outcome. There are some people who are doing really impressive things.

It is similar to the way that we have collaborated on tackling disinformation. Again, that problem is too big to be something you use as a differentiator in the competitive space.

What advice would you give to publishers just starting down the road of AI?

The first thing we need to do is to listen. It is important to have conversations with colleagues and maintain the human element in journalism. The people that are going to use this need to be able to ask questions. They need to feed back their concerns. We have something called the Blue Room where we do a lot of sessions on this. So people come in and they tell us what they think. It is a great feedback loop.

If generative AI can help, then it should be able to improve and enhance, rather than take anything away. It should be able to give us new opportunities. For me, the wonder of journalism is a human looking at a situation and telling another human or group of humans about what they have discovered and how they have reacted to it.

That is a very, very precious thing, and if we lose that, we have lost the industry, really. We need to keep that really at the forefront of our imaginations, in our minds, as we do this.

Is there a world where journalism, publishing, and AI all live together harmoniously?

There should be. One of the things that AI could do is add value because it can change modes, so it can take text into voice and voice into text.

You might not have any spare capacity, and that is giving added value to content that we have already paid for and already accumulated. So allowing AI to do those sorts of things adds value: something new to the offering that we have for our audiences, and that feels great, right?

There are lots of positives from that point of view. Where it becomes more difficult is with the jobs that are perceived to be dull. They might be repetitive: translation, transcription. What are we losing if we let AI do all of that?

There is an awful lot of material we create that we would just never translate and never transcribe because there are not enough humans to do it. But let's not lose the beauty and the subtlety of getting human translations for things that might be particularly sensitive, or beautiful, or precious.

And let's make sure we do not wipe out a very, very subtle and high-end human skill, which is understanding another language. And again, with audiences, people say, "Oh, I know we can get a story, write it once and then have AI write five versions: for an audience with English as a second language, or an audience that really does not like words and prefers bullet points and pictures." Absolutely, you can do that, and it might be useful, but I would ask two questions. One is: do you then lose touch with the audience that you are not writing for anymore?

If we look at our wonderful journalists in the BBC, they do things like Newsround and Newsbeat. They are writing for specific audiences. We need them to know what the language is, what the idioms are and how that audience responds to being told stories in a different way. Language changes and dates really quickly.

Secondly, if you do not understand that audience, is that a problem for you long term as an organisation? I do not think we should use AI as a proxy to communicate with people that we could otherwise make the effort to communicate with better.

What in your journalistic background do you find yourself drawing on a lot when it comes to AI?

If you have worked in a newsroom like the ones here, you have an absolute passion for fact, and for truth, and for accuracy.

I think we as a society are losing our grip on facts and truth, so going back to journalism every so often reminds me that facts are not only important, not only the absolute currency of journalism, but also a basic human right.

Journalism should not have to be fighting AI over that. We should be enlisting it in the cause of making journalism better.

This article was originally published in a newsletter by Bright Sites and is republished here with permission. You can sign up for the Bright Sites newsletter here.
