
The way artificial intelligence is being used at RTL Nederland

The use of data and AI in media is often seen as a technology to optimize ads, but it’s more than that. At RTL, AI is also being used to create personal trailers for films, to improve speech recognition and to determine which moment is best to start the commercial break during a television broadcast. Daan Odijk, Data Science Manager at RTL, and his team are making sure these processes are efficient and keep improving all the time.

There are many definitions of artificial intelligence, but what is AI, according to you?

I maintain a relatively broad definition of AI: how do you teach computers to take over human tasks? That also means that the work of the people involved is extensive. AI is more than just machine learning and deep learning; it also focuses on what people do and how we can automate certain activities.

You worked at Blendle as a Lead Data Scientist and are now active as a Data Science Manager at RTL. Does the use of AI at RTL differ from its use at Blendle or at a bank?

I’ve worked in the media for a long time, and that’s not entirely coincidental. At a company like RTL, two exciting things come together: many users and valuable content that people are willing to pay for. That combination is interesting enough to analyze and apply AI models to, and you can learn something from all those users. That’s why it’s a lot of fun for me to work in the media sector.

How is AI used in an (online) medium like RTL?

I usually divide the use into three categories. The first is data science and AI applications, for example predicting television viewing figures. We are now also looking at how we can partially automate the planning of films, deciding when which film is broadcast on TV so that it reaches the right audience.

The second category is personalization and access to content. Our streaming service, Videoland, analyses your viewing behaviour and makes suggestions based on that behaviour.

The last category is more content-supporting: automatically selecting thumbnails for videos, automatically creating trailers for films and automatically subtitling TV programs.
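The interview does not describe how Videoland's recommender actually works, but the personalization in the second category can be illustrated with a minimal item-item collaborative filtering sketch: titles a viewer has watched are compared to other titles by how often they are watched together. Everything here (the toy view matrix, the function names) is invented for the example.

```python
# Minimal item-item collaborative filtering sketch (hypothetical, not
# Videoland's actual recommender): rows are users, columns are titles,
# values are watch signals (1 = watched). We recommend titles similar
# to what a user has already seen.
import numpy as np

views = np.array([
    [1, 1, 0, 0],   # user 0 watched titles 0 and 1
    [0, 1, 1, 0],   # user 1 watched titles 1 and 2
    [1, 0, 0, 1],   # user 2 watched titles 0 and 3
], dtype=float)

# Cosine similarity between titles (columns).
norms = np.linalg.norm(views, axis=0, keepdims=True)
norms[norms == 0] = 1.0
item_sim = (views / norms).T @ (views / norms)

def recommend(user_idx: int, top_k: int = 2) -> list[int]:
    """Score unseen titles by their similarity to titles the user watched."""
    seen = views[user_idx] > 0
    scores = item_sim @ views[user_idx]
    scores[seen] = -np.inf          # never re-recommend what was watched
    return list(np.argsort(scores)[::-1][:top_k])

print(recommend(0))  # titles most similar to user 0's viewing history
```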

Do you develop the algorithms for these processes yourselves?

We often do this ourselves, working together with the creative people within RTL. Selecting thumbnails is done by a designer, who spends a lot of time on it. The team of data scientists was interested in how this designer works, so we watched over his shoulder to see how the process goes. We discovered that a thumbnail must not contain more than three faces and that text isn’t allowed either. These are things that can be automated easily. After that, we started improving on this, and we were able to create the large datasets we work with now.
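A rough sketch of those two editorial rules (at most three faces, no on-screen text) applied to a candidate frame might look like the following. The face detector here is OpenCV's stock Haar cascade and the text check is left as a stub, because the interview does not say which tooling RTL actually uses.

```python
# Sketch of the rule-based thumbnail checks described above (at most three
# faces, no on-screen text). Face detection uses OpenCV's bundled Haar
# cascade; the text check is a placeholder for an OCR step.
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(frame_bgr) -> int:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

def contains_text(frame_bgr) -> bool:
    # Placeholder: plug in an OCR step (e.g. pytesseract) here.
    return False

def is_thumbnail_candidate(frame_bgr) -> bool:
    """Apply the two editorial rules from the interview to one video frame."""
    return count_faces(frame_bgr) <= 3 and not contains_text(frame_bgr)
```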

Are there differences and similarities in the use of AI between your current employer, RTL, and your former employer, Blendle?

Blendle is an aggregator where news comes together, so there is a lot to choose from. You can read the same kind of story in de Telegraaf, NRC or de Volkskrant, so there is something to choose from for the reader. That makes the personalization problem interesting. Both media are part of a journalistic system with an editorial team that has a specific vision for the company. That was interesting at Blendle, but more recently also at RTL Nieuws. You want to do it together with the journalists, and you don’t want to personalize too much, because you want everyone to have access to the same information. At the same time, there is too much information, so together with the journalists we are looking for a balance between these things.

What is the role of data science within RTL?

There are a few different goals within RTL. For consumers, we can make the content offering more interesting, because we can offer much more diverse content on our online platforms than on TV. Thanks to AI, we can do this in a completely different way than before. We are traditionally good at reaching the masses, but now we must understand what each individual wants to watch. That is something we can solve well with AI.

Everything is becoming more and more personal, and we have more content. We also have to become more efficient in our operations and be able to do things on a larger scale. That means not only saving people work but also being able, and needing, to do much more. The automatic creation of personalized trailers is an example of this: you get to see a different trailer than your parents. This is necessary as we develop more and more content, and to market it better we need the scale of AI.

Furthermore, we have the goal of making processes more efficient. Things like predicting viewing figures used to be done by hand, but new models have automated this.

Are ad blockers a problem for you when trying to analyze your audience to personalize ads?

You do not necessarily have to permit cookies when you visit a website. We try to estimate who you are and what suits you based on where you are. Advertisers are often interested in demographic audiences, which we take into account when offering our advertisements. Furthermore, we look at the data of viewers who share that information with us, such as gender and age.

How does data affect media and advertising?

Viewing figures have been looked at for a long time: we look at what appeals to the audience and translate those figures into how content is produced. With video on demand, we have much more interesting and detailed information. We can see much better where people drop out and from which regions specific programs are being viewed. This way, we can further optimize our content. We also work together with the teams that purchase and produce content: we give them information about which programs are searched for on Videoland and which programs are viewed. So there are all kinds of small things we can trace back, which makes it a very data-driven process.
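The drop-out analysis mentioned above can be sketched, under assumptions, from playback logs: if you know the furthest point each viewing session reached, you can compute how much of the audience is still watching at each minute. The column names and toy data below are invented for illustration.

```python
# Hypothetical sketch of a drop-off analysis: from playback logs with the
# furthest position each viewer reached, compute the share of the audience
# still watching at each minute of a programme.
import pandas as pd

# Toy log: furthest minute reached per viewing session (column names invented).
sessions = pd.DataFrame({
    "session_id": [1, 2, 3, 4, 5],
    "last_minute_watched": [3, 45, 45, 12, 30],
})

programme_length = 45
minutes = range(1, programme_length + 1)
retention = pd.Series(
    [(sessions["last_minute_watched"] >= m).mean() for m in minutes],
    index=minutes,
    name="share_still_watching",
)

# Minutes with the steepest decline are candidate drop-off points.
print(retention.diff().nsmallest(3))
```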

Other providers pay much attention to gaps in their series offering. That is something we do less than a platform like Netflix does. We feed our creative people with a massive amount of data, but they ultimately make the decision instead of an algorithm. That is also part of our strategy: we focus on the work process of the creative people, who must be able to carry out their work as well as possible. Our AI plays a supporting role in these processes.

How is an AI for media developed?

For Videoland, we have introduced an ad tier, where you see more advertisements if you pay less. However, we had a lot of content produced explicitly for Videoland that never contained advertisements. For television, a script is often adapted so that certain advertisements can be broadcast at a better moment, but for a series like Mocromaffia on Videoland this has never been the case. We initially placed the breaks partly by hand, especially for the more extensive content, but we have now partly automated this.

Such a project starts with conversations with our experts in the field of advertising. We asked them where they would insert a particular advertising break and how they do that, and we try to translate that into a model. In this case, we started with cuts between camera angles, and with the fact that it is inconvenient to interrupt while people are talking. So we look at the audio signal for moments when it is quiet for a while, or at shot changes; those are good candidates for a possible advertising break. That worked well, but we did notice that sometimes you are in the middle of the story and it is not a good moment to introduce an advertisement. We then added new signals to the model, such as faces changing between shots, so that we move towards scene detection. This is the model we have developed, and over time things have been added to make it better.
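As a hedged illustration of that candidate search (not RTL's actual model, whose features and thresholds are not disclosed), one can combine a shot-change signal with an audio-silence signal and keep the moments where both suggest a natural pause:

```python
# Illustrative sketch: combine a shot-change signal with an audio-silence
# signal and keep the moments where both suggest a natural pause. The
# thresholds and toy signals are invented for the example.
import numpy as np

def break_candidates(frame_diffs: np.ndarray,
                     audio_rms: np.ndarray,
                     cut_threshold: float = 0.5,
                     silence_threshold: float = 0.05) -> np.ndarray:
    """Return indices (e.g. seconds into the programme) that look like
    shot changes during which nobody is speaking."""
    shot_change = frame_diffs > cut_threshold   # large visual change
    quiet = audio_rms < silence_threshold       # low audio energy
    return np.flatnonzero(shot_change & quiet)

# Toy signals, one value per second of a ten-minute video.
rng = np.random.default_rng(0)
frame_diffs = rng.random(600)
audio_rms = rng.random(600) * 0.2
print(break_candidates(frame_diffs, audio_rms)[:10])
```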

To complete that loop, the suggestions we make are returned to the editor, who approves or rejects them but also comments on errors that occur often. The combination of the signals we have in an AI model with a number of rules ultimately results in the commercial breaks on television. With new feedback we can, of course, train a new model, but we can also learn which signals we are missing.
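That human-in-the-loop step could be sketched as follows: each editor decision becomes a label, and the model is periodically retrained on the accumulated feedback. The feature names and the choice of classifier are assumptions made for the example, not RTL's actual setup.

```python
# Sketch of the feedback loop: each suggested break goes to an editor, the
# approve/reject decision becomes a label, and the model is periodically
# retrained on the accumulated feedback. All names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

features, labels = [], []          # growing feedback dataset

def record_feedback(candidate_features: list[float], approved: bool) -> None:
    features.append(candidate_features)
    labels.append(int(approved))

def retrain():
    """Fit a fresh model once both approved and rejected examples exist."""
    if len(set(labels)) < 2:
        return None
    model = LogisticRegression()
    model.fit(np.array(features), np.array(labels))
    return model

# Example features: [shot_change_strength, seconds_of_silence, faces_changed]
record_feedback([0.9, 1.2, 1.0], approved=True)
record_feedback([0.4, 0.1, 0.0], approved=False)
model = retrain()
if model is not None:
    print(model.predict_proba([[0.8, 0.9, 1.0]]))
```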

That human-machine collaboration is precisely the vision we want to pursue: we are looking for hybrid intelligence, a collaboration between people and the algorithm. Our content has a high production value, which means that we can spend people’s time on making this content more accessible. That makes it more logical for us to improve the cooperation between man and machine. People can do some things better than algorithms, algorithms deliver the scale, and through the combination we can keep the quality high and maybe even make it better.

How is AI used at the Mediapark?

We are a media company and not a technology company. I also firmly believe that we have to work together with other companies, so we look outward a lot and seek cooperation with other parties.

We are looking for collaborations at the Mediapark, partly through the Media Perspectives Foundation, which establishes contacts between different media parties. In this way, we try to find subjects on which we do not compete but can learn from each other. We have done many different things together, such as work on the combination of ethics and AI. This included explaining what responsible use of AI in the media looks like: we want to think carefully about how we use AI and with what intention. This supports us as AI developers in how we should think about ethics and the use of AI.

We also looked at how we can jointly develop a benchmark for speech recognition, specifically for television content. It is very logical to look at this together with a party like the NPO, because everyone benefits from high-quality speech recognition. We approach this from a different point of view than the NPO, but to find the pain points and the points you need to dwell on longer, cooperation between different parties is needed. We also work together with universities and colleges on research projects; half of our team consists of students doing their graduation projects.

We also want to set up our own AI research lab, in which we will employ five PhD students so that they can conduct research with us. We hope that this can start in early 2023.

How can the AI community help you?

We are happy to enter into discussion on this subject and into the cooperation that comes with it. If there are topics where cooperation is possible, that is very interesting for us. Topics like speech recognition are not specific to us alone. We have years of subtitled television content, which is a considerable amount of training data for developing a good speech recognition model. It could well be that a model trained on RTL data also works very well for speech recognition in a call centre. These are exactly the issues on which we should enter into cooperation.
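A hedged sketch of how a subtitled broadcast archive could be turned into speech-recognition training data: each subtitle cue provides a time span and a transcript, so the matching audio slice becomes one (audio, text) training pair. The parsing and audio handling below are deliberately simplified and not RTL's pipeline.

```python
# Sketch: turn subtitle cues plus audio into (waveform slice, transcript)
# training pairs for a speech recognition model. SRT parsing and audio
# handling are simplified for illustration.
import numpy as np

def srt_time_to_seconds(t: str) -> float:
    """Convert an SRT timestamp like '00:01:02,500' to seconds."""
    hh, mm, rest = t.split(":")
    ss, ms = rest.split(",")
    return int(hh) * 3600 + int(mm) * 60 + int(ss) + int(ms) / 1000

def training_pairs(cues, audio: np.ndarray, sample_rate: int):
    """cues: iterable of (start_str, end_str, text) from a subtitle file."""
    for start, end, text in cues:
        a = int(srt_time_to_seconds(start) * sample_rate)
        b = int(srt_time_to_seconds(end) * sample_rate)
        yield audio[a:b], text      # one training example

# Toy usage with a single cue and silent dummy audio.
audio = np.zeros(16_000 * 10)       # 10 seconds at 16 kHz
cues = [("00:00:01,000", "00:00:03,500", "Welkom bij RTL Nieuws.")]
for clip, transcript in training_pairs(cues, audio, 16_000):
    print(len(clip), transcript)
```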


Pepijn van Vugt

Pepijn van Vugt is an editor for ai.nl who specializes in data, machine learning and artificial intelligence.
