AI Liability Directive: what is it and how does it protect people against harm caused by AI-powered software and products

Artificial Intelligence is no longer confined to big tech companies, research labs, and academics. The technology, promising to bring rapid change to society, is now seemingly everywhere. However, AI is still not at a stage where it can be trusted to make the right decisions at all times.

AI systems have already demonstrated biases in fields such as image recognition and hiring. As a result, there has been constant criticism of the lack of a system to address harm caused by AI-powered software and products. To address these concerns, the European Commission has introduced a new AI Liability Directive. Here is everything you need to know.

What is the AI Liability Directive?

The new AI Liability Directive was proposed by the European Commission last month as a way to compensate people harmed by AI-powered software and products. The directive will make it easier for a person or an organisation to sue for compensation when they are hurt or suffer damage caused by artificial intelligence-powered drones or robots.

The directive also includes any harm caused by software such as automated hiring algorithms, reports Politico. “The new rules will give victims of damage caused by AI systems an equal chance and access to a fair trial and redress,” Justice Commissioner Didier Reynders told reporters ahead of the presentation of the proposals.

As AI gains popularity and becomes embedded into products and services we use on a daily basis, the call for regulation is also growing. Even the likes of Elon Musk have openly called for regulation in the fields of artificial intelligence and machine learning.

Now, the draft law proposed by the European Commission signals an intent not only to regulate AI but also to set a global standard for controlling this new technology. The AI Liability Directive comes just before the AI Act, which is designed to regulate high-risk uses of AI such as facial recognition, social-scoring systems, and AI-boosted software for immigration and social benefits, is due to become law.

“If we want to have real trust of consumers and users in the AI application, we need to be sure that it’s possible to have such an access to compensation and to have access to real decision in justice if it’s needed, without too many obstacles, like the opacity of the systems,” said Reynders.

AI Liability Directive: what provisions does it offer?

Under this new directive, victims of AI-powered software and products will be able to challenge a provider, developer or user of AI technology if they suffer damage to their health or property. They can also go to court if they suffer violations of fundamental rights, such as privacy.

While existing provisions allow victims to plead their case against AI-powered software and products, the process has so far been expensive for victims because the technology is complex and opaque.

The new law will allow courts to demand more information about the data used by the algorithms, their technical specifications, and their risk-control mechanisms. Where the burden previously fell on the victim to build their case, it now shifts to the provider, developer or user of AI technology to demonstrate that no harm was done to a person or an organisation.

Once technology providers are made to shed light on their AI programs and the data those programs use, victims will be better able to prove the damage caused to them. The user of an AI system, such as a university, workplace, or government agency, can also be held liable under this new law.

What’s the next step for the AI Liability Directive?

The AI Liability Directive still needs approval from national governments in the European Union Council and from the European Parliament. The European Parliament might object to the directive, since it imposes weaker liability on technology providers than the rules suggested earlier.

In 2020, the Commission proposed rules to ensure that victims of harmful AI could obtain compensation. Those rules would have held developers, providers, and users of high-risk autonomous AI legally responsible even for unintentional harm.

Taking a pragmatic approach, the new AI Liability Directive offers fewer protections to victims of AI-powered software and products, but it provides a framework where none previously existed.

“We chose the lowest level of intervention,” said Reynders, according to Politico. “We need to see whether new developments [will] justify stronger rules for the future.”

The Commission plans to review whether a stricter regime is needed five years after the AI Liability Directive comes into force.
