Can autonomous systems make ethically correct decisions? Meta AI researcher Hubert Etienne argues the ethical approach is flawed

If a machine makes an important decision, will you accept it? And would you trust the machine to make an ethically correct decision?

These questions are at the heart of the major advancements happening in the field of artificial intelligence (AI). AI and 5G are two of the most dominant and hyped technologies right now, and the attention makes sense: their combination is set to power some of the biggest advancements of the near future.

With automation making major waves in industry and acting as a key pillar of Industry 4.0, both AI and telecommunication advancements like 5G are important. While researchers at MIT believe it is possible to train machines to make ethical decisions like humans, a new paper vehemently opposes that narrative.

Autonomous systems are everywhere

One important thing to understand right now is that autonomous systems are no longer restricted to low-risk environments. While much of automation to date has focused on performing repetitive actions, autonomous systems are now also taking actions in high-risk environments.

The research paper by Hubert Etienne explains how autonomous systems are used in high-risk environments such as highways, operating theatres, and care homes. All of these roles are morally significant, the study notes, and thus there is a need to study their ethics.

AI ethics, a field growing alongside artificial intelligence, focuses on ensuring autonomous systems (or AI systems) are capable of making the ‘right’ ethical choices. Etienne’s paper pushes back against this growing interest, using autonomous vehicles as an example to argue that the field’s methodology and results are unreliable and fail to advance ethics discourse.

Moral Machine Experiment

The first part of the paper criticises the famous Moral Machine (MM) experiment, arguing that it fails to “contribute to development of ‘ethical’ decision making in autonomous vehicles.”

According to Etienne, the MM experiment was designed by collecting answers to trolley-problem-style moral dilemmas: whether to save the many over the few, the young over the old, and so on.

The paper also notes that the MM experiment collected 39.61 million answers from 1.3 million respondents across 233 countries and territories over a period of two years. The Moral Machine experiment was originally positioned as a purely descriptive exercise: it tried to describe what people consider to be ethical.

That is in stark contrast to a normative benchmark or guideline, which would prescribe what people should do. Etienne opposes using this experiment as inspiration for the development of automated decision making based on computational social choice.

Etienne notes in his paper that the MM experiment was not methodologically sound enough to serve as a basis for actual automated ethical decision making. “Aggregating individual uninformed beliefs does not produce any common reasoned knowledge,” Etienne states in the paper.
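Etienne’s critique targets exactly this kind of aggregation. As a minimal sketch of the idea behind a computational-social-choice approach (the scenario names and responses below are invented for illustration, not Moral Machine data), majority voting over dilemma answers might look like:

```python
from collections import Counter

# Hypothetical responses: each respondent picks an outcome for a dilemma.
# These scenarios and votes are illustrative only.
responses = {
    "save_many_vs_few": ["many", "many", "few", "many", "many"],
    "save_young_vs_old": ["young", "old", "young", "young", "old"],
}

def aggregate_by_majority(responses):
    """Return the most common answer per dilemma (ties broken arbitrarily)."""
    return {
        scenario: Counter(votes).most_common(1)[0][0]
        for scenario, votes in responses.items()
    }

policy = aggregate_by_majority(responses)
print(policy)  # {'save_many_vs_few': 'many', 'save_young_vs_old': 'young'}
```

Etienne’s point is that the output of such a vote is merely a tally of individual, uninformed beliefs; turning that tally into a decision rule for a vehicle does not make the rule ethically reasoned.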

Instrumentalisation of ethical discourse

Hubert Etienne, an AI ethics researcher at Facebook parent Meta, also opposes the instrumentalisation of ethical discourse. In the second part of his paper, titled “When AI Ethics Goes Astray: A Case Study of Autonomous Vehicles,” Etienne argues that “the instrumental use of moral considerations as leverage to develop a favourable regulation for manufacturers has no solid foundations.”

Since the development of autonomous vehicles began in earnest, a common argument has been that humans make grave ethical errors behind the wheel and that an ethical AI system can avoid those errors. Etienne believes the deployment of autonomous vehicles is not necessarily a good thing.

Etienne believes the money used to develop autonomous systems could instead be used to alleviate starvation for many people. He also notes that autonomous systems may still kill some people, and “those people would be different from the people killed by not using autonomous systems.”

Thirdly, Etienne thinks that if the decision making of autonomous systems undermines certain ethical principles or the value of individuals, then the rights of a huge number of humans will be violated every day, regardless of whether those humans interact directly with autonomous systems or not.

What’s the big picture?

Etienne’s strongest argument is against using the Moral Machine experiment to drive the ethical decision making of autonomous machines. By showing that the experiment included irrelevant criteria, and by drawing the distinction between descriptive ethics and normative ethics, he makes a solid case.

His wholesale opposition to the instrumentalisation of ethics discourse is a weaker argument, but he clearly believes that the current ethical approach to building autonomous vehicles is not sound. For scientists building systems that mimic human intelligence, the challenge is to abstract the way humans think; the coming attraction, however, is a future in which ethical decision making is done by autonomous systems.
