‘We cannot outsource our responsibilities to machines’: Zeynep Tufekci on machine intelligence and the need for human ethics and values

As artificial intelligence becomes omnipresent and powers a growing number of devices and services, it is becoming increasingly important for tech companies to examine the ethical implications. A paper by Meta AI's Hubert Etienne bluntly called out the autonomous systems behind self-driving cars for not being ethical.

We have now reached a point where we no longer talk about AI systems without also talking about ethics. However, techno-sociologist Zeynep Tufekci has been advocating for ethical AI for nearly half a decade. Around five years ago, in her TED Talk, Tufekci explained why human morals are more important than ever.

After hearing Garry Kasparov say there is no need to fear intelligent machines and that humans instead need to work with them, Tufekci offers a cautionary counterpoint on what could go wrong as the field of AI advances.

Computation used to make subjective decisions

In her TED Talk, Tufekci makes it abundantly clear that machine intelligence is already here. She explains how machine intelligence is being used to make subjective decisions, and how we are increasingly asking computers questions that have no single right answer.

Modern computation is increasingly applied to questions with subjective, open-ended, value-laden answers. This, Tufekci explains, has been made possible by the development of powerful software. She fears that the quest to build powerful software has led to a place where the software is less transparent and more complex.

“Recently, in the past decade, complex algorithms have made great strides. They can recognise human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages,” she says.

She links all this growth and changes to machine learning, a programming method that is different from traditional programming. “It’s more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data,” she explains.

She says the upside of machine learning is that the method is really powerful; the downside is that we don't really understand what the system learned. "It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem," she adds.

Training of machine learning systems

In her TED Talk, Tufekci speaks extensively about scenarios where machine learning could do more harm than good. She uses examples of systems already deployed in the real world without any indication of their ethical approach. She also drills down on how training data could be the root cause of these effects.

She says machine learning systems are usually trained on data generated by our own actions — human imprints. "Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us," Tufekci argues.

She substantiates those claims with research suggesting that women are less likely than men to be shown high-paying job ads on Google, and that searching for African-American names is more likely to bring up ads suggesting a criminal history.
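The mechanism Tufekci describes — a system reflecting and amplifying the biases in its training data — can be shown with a deliberately simplified sketch. The data below is hypothetical and not from the cited research; it only illustrates how a system that "learns" targeting purely from skewed historical logs ends up hard-coding, and even amplifying, that skew.

```python
from collections import Counter

# Hypothetical historical log of which group was shown a high-paying job ad.
# The 80/20 skew is invented purely to illustrate the mechanism.
history = ["group_a"] * 80 + ["group_b"] * 20

def learn_targeting(log):
    """'Learn' a targeting rule: always pick the historically dominant group."""
    return Counter(log).most_common(1)[0][0]

target = learn_targeting(history)
print(target)  # prints "group_a" — an 80/20 skew becomes a 100/0 rule
```

Nothing in this code is malicious; the bias enters entirely through the data, which is why such systems can show biases back to us without anyone having written them in.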

Tufekci says, “Such hidden biases and black-box algorithms that researchers uncover sometimes but sometimes we don’t know, can have life-altering consequences.”

She also brings up the use of algorithms in parole and sentencing decisions. Tufekci calls the use of AI systems and other algorithms in law enforcement a "commercial black box", and ProPublica found its outcomes to be biased and its predictive power to be dismal.

Failure of machine intelligence

While machine learning is becoming ever more powerful, and more systems than ever depend on this technology, machine intelligence can still fail.

Zeynep Tufekci cites the example of IBM's Watson machine, which failed to answer a simple question in the Final Jeopardy round. She says that when machine intelligence fails, it will not fit human error patterns, failing in ways "we won't expect and be prepared for."

She says humans have always had biases, and even decision-makers make mistakes — these are questions we cannot escape. "We cannot outsource our responsibilities to machines," Tufekci says.

With advances in algorithms and computational power, AI systems will only get more capable. However, these AI systems won't come with what Tufekci calls a "Get out of ethics free" card. Machine intelligence is here, she says, and so it is necessary for humans to hold on ever tighter to human values and human ethics.
