Moravec’s Paradox: Explained in five levels of difficulty with practical examples

From Amazon to Tesla CEO Elon Musk, every major company and tech entrepreneur seems to be talking about robotics. But while they speak about robotics in terms of its impact and efficiency on the production line, the conversation often misses concepts like Moravec’s paradox.

Moravec’s paradox is an observation about the abilities of AI-powered tools, especially robots. While there are many ways to explain the concept, Chelsea Finn, Assistant Professor of Computer Science and Electrical Engineering at Stanford University, explains it at five levels of difficulty.

What is Moravec’s paradox?

In many ways, Moravec’s paradox can be seen as a guiding principle for the design and development of modern robotics. It is the observation, made by artificial intelligence and robotics researchers, that high-level reasoning requires very little computation, while sensorimotor and perception skills require enormous computational resources.

This principle is contrary to traditional assumptions and was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky, and others. “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility,” Moravec wrote in 1988.

For decades, technologists and scientists have argued in favour of reverse-engineering human skills to develop robots. In reality, however, robots have proved very good at problems humans find hard, while struggling to tackle problems humans find easy.

Moravec’s paradox, in essence, argues that the oldest human skills are largely unconscious and thus appear effortless. As a result, these effortless human skills are difficult to reverse-engineer, while skills that require conscious effort are comparatively easy to engineer.

Here’s how Finn explains this paradox to five different people: a child, a teen, a college student, a grad student, and an expert.

Explaining Moravec’s paradox to a child

In the first task, Finn explains Moravec’s paradox to a 6-year-old child as a phenomenon that explains “what is easy and what is hard” for a robot. To illustrate the point, she uses the example of stacking cups, and the child manages to work out what counts as easy and what counts as difficult.

Once the definition is established, Finn explains that stacking two cups is “actually really hard for robots.”

She then asks the child to think about how they might get the robot to stack two cups. To do that, you would program the robot to move its hand to exactly where the edge of the first cup is, then program it to close its hand around the cup, and finally program it to move over to where the second cup is placed and open its hand.

With a real robot in hand, Finn demonstrates how this works; but when she inadvertently moves the cup a little, the robot fails to notice the movement and goes to the same pre-programmed position. In this simple way, Finn demonstrates Moravec’s paradox: things that are simple for humans are actually hard for robots.
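
The failure Finn demonstrates can be sketched in a few lines. The routine below is a hypothetical open-loop program in the style she describes: every position is fixed in advance and nothing is sensed, so the grasp silently misses the moment the cup is nudged. The `Robot` class, `CUP_A_POS`, and `CUP_B_POS` are illustrative stand-ins, not a real robot API.

```python
class Robot:
    """Toy robot: a gripper that can move, close, and open."""

    def __init__(self):
        self.hand = None  # what the gripper is currently holding

    def move_to(self, pos):
        pass  # a real robot would drive its motors here

    def close_hand(self, world, pos):
        # the gripper only grabs a cup if one is actually at `pos`
        self.hand = world.pop(pos, None)

    def open_hand(self):
        released, self.hand = self.hand, None
        return released

CUP_A_POS = (0.30, 0.10)   # where cup A was when the program was written
CUP_B_POS = (0.50, 0.10)

def stack_cups(robot, world):
    robot.move_to(CUP_A_POS)
    robot.close_hand(world, CUP_A_POS)   # blindly grasp at the old spot
    robot.move_to(CUP_B_POS)
    return robot.open_hand()             # None means the grasp missed

# Works while the world matches the program...
print(stack_cups(Robot(), {CUP_A_POS: "cup A", CUP_B_POS: "cup B"}))
# ...but nudge cup A a few centimetres and the same program grasps thin air:
print(stack_cups(Robot(), {(0.33, 0.10): "cup A", CUP_B_POS: "cup B"}))
```

The program never asks where the cup actually is, which is exactly why moving it a little is enough to break the whole routine.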

Explaining Moravec’s paradox to a teen and college student

To explain Moravec’s paradox to a teen, Finn first asks the teenager to pick up a penny with their right hand and put it in their left hand. To make the task a little harder, she next asks the teenager to put on gloves, close their eyes, and again try to pick up the penny with their right hand and put it in their left.

Even though the coin drops on the first attempt, the teenager succeeds on the second. Finn explains that the teenager knew from the sound that the coin had dropped. “When a robot tries to do something like pick up an object, not only do you need to program exactly what the motors should do, the robot also needs to be able to see where the object is. This is what’s called a perception action loop in robotics,” she explains.
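
The loop Finn names can be sketched as follows: instead of replaying fixed positions, the controller re-observes the object on every step and moves toward wherever it currently is. This is only a one-dimensional toy; `observe_cup` is a hypothetical stand-in for a camera or tactile sensor.

```python
def observe_cup(world):
    """Perception: where is the cup right now?"""
    return world["cup"]

def step_towards(hand, target, step=0.05):
    """Action: move the hand a small, bounded amount toward the target."""
    dx = target - hand
    return hand + max(-step, min(step, dx))

def reach_cup(world, hand=0.0, tol=1e-3):
    # the perception-action loop: sense, act a little, sense again
    while abs(observe_cup(world) - hand) > tol:
        hand = step_towards(hand, observe_cup(world))
        # even if the cup moves mid-reach, the next observation corrects us
    return hand

world = {"cup": 0.40}
print(reach_cup(world))  # the hand converges on the cup's actual position
```

Because the target is re-read inside the loop, nudging the cup between iterations simply changes where the next step aims, which is the robustness the open-loop cup-stacking program lacked.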

While explaining to the college student, Finn again uses the example of picking up cups. Instead of walking through the process, she introduces the generalisation gap: the difference between what the robot was trained to do and the new thing it is asked to do.

In a nutshell, Finn explains that a robot is made up of two components: perception (the ability to see and feel) and action (the ability to figure out how to move motorised parts such as an arm). She says the only way robots can advance and come close to matching humans at simple tasks is by training the perception and action components in sync with each other.
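
The two components can be sketched as a pipeline: perception maps a raw observation to an estimate of the world, and action maps that estimate to a motor command. Training them jointly means errors in acting can feed back into how the robot perceives, and vice versa. All names and numbers below are invented for illustration; in practice both functions would be learned models.

```python
def perception(observation):
    """e.g. a vision network; here, a toy estimate of the cup position (m)."""
    return observation["pixel_centroid"] * observation["metres_per_pixel"]

def action(estimated_pos, hand_pos, gain=0.5):
    """e.g. a learned controller; here, a proportional move toward the cup."""
    return gain * (estimated_pos - hand_pos)

def policy(observation, hand_pos):
    # the full pipeline: raw observation -> estimate -> motor command
    return action(perception(observation), hand_pos)

obs = {"pixel_centroid": 120, "metres_per_pixel": 0.0025}  # cup ~0.30 m away
print(policy(obs, hand_pos=0.0))  # command to move partway toward the cup
```

In this framing the generalisation gap shows up when `perception` meets an observation unlike anything in its training data and hands `action` a wrong estimate.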

Explaining Moravec’s paradox to a grad and expert

While Finn does not need to explain Moravec’s paradox to a grad student or an expert, she focuses instead on finding ways to overcome the challenge it describes. Speaking with the grad student, she emphasises the need to offer structure and support to the robot, echoing the sentiment of the first-year PhD student.

She adds that robots can get better at learning new tasks if they can acquire prior knowledge about the world and about interaction from previous data, including offline data. The two also discuss how robots can be trained more effectively with a skill-transfer style of learning.
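
A loose sketch of the prior-data idea: rather than learning each task from scratch, the robot starts from statistics gathered during earlier, offline experience and then refines them with a few online corrections. The logged grasp heights and helper names below are invented for illustration, not from Finn’s talk.

```python
offline_grasps = {          # grasp heights (m) logged from previous tasks
    "mug":    0.09,
    "glass":  0.11,
    "beaker": 0.10,
}

def prior_grasp_height(logs):
    """Transferable prior: the average grasp height over past objects."""
    return sum(logs.values()) / len(logs)

def learn_new_object(prior, corrections, lr=0.5):
    """Refine the transferred prior with a few online corrections."""
    h = prior
    for target in corrections:
        h += lr * (target - h)   # nudge the estimate toward each correction
    return h

prior = prior_grasp_height(offline_grasps)       # starting point ~0.10 m
print(learn_new_object(prior, [0.12, 0.12]))     # adapted after two tries
```

The point of the sketch is the starting position: with a sensible prior, two corrections get close to the right answer, whereas learning from zero would need far more trials.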

Michael Frank, Professor of Psychology at Stanford University, begins by expressing his surprise on learning that robotics starts from the assumption that a computer model cannot recognise objects, because “recognising objects is impossible.”

Frank says that many of the things humans do and take for granted are “learnt in cultural time” or “evolutionary time.” He therefore doesn’t see there being enough data to train a robot to perform the same actions the way humans would.
Both Finn and Frank conclude that we now have AI systems capable of completing higher-level tasks like playing chess. These tasks are easier, they say, because the systems are abstracted away from all the perception and action problems. For now, Moravec’s paradox stands: robots find easy-for-humans tasks difficult, and will thus accompany humans rather than replace them.
