Artificial Intelligence (AI) is seemingly everywhere. From redefining education to upending healthcare, AI has become the hard-to-ignore technology that tech companies cannot stop talking about. But while AI is being discussed at length right now, it is not a modern phenomenon. The ideas behind AI have been maturing for centuries into the transformational technology it is today.
The term artificial intelligence was first coined by John McCarthy in 1956 when he held the first academic conference on the subject at Dartmouth College. Even before the term existed, researchers and computer scientists had been laying the groundwork for AI to become a dominant field in computer science.
Before AI became the talk of every tech corner the way sourdough became a pandemic staple, the field grew through machine learning, computer vision, natural language processing, speech recognition and robotics. With all of these now recognised as different forms of AI, let us look at some of the major milestones that were crucial in the evolution of AI.
250 BC: World’s first artificial automatic self-regulatory system
Greek inventor and mathematician Ctesibius invented the first automatic self-regulatory system by designing an improved water clock in 250 BC. Called a clepsydra, it required no outside intervention between the feedback and the controls of the mechanism. It kept more accurate time than any other clock of its era by ensuring that the container used in the water clock remained full.
From 380 BC to late 16th century: mechanised humans and automatons
These can be described as the primitive years for artificial intelligence, as Aristotle described the syllogism, a method of formal, mechanical thought and theory of knowledge, in The Organon. It was followed by Heron of Alexandria creating mechanical men and other automatons in the 1st century. This period also saw many theologians, mathematicians and philosophers publish materials on mechanical techniques.
1726: Description of the Engine
One of the consequential events leading to the development of AI as we know it now can be traced back to 1726 when Jonathan Swift published Gulliver’s Travels. The novel includes the description of the Engine as a machine on the island of Laputa. It was described as a project for improving speculative knowledge by practical and mechanical operations, which is in essence similar to what a computer does.
From 1822 to 1863: Mechanical machines and semantics
This period saw pioneers like Charles Babbage and Ada Lovelace work on programmable mechanical calculating machines. It was also the period when the first modern attempt to formalise semantics was made by mathematician Bernard Bolzano. George Boole invented Boolean algebra, while Samuel Butler suggested that Darwinian evolution also applies to machines. His speculation that machines would one day become conscious and eventually supplant humanity foreshadows exactly what modern tech companies are chasing right now.
1921: Rossum’s Universal Robots opens in London
Czech playwright Karel Čapek’s science fiction play R.U.R. (Rossum’s Universal Robots) opened in London. The play introduced the idea of factory-made artificial people who came to be known as robots. This was the first use of the word robot in English and led many people to adopt the robot concept in their own art and research.
1927: Release of science-fiction film Metropolis
The science-fiction film Metropolis, released in 1927, featured a robot double of a girl named Maria who unleashes chaos in the Berlin of 2026. It was the first depiction of a robot on film and is the inspiration behind C-3PO in Star Wars.
1929: Makoto Nishimura designs Gakutensoku
Makoto Nishimura designed Gakutensoku, Japanese for “learning from the laws of nature”, in 1929 as the first robot built in Japan. It was able to change its facial expression and move its head and hands with the help of an air pressure mechanism.
1939: Development of Atanasoff Berry Computer (ABC)
At Iowa State University, the Atanasoff-Berry Computer (ABC) was developed as an early electronic digital computer by the inventor and physicist John Vincent Atanasoff with his graduate student Clifford Berry in 1939. The computer weighed more than 700 pounds and was capable of solving up to 29 simultaneous linear equations.
1943: A Logical Calculus of the Ideas Immanent in Nervous Activity
In 1943, Warren S. McCulloch and Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the Bulletin of Mathematical Biophysics. This influential paper discussed networks of idealised and simplified artificial neurons and how they might perform simple logical functions.
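As a rough illustration (not code from the paper), the idealised neuron McCulloch and Pitts described can be sketched as a simple threshold unit; the thresholds below are illustrative:

```python
def mcp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: fires (outputs 1) if and only if the
    number of active binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Simple logical functions as threshold units over two binary inputs.
AND = lambda a, b: mcp_neuron([a, b], threshold=2)  # both inputs must fire
OR = lambda a, b: mcp_neuron([a, b], threshold=1)   # any input suffices
```

Networks of such units, the paper argued, can compute any logical function expressible in their calculus.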
1949: Giant Brains: Or Machines That Think
Edmund Berkeley published Giant Brains: Or Machines That Think in 1949, which highlighted how machines had become adept at effectively handling large amounts of information. The book also compared the ability of these machines to that of the human brain and concluded that machines “can think”.
1950: Programming a Computer for Playing Chess
In 1950, Claude Shannon published “Programming a Computer for Playing Chess”, the first article on creating a chess-playing computer. An American mathematician, electrical engineer and cryptographer, Shannon is known as the father of information theory.
1950: Alan Turing proposes the imitation game
This one needs no introduction. In 1950, Alan Turing published “Computing Machinery and Intelligence” and proposed the idea of “the imitation game”. Later known as “the Turing Test”, the proposal examined a machine’s capability to think like a human. The test remains an integral element in the field of AI even today.
1952: First computer checkers-playing program
In 1952, Arthur Samuel developed the first computer checkers-playing program and the first computer program to learn on its own. It was the first program with the ability to compete against human players in the game of checkers.
1955: The term “artificial intelligence” is coined
The term “artificial intelligence” was coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM) and Claude Shannon (Bell Telephone Laboratories) on 31 August 1955. The workshop, which took place a year later, is generally considered the official birthdate of the new field.
1955: Logic Theorist, the first AI program
In December 1955, Herbert Simon and Allen Newell developed the Logic Theorist, the first artificial intelligence program. It eventually proved 38 of the first 52 theorems in Whitehead and Russell’s Principia Mathematica.
1957: Frank Rosenblatt develops the Perceptron
Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network, was developed by Frank Rosenblatt in 1957.
1958: John McCarthy develops Lisp
In 1958, John McCarthy developed the programming language Lisp, which became the most popular programming language used in artificial intelligence research.
1959: Arthur Samuel coins the term “machine learning”
Arthur Samuel coined the term “machine learning” in 1959, in a paper reporting on programming a computer so “that it will learn to play a better game of checkers than can be played by the person who wrote the program”.
1959: Programs with Common Sense
In 1959, John McCarthy published “Programs with Common Sense” in the Proceedings of the Symposium on Mechanisation of Thought Processes. He described Advice Taker as a program for solving problems by manipulating sentences in formal languages with the ultimate objective of making programs “that learn from their experience as effectively as humans do.”
1961: Unimate, the first industrial robot
Unimate became the first industrial robot to work on an assembly line, at a General Motors plant in New Jersey, in 1961.
1961: James Slagle develops SAINT
In 1961, James Slagle developed SAINT (Symbolic Automatic INTegrator), a heuristic program that solved symbolic integration problems in freshman calculus.
1964: Daniel Bobrow develops STUDENT
In 1964, Daniel Bobrow completed his MIT PhD dissertation titled “Natural Language Input for a Computer Problem Solving System” and developed STUDENT, a natural language understanding computer program written in Lisp. The program was designed for reading and solving algebra word problems.
1965: Joseph Weizenbaum develops ELIZA
In 1965, Joseph Weizenbaum developed ELIZA, an interactive program that carries on a dialogue in English on any topic. Professor Weizenbaum designed it as a parody but was surprised by the number of people who attributed human-like feelings to the computer program.
1966: Shakey, the first general-purpose mobile robot
The year 1966 saw the creation of the first general-purpose mobile robot, named Shakey. The project lasted from 1966 to 1972 and linked the different AI fields with navigation and computer vision. The robot now resides in the Computer History Museum.
1968: Release of 2001: A Space Odyssey
The film 2001: A Space Odyssey, directed by Stanley Kubrick, was released in 1968 and placed AI into the mainstream. The film featured HAL (Heuristically programmed ALgorithmic computer), a sentient computer often cited as the inspiration behind voice assistants like Siri and Alexa.
1968: Terry Winograd develops SHRDLU
Terry Winograd developed SHRDLU in 1968 as an early natural language understanding computer program.
1970: WABOT-1, the first anthropomorphic robot
In 1970, the first anthropomorphic robot – WABOT-1 – was built at Waseda University in Japan. It consisted of a limb-control system, a vision system and a conversation system.
1973: The Lighthill report
In 1973, James Lighthill reported to the British Science Research Council on the state of artificial intelligence research. He concluded that “in no part of the field have discoveries made so far produced the major impact that was then promised”, and the report led to reduced government support for AI research.
1976: Speech Recognition by Machine: A Review
Computer scientist Raj Reddy published “Speech Recognition by Machine: A Review” in the Proceedings of the IEEE in 1976, summarising the early work on speech recognition and natural language processing (NLP).
1977: Release of Star Wars: A New Hope
Star Wars: A New Hope, directed by George Lucas, was released in 1977 and imagined a humanoid robot in the form of C-3PO, alongside R2-D2, an astromech droid that could interact through electronic beeps.
1979: The Stanford Cart
The Stanford Cart became one of the earliest examples of an autonomous vehicle in 1979. It successfully crossed a chair-filled room without human intervention in about five hours.
1980: Waseda University builds Wabot-2
The year 1980 saw Waseda University in Japan build Wabot-2, a musician humanoid robot able to communicate with a person, read a musical score and play tunes of average difficulty on an electronic organ.
1981: Japan’s Fifth Generation Computer project
In 1981, the Japanese Ministry of International Trade and Industry allotted $850M for the Fifth Generation Computer project. The project aimed to develop computers that could translate languages, execute conversations, interpret pictures, and reason like human beings.
1984: Release of Electric Dreams
Electric Dreams, a film about a love triangle between a man, a woman and a personal computer, was released in 1984. The film, directed by Steve Barron, showed how computers could become indistinguishable from humans.
1984: Roger Schank and Marvin Minsky warn of an AI winter
Before “winter is coming” was made popular by Game of Thrones, Roger Schank and Marvin Minsky warned of a coming AI winter. At the 1984 annual meeting of AAAI, they predicted an imminent bursting of the AI bubble, and their warning proved accurate within the next three years.
1986: The first driverless car
A Mercedes-Benz van equipped with cameras and sensors became the first driverless car in 1986. It was built at Bundeswehr University in Munich under the direction of Ernst Dickmanns and was capable of driving at up to 55 mph on empty streets.
1986: Learning representations by back-propagating errors
In October 1986, David Rumelhart, Geoffrey Hinton and Ronald Williams published “Learning representations by back-propagating errors”. They described “a new learning procedure, back-propagation, for networks of neuron-like units.”
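The idea behind back-propagation — pushing the error derivative backwards through the layers to update every weight — can be illustrated with a toy network. This sketch (a 2-2-1 sigmoid network trained on XOR in plain Python, with an arbitrary learning rate and epoch count) is an illustration of the technique, not the paper’s code:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny 2-2-1 network trained on XOR with plain gradient descent.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in XOR)

before = loss()
lr = 0.5
for _ in range(5000):
    for x, t in XOR:
        h, y = forward(x)
        # Backward pass: chain rule applied layer by layer.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # use W2[j] before updating it
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
after = loss()
```

The point of the 1986 paper was that this procedure lets hidden units discover useful internal representations — here, the hidden layer learns features that make XOR, which no single-layer perceptron can solve, separable.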
1988: Probabilistic Reasoning in Intelligent Systems
Computer scientist Judea Pearl published Probabilistic Reasoning in Intelligent Systems in 1988. He was later awarded the 2011 Turing Award for creating “the representational and computational foundation for the processing of information under uncertainty.”
1988: Rollo Carpenter develops Jabberwacky
In 1988, Rollo Carpenter developed the chatbot Jabberwacky to “simulate natural human chat in an interesting, entertaining and humorous manner.” This was one of the earliest attempts at creating artificial intelligence through human interaction.
1990: Elephants Don’t Play Chess
In 1990, Rodney Brooks published “Elephants Don’t Play Chess” and proposed a new approach to AI: building intelligent systems from scratch, grounded in ongoing physical interaction with the environment.
1993: The Coming Technological Singularity
The year 1993 saw the publication of “The Coming Technological Singularity” by Vernor Vinge. Vinge predicted that within thirty years we would have the technological means to create superhuman intelligence, and claimed that the human era would end shortly thereafter.
1995: Richard Wallace develops A.L.I.C.E.
Inspired by Joseph Weizenbaum’s ELIZA program, Richard Wallace developed the chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) in 1995. What differentiated the two was A.L.I.C.E.’s collection of natural language sample data on an unprecedented scale, enabled by the advent of the web.
1997: Long Short-Term Memory (LSTM)
Jürgen Schmidhuber and Sepp Hochreiter proposed Long Short-Term Memory (LSTM), a type of recurrent neural network, in 1997. It is used today in handwriting recognition and speech recognition.
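What distinguishes an LSTM from a plain recurrent network is its gated cell state, which lets information persist over long sequences. The gating can be sketched for scalar inputs and states; the weights below are arbitrary placeholders, purely for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of an LSTM cell with scalar input and state.
    w maps each gate name to an (input weight, recurrent weight, bias) triple."""
    def gate(name, squash):
        wx, wh, b = w[name]
        return squash(wx * x + wh * h_prev + b)
    f = gate("forget", sigmoid)   # how much of the old cell state to keep
    i = gate("input", sigmoid)    # how much of the candidate to write
    g = gate("cand", math.tanh)   # candidate value
    o = gate("output", sigmoid)   # how much of the cell state to expose
    c = f * c_prev + i * g        # cell state: the "long-term" memory
    h = o * math.tanh(c)          # hidden state: the output at this step
    return h, c

# Placeholder weights, identical for every gate (illustration only).
weights = {k: (0.5, 0.5, 0.0) for k in ("forget", "input", "cand", "output")}
h, c = lstm_step(1.0, 0.0, 0.0, weights)
```

Because the cell state flows forward through addition rather than repeated squashing, gradients survive over many steps — the vanishing-gradient problem the design was meant to address.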
1997: Deep Blue beats the world chess champion
In 1997, IBM’s Deep Blue became the first computer chess-playing program to beat a reigning world chess champion, Garry Kasparov.
1998: Furby, the first domestic robot
Dave Hampton and Caleb Chung created Furby, the first domestic or pet robot, in 1998.
2000: Cynthia Breazeal describes Kismet
In 2000, Cynthia Breazeal at MIT published her dissertation on sociable machines and described Kismet, a robot that could recognise and simulate emotions.
2001: Release of A.I. Artificial Intelligence
In 2001, Steven Spielberg released A.I. Artificial Intelligence, a film about David, a childlike android programmed with the ability to love.
2004: The first DARPA Grand Challenge
The year 2004 saw the first DARPA Grand Challenge, a prize competition for autonomous vehicles, held in the Mojave Desert. None of the autonomous vehicles finished the 150-mile route.
2006: The term “machine reading” is coined
Oren Etzioni, Michele Banko and Michael Cafarella coined the term “machine reading” in 2006, describing it as an inherently unsupervised “autonomous understanding of text.”
2006: Learning Multiple Layers of Representation
The year 2006 also saw the publication of “Learning Multiple Layers of Representation” by Geoffrey Hinton, summarising the ideas that led to “multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it”. This forms the basis of an approach to deep learning.
2007: Fei-Fei Li starts assembling ImageNet
In 2007, Fei-Fei Li and colleagues at Princeton University started to assemble ImageNet, a large database of annotated images designed to aid research in visual object recognition software.
2009: Deep learning on graphics processors
Rajat Raina, Anand Madhavan and Andrew Ng published “Large-scale Deep Unsupervised Learning using Graphics Processors” in 2009. They argued that “modern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionise the applicability of deep unsupervised learning methods.”
2009: Google starts developing a driverless car
Google started developing a driverless car in secret in 2009. In 2012, the project’s car became the first to pass a US state self-driving test, in Nevada.
2011: IBM Watson wins Jeopardy!
In 2011, IBM’s Watson, a natural language question-answering computer, competed on Jeopardy! and defeated champions Ken Jennings and Brad Rutter. The televised game marked AI’s remarkable progress into the centre of human conversations.
2011: Apple releases Siri
Apple released Siri in 2011 as a voice-controlled personal assistant for iPhone users. The voice assistant relies on a natural language user interface to understand and respond to its human users. The release of Siri was followed by the debut of Google Now in 2012 and Microsoft Cortana in 2014.
2012: Google’s neural network learns to detect cats
The year 2012 saw Google researchers Jeff Dean and Andrew Ng report on an experiment in which a very large neural network running on 16,000 processors learned to detect cat images, without any background information, from 10 million unlabeled images randomly taken from YouTube videos.
2015: Open letter against autonomous weapons
In 2015, Elon Musk, Stephen Hawking and Steve Wozniak were among some 3,000 signatories of an open letter calling for a ban on the development and adoption of autonomous weapons for war purposes.
2016: AlphaGo defeats Lee Sedol
Google DeepMind’s AlphaGo defeated Go champion Lee Sedol in 2016. Lee Sedol later retired from professional play in 2019, citing the dominance of AI in the ancient board game.
2016: Hanson Robotics introduces Sophia
In 2016, Hanson Robotics introduced Sophia, a humanoid robot later dubbed the first “robot citizen”. With her likeness to an actual human being and her ability to see, make facial expressions and communicate with the help of AI, Sophia was different from the robots that came before her.
2018: Alibaba’s AI outscores humans on a reading test
In 2018, Alibaba developed an AI model that scored better than humans on a Stanford University reading comprehension test. On a set of 100,000 questions, the AI model scored 82.44 against the 82.30 scored by humans.
2020: OpenAI introduces GPT-3
OpenAI’s GPT-3 was first introduced in May 2020, and beta testing began in June 2020. GPT-3 is a large language model, pre-trained on vast amounts of text, that generates human-like text.