Mar 28, 2022

Read time: 4 mins

The Early History of Artificial Intelligence

The concept of artificial intelligence (AI) was introduced as early as the 1950s by pioneering minds like Alan Turing. Since then, AI has advanced from an obscure research pursuit to a universally recognized opportunity. The late Patrick Winston, former Ford Professor of Artificial Intelligence and Computer Science at MIT, analyzes Turing’s rationale for the ‘Turing Test’ and discusses the history of AI, its early role in collective intelligence, and how it has advanced to where it is today.

Transcript

What is intelligence in machines?

Modern thinking about the possibility of intelligent systems all started with Turing’s famous paper in 1950. He, of course, knew that he couldn’t define what intelligence was, so instead he introduced what he called the Turing Test. The idea was that if a human couldn’t tell within five minutes whether he was talking to a computer or a person, then the computer would be said to have passed the Turing Test.

Turing couldn’t imagine the possibility of dealing with speech back in 1950, so he imagined the conversation taking place over a teletype, much like what you would think of as texting today. And that was because Turing knew that he couldn’t actually define what intelligence was. It’s too hard. It’s too slippery. So that’s why he introduced the Turing Test.

But I’ve read that paper many times, and I think that what Turing was really after was not a definition of intelligence or a test for intelligence, but a way to deal with all the objections people had about why it wasn’t going to be possible. What Turing really told us was that serious people can think seriously about computers thinking, and that there’s no reason to doubt that computers will think someday. That day is approaching.

The founders of AI

About 10 years after Turing published his paper in 1950, important laboratories were set up by Marvin Minsky and John McCarthy, and by Allen Newell and Herbert Simon. McCarthy’s approach at Stanford was to start with mathematical logic: he spent his whole life trying to bend logic to his will. Newell and Simon focused on modeling human thinking. They developed systems that solved simple puzzles and worked out simple problems in a manner that they believed was consistent with human experiments. Minsky’s approach was harder to characterize. He believed that no single representation, method, or approach could deliver a full understanding of intelligence.

That was the central message of his seminal 1961 paper, “Steps Toward Artificial Intelligence”. You know, in retrospect, we can think that Turing told us we could do this and that Minsky’s paper told us what to do. So that’s why Turing and Minsky are often regarded as the real pioneers, the real founders of the field of artificial intelligence. Well, in any event, that brings us to what some people call AI’s first wave.

The first applications of AI in machines

Early in the 1960s, James Slagle wrote a program that integrated symbolic expressions. He was trying to model what a freshman does at MIT when they learn that kind of mathematics. Because Slagle’s program performed so impressively, it’s what I consider to be the signature program of AI’s first wave. Its key idea was called ‘problem reduction’. The idea is simple: you just take a hard problem and you break it into simpler problems, and then you break those simpler problems into problems that are still simpler until you’ve got something you can just do. That’s what problem reduction was about, but it’s only one of a cornucopia of ideas that have emerged from AI research.
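To make the idea of problem reduction concrete, here is a minimal sketch in Python, loosely inspired by the symbolic-integration setting Winston describes. The expression encoding, function name, and the small set of rules are illustrative assumptions for this sketch, not Slagle’s actual program; the point is only to show a hard problem being recursively broken into simpler ones until base cases remain.

```python
# A minimal sketch of problem reduction applied to symbolic integration.
# The tuple-based expression format below is an assumption for illustration:
#   ("x",)            -> the variable x
#   ("const", c)      -> the constant c
#   ("pow", n)        -> x**n
#   ("sum", a, b)     -> a + b
#   ("scale", c, a)   -> c * a

def integrate(expr):
    """Integrate expr with respect to x by reducing it to simpler problems."""
    kind = expr[0]
    if kind == "const":               # base case: integral of c is c*x
        return ("scale", expr[1], ("x",))
    if kind == "x":                   # base case: integral of x is x**2 / 2
        return ("scale", 0.5, ("pow", 2))
    if kind == "pow":                 # base case: integral of x**n is x**(n+1)/(n+1)
        n = expr[1]
        return ("scale", 1 / (n + 1), ("pow", n + 1))
    if kind == "sum":                 # reduce: integral of (a + b) = integral of a + integral of b
        return ("sum", integrate(expr[1]), integrate(expr[2]))
    if kind == "scale":               # reduce: integral of c*a = c * integral of a
        return ("scale", expr[1], integrate(expr[2]))
    raise ValueError(f"don't know how to integrate {expr!r}")

# Integrate 3x**2 + 5: the sum is split, the constant factor is pulled out,
# and the remaining pieces hit base cases, yielding x**3 + 5x.
print(integrate(("sum", ("scale", 3, ("pow", 2)), ("const", 5))))
```

The structure mirrors the description above: each rule either solves a problem outright or replaces it with strictly simpler subproblems, and the recursion bottoms out at problems the program can “just do”.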

At MIT, the work of Slagle was quickly followed by other successes, and by 1970 programs understood drawings, they learned from examples, they knew how to build structures, and one even answered questions much like Siri and Alexa do today.