This chapter provides a straightforward introduction to artificial intelligence (AI), which in turn provides a framework for understanding what AI is all about and why it is such an exciting and rapidly evolving field of study. Let's start with some historical facts about the origins of AI.
AI Historical Origins
Remarkably, AI, or something akin to it, has been around for a very long time. It has been recorded that ancient Greek philosophers discussed automatons, or machines with inherent intelligence. In 1517, the Prague Golem was created; it is shown in Figure 1-1.
The Golem was made of clay, but according to Jewish folklore, it could be animated to carry out various acts of vengeance and retribution against parties responsible for anti-Semitic acts.
René Descartes, the famous French philosopher, wrote in 1637 about the impossibility of machine intelligence in his treatise Discourse on the Method. Descartes was not advocating AI, but the treatise does show that the idea was on his mind.
A more fanciful AI experiment example (or, more appropriately stated, a hoax) is an automated chess player that made the rounds in Europe from the late 18th to the mid-19th centuries. It was known as The Turk. A lithograph of it on a modern stamp is shown in Figure 1-2.
Figure 1-2.
Automated chess player
It was purported to be an intelligent machine that could play a game of chess against a human opponent. In reality, a human chess player was jammed into the machine's supporting box. He operated manipulators to move the machine's chess pieces. Presumably, a miniature periscope or peephole allowed this hidden chess player to surveil the chessboard. The odd name The Turk comes from the German word Schachtürke, which means automaton chess player. The typical human chess master hidden in the box was so skilled that he often won matches against notable opponents, including Napoleon Bonaparte and Benjamin Franklin. It was not until many years later that a real machine could actually play a reasonable game of chess.
The advent of a scientific AI approach waited until 1943 and the publication of a paper by McCulloch and Pitts, in which they described the artificial neuron, a mathematical model based on the real biological brain cells called neurons. In their paper, they described how neurons fire in an all-or-nothing, binary fashion, similar to electronic binary circuits. They also went well beyond that simple comparison to show how such cells could dynamically change their function over time, essentially creating rudimentary behavioral actions. This seminal paper was the first in a long series that established the important AI research area of neural networks. I discuss this topic in greater detail in a later chapter.
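To make the binary-firing idea concrete, here is a minimal sketch of a McCulloch-Pitts style threshold neuron in Python. The weights and threshold values are my own illustrative choices, not values from the 1943 paper.

```python
# Minimal sketch of a McCulloch-Pitts style threshold neuron.
# The weights and threshold below are illustrative assumptions,
# not values from the original 1943 paper.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these settings, the neuron behaves like a two-input AND gate:
# it fires only when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], threshold=2))
```

Simple as it is, this all-or-nothing unit captures the paper's key comparison between biological neurons and binary circuits.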
In 1947, Alan M. Turing wrote:
In my opinion, this problem of making a large memory available at reasonably short notice is much more important than doing operations such as multiplication at high speed. Speed is necessary if the machine is to work fast enough for [it] to be commercially valuable, but a large storage is necessary if it is to be capable of anything more than rather trivial operations. The storage capability is therefore the more fundamental requirement.
Turing, whom many readers may recognize as the genius behind the effort to break the German Enigma cipher, an effort that considerably shortened the duration of WWII, also recognized in this short paragraph that any future machine intelligence would be predicated on having sufficient machine memory available, not solely on computing speed. I have more to say about Turing a bit later in this chapter, when the Turing test is discussed.
In 1951, a young mathematics PhD candidate named Marvin Minsky, along with Dean Edmonds, designed and built an analog computer based on the artificial neurons described in the McCulloch and Pitts paper. This computer was named the Stochastic Neural Analog Reinforcement Computer (SNARC). It consisted of 40 vacuum-tube neuron modules, which in turn controlled many additional valves, motors, gears, clutches, and actuators. The system was a randomly connected network of Hebb synapses that made up a neural-network learning machine, possibly the first artificial self-learning machine. It successfully modeled the behavior of a rat traversing a maze in search of food, exhibiting rudimentary learning behaviors that allowed the simulated rat to eventually negotiate the maze.
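The Hebb synapses just mentioned follow Donald Hebb's rule that connections between cells that are repeatedly active together grow stronger. The following is a minimal sketch of that rule, with an assumed learning rate and activity values chosen purely for illustration; it is not a model of the SNARC hardware itself.

```python
# Minimal sketch of Hebb's learning rule: a synaptic weight grows in
# proportion to the product of pre- and post-synaptic activity.
# The learning rate and activity values are illustrative assumptions.

LEARNING_RATE = 0.1

def hebb_update(weight, pre_activity, post_activity):
    """Strengthen the connection when both cells are active together."""
    return weight + LEARNING_RATE * pre_activity * post_activity

weight = 0.0
for step in range(5):
    # Both cells are active (1.0) on every step, so the weight keeps growing.
    weight = hebb_update(weight, 1.0, 1.0)
    print(f"step {step}: weight = {weight:.1f}")
```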
A real turning point in AI progress came in 1956, at an AI conference held at Dartmouth College. The meeting was held at the behest of Minsky, John McCarthy, and Claude Shannon to explore the new field of AI. Shannon has often been referred to as the father of information theory in recognition of the brilliant work he accomplished at the prestigious Bell Telephone Laboratories in New Jersey.
John McCarthy was no slouch either: he was the first to use the phrase artificial intelligence, and he created the Lisp programming language family. He was a significant influence on the design of the ALGOL programming language, and he also contributed significantly to the concept of computer timesharing, which makes modern computer networks possible. Minsky and McCarthy were also cofounders of the MIT Artificial Intelligence Laboratory, now part of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
Returning to the 1956 conference: McCarthy stated this now classic definition of AI, which, as far as I know, remains the gold standard most people use when asked to define AI:
It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
McCarthy used the phrase human intelligence in this definition, which I explore further a little later in this chapter. Many other fundamental AI concepts were also set forth at this conference; they are beyond the scope of this book, but I urge interested readers to explore them further.
The 1960s was a very progressive decade in terms of AI research. Arguably, the work of Newell and Simon in detailing the General Problem Solver algorithm stands out. This approach used both computer and human problem-solving techniques. Unfortunately, computer development was still evolving, and the memory and speed capabilities needed to efficiently handle the algorithm's requirements were simply not present. (Remember Turing's warning, discussed earlier.) The General Problem Solver project was eventually abandoned, not because it was theoretically incorrect, but because the hardware needed to implement it was simply not available.
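At the heart of the General Problem Solver was means-ends analysis: compare the current state with the goal, and apply an operator that reduces the difference between them. The following is a highly simplified sketch of that loop; the states and operators are toy examples I invented for illustration, not Newell and Simon's original formulation.

```python
# Highly simplified sketch of means-ends analysis, the strategy behind
# Newell and Simon's General Problem Solver. The states and operators
# here are invented toy examples, not the original formulation.

def difference(state, goal):
    """Count the goal conditions not yet satisfied."""
    return len(goal - state)

def means_ends(state, goal, operators):
    while difference(state, goal) > 0:
        # Among applicable operators, pick the one whose result is
        # closest to the goal (i.e., most reduces the difference).
        best = min(
            (op for op in operators if op["pre"] <= state),
            key=lambda op: difference((state | op["add"]) - op["delete"], goal),
        )
        state = (state | best["add"]) - best["delete"]
        print("applied", best["name"], "->", sorted(state))
    return state

operators = [
    {"name": "pick_up_key", "pre": {"at_door"}, "add": {"has_key"}, "delete": set()},
    {"name": "unlock_door", "pre": {"has_key"}, "add": {"door_open"}, "delete": set()},
]
means_ends({"at_door"}, {"door_open"}, operators)
```

Even this toy version hints at the hardware problem: on realistic tasks, the space of states and operators to track quickly exceeded the memory available on 1960s machines.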
Another significant AI contribution during the 1960s was Lotfi Zadeh's introduction of fuzzy sets and fuzzy logic, which became the foundation of the impressive AI branch known as fuzzy logic. Zadeh argued that computers do not necessarily have to behave in a precise and discrete logical pattern, but can instead take a more human-like, fuzzy approach. I present an interesting fuzzy logic project in a later chapter, with a concrete contrast between the two approaches sketched below.
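Here is a small sketch contrasting crisp (classical) logic with fuzzy membership. The temperature breakpoints are my own illustrative assumptions, chosen only to show the difference in behavior.

```python
# Minimal sketch contrasting crisp and fuzzy logic. The temperature
# breakpoints (a crisp cutoff at 30 C, a fuzzy ramp from 25 C to 35 C)
# are illustrative assumptions.

def crisp_hot(temp_c):
    """Classical logic: a temperature is either hot or it is not."""
    return temp_c >= 30

def fuzzy_hot(temp_c):
    """Fuzzy logic: membership in 'hot' ramps from 0.0 at 25 C to 1.0 at 35 C."""
    if temp_c <= 25:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 25) / 10.0

for t in (20, 28, 31, 40):
    print(f"{t} C: crisp={crisp_hot(t)}, fuzzy membership={fuzzy_hot(t):.1f}")
```

Where crisp logic flips abruptly from "not hot" to "hot" at the cutoff, the fuzzy version assigns a degree of membership, which is closer to how people actually reason about warmth.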