Digital computers and the rise of the information age have revolutionized the modern lifestyle. The invention of digital computers has enabled us to digitize numerous areas of our lives, and this digitalization allows us to outsource to computers many tedious daily tasks that previously required a human. An everyday example is the modern word processing application, with its built-in spell checker that automatically checks documents for spelling and grammar mistakes.
As computers have grown faster and more computationally powerful, we have been able to use them to perform increasingly complex tasks, such as understanding human speech and even predicting the weather with reasonable accuracy. This constant innovation allows us to outsource a growing number of tasks to computers. A present-day computer can execute billions of operations a second, but however technically capable computers become, unless they can learn and adapt themselves to better suit the problems presented to them, they'll always be limited to whatever rules or code we humans write for them.
The field of artificial intelligence, and the subset of it concerned with genetic algorithms, is beginning to tackle some of the more complex problems faced in today's digital world. By applying genetic algorithms to real-world applications, it is possible to solve problems that would be nearly impossible to solve with more traditional computing methods.
What is Artificial Intelligence?
In 1950, Alan Turing, a mathematician and early computer scientist, wrote a famous paper titled "Computing Machinery and Intelligence", in which he asked, "Can machines think?" His question caused much debate about what intelligence actually is and what the fundamental limitations of computers might be.
Many early computer scientists believed that computers would not only be able to demonstrate intelligent behavior, but that they would achieve human-level intelligence after just a few decades of research. This notion was voiced by Herbert A. Simon in 1965 when he declared, "Machines will be capable, within twenty years, of doing any work a man can do." Now, over 50 years later, we know that Simon's prediction was far from reality, but at the time many computer scientists agreed with his position and made it their goal to create a strong AI machine: a machine that is at least as intellectually capable as a human at completing any task it is given.
Today, more than 50 years since Alan Turing's famous question was posed, the question of whether machines will eventually be able to think in a way similar to humans remains largely unanswered. To this day, his paper and his thoughts on what it means to think are still widely debated by philosophers and computer scientists alike.
Although we're still far from creating machines able to replicate human intelligence, we have undoubtedly made significant advances in artificial intelligence over the last few decades. Since the 1950s, the focus on strong AI, the development of artificial intelligence comparable to that of humans, has been shifting in favor of weak AI, the development of more narrowly focused intelligent machines, which is much more achievable in the short term. This narrower focus has allowed computer scientists to create practical and seemingly intelligent systems such as Apple's Siri and Google's self-driving car.
When creating a weak AI system, researchers typically focus on building a system or machine that is only as intelligent as it needs to be to solve a relatively small problem. This means simpler algorithms and less computing power can be used while still achieving results. In comparison, strong AI research focuses on building a machine intelligent and capable enough to tackle any problem that we humans can. This makes building a final product with strong AI much less practical, due to the sheer scope of the problem.
In only a few decades, weak AI systems have become a common component of our modern lifestyle. From playing chess to helping humans fly fighter jets, weak AI systems have proven themselves useful in solving problems once thought solvable only by humans. As digital computers become smaller and more computationally capable, the usefulness of these systems is only likely to grow over time.
Biological Analogies
When early computer scientists were first trying to build artificially intelligent systems, they would frequently look to nature for inspiration on how their algorithms could work. By creating models that mimic processes found in nature, computer scientists were able to give their algorithms the ability to evolve, and even to replicate characteristics of the human brain. It was by implementing these biologically inspired algorithms that early pioneers, for the first time, gave their machines the ability to adapt, learn, and control aspects of their environments.
By using different biological analogies as guiding metaphors for developing artificially intelligent systems, computer scientists created distinct fields of research. Naturally, the different biological systems that inspired each field of research have their own specific advantages and applications. One successful field, and the one we're focusing on in this book, is evolutionary computation, in which genetic algorithms make up the majority of the research. Other fields focus on slightly different areas, such as modeling the human brain. That field of research is called artificial neural networks, and it uses models of the biological nervous system to mimic its learning and data-processing capabilities.
History of Evolutionary Computation
Evolutionary computation was first explored as an optimization tool in the 1950s, when computer scientists were playing with the idea of applying Darwinian principles of biological evolution to a population of candidate solutions. They theorized that it might be possible to apply evolutionary operators such as crossover, an analog of biological reproduction, and mutation, the process by which new genetic information is introduced into the genome. It's these operators, coupled with selection pressure, that give genetic algorithms the ability to evolve new solutions over time.
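For chromosomes represented as bit strings, these two operators can be sketched in just a few lines. This is a minimal illustration rather than code from any particular library; the function names, the single-point crossover scheme, and the default mutation rate are choices made for this example only.

```python
import random

def crossover(parent1, parent2):
    # Single-point crossover: splice two parent bit strings at a random point,
    # mimicking biological reproduction.
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]

def mutate(chromosome, rate=0.01):
    # Flip each bit with a small probability, introducing new genetic
    # information into the population.
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]
```

Selection pressure is the third ingredient: by preferentially choosing fitter individuals as parents, useful genetic material accumulates in the population over successive generations.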
In the 1960s, evolution strategies, an optimization technique applying the ideas of natural selection and evolution, were first proposed by Rechenberg (1965, 1973), and his ideas were later expanded on by Schwefel (1975, 1977). Other computer scientists at the time were working independently on similar fields of research, such as Fogel, L.J.; Owens, A.J.; and Walsh, M.J. (1966), who were the first to introduce the field of evolutionary programming. Their technique involved representing candidate solutions as finite-state machines and applying mutation to create new solutions.
During the 1950s and 1960s, some biologists studying evolution began experimenting with simulating evolution using computers. However, it was Holland, J.H. (1975) who first invented and developed the concept of genetic algorithms during the 1960s and 1970s, finally presenting his ideas in 1975 in his groundbreaking book, Adaptation in Natural and Artificial Systems. Holland's book demonstrated how Darwinian evolution could be abstracted and modeled with computers for use in optimization strategies. It explained how biological chromosomes can be modeled as strings of 1s and 0s, and how populations of these chromosomes can be evolved by implementing techniques found in natural evolution, such as mutation, selection, and crossover.
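Holland's binary-chromosome model is simple enough to sketch end to end. The toy example below, with illustrative names and parameters of our own choosing, evolves bit strings toward the all-1s chromosome (the classic "OneMax" problem) using tournament selection, single-point crossover, and bit-flip mutation; it is a sketch of the general idea, not a reproduction of Holland's original algorithm.

```python
import random

def run_onemax_ga(chrom_len=20, pop_size=30, generations=50, mutation_rate=0.02):
    # Fitness: the number of 1s in the chromosome (the "OneMax" toy problem).
    fitness = lambda c: sum(c)

    # Start with a population of random bit-string chromosomes.
    population = [[random.randint(0, 1) for _ in range(chrom_len)]
                  for _ in range(pop_size)]

    for _ in range(generations):
        new_population = []
        for _ in range(pop_size):
            # Tournament selection: the fitter of two random individuals
            # becomes a parent, applying selection pressure.
            p1 = max(random.sample(population, 2), key=fitness)
            p2 = max(random.sample(population, 2), key=fitness)

            # Single-point crossover between the two parents.
            point = random.randint(1, chrom_len - 1)
            child = p1[:point] + p2[point:]

            # Mutation: flip each bit with a small probability.
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]
            new_population.append(child)
        population = new_population

    # Return the fittest chromosome in the final generation.
    return max(population, key=fitness)
```

Even this bare-bones loop reliably pushes the population toward the all-1s string, illustrating how the combination of selection, crossover, and mutation evolves solutions without any problem-specific rules being coded by hand.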