Python Machine Learning
The Complete Guide to Understand Python Machine Learning for Beginners and Artificial Intelligence
Introduction
Congratulations on purchasing Python Machine Learning: How to Learn Machine Learning with Python, The Complete Guide to Understand Python Machine Learning for Beginners and Artificial Intelligence, and thank you for doing so. The following chapters will discuss everything a beginner would want to know about machine learning, artificial intelligence, and Python.
The first chapter is an introduction to machine learning and a history of the field, from its beginnings in the 1940s to where it stands today. The chapter also covers the terms popular in machine learning and artificial intelligence circles and their definitions, so that a beginner can follow the language in the book without much struggle.
The second chapter is about the concept of machine learning. This chapter offers an in-depth explanation of how machines gain the ability to learn and make decisions in ways that resemble human thinking, and the many ways people apply machine learning in various fields. It goes on to explain the key elements of machine learning and describes the types of learning used in artificial intelligence today.
The third chapter is about the mathematical notation for machine learning, where the reader will come to understand the relationship between mathematical notation and machine learning. The chapter also explains the terminology common in machine learning, and it concludes with a roadmap for exploring the field.
The fourth chapter is an introduction to using Python for machine learning, and it explains the basics an individual would need to understand about this excellent coding language. The chapter explains the various stages involved in machine learning with Python, and it contains practical explanations of the core features and functions that make up the language.
The fifth chapter is an explanation of artificial neural networks in machine learning. This chapter goes into detail to show how the human brain is the main inspiration for machine learning and how, with time, machines may gain the ability to reason in ways that resemble human thinking. The chapter explains the meaning of neural networks, the classifiers in Python machine learning, the machine learning models, and the metrics for evaluating machine learning models.
The seventh chapter is about machine learning training models. It covers the process of training machines in Python and goes into detail on the use of linear regression in training. The eighth chapter is about developing machine learning models with Python, from installing the language and loading and summarizing a dataset to evaluating algorithms and making predictions.
The ninth chapter is about training machine learning algorithms for classification, and it explains the steps involved and the processes implemented. The areas covered in this chapter include linear regression, logistic regression, decision trees, random forests, and dimensionality reduction algorithms, among others. The final chapter is about building good training sets for machine learning. It explains how a beginner can gather data, select it, process it, and then convert it.
There are plenty of books on this subject on the market, so thanks again for choosing this one! Every effort was made to ensure it is full of as much useful information as possible; please enjoy it!
Chapter 1: Introduction (A Small History of Machine Learning)
History of Machine Learning
Machine learning is the platform from which we develop neural networks. Currently, there are many applications of the power and capacity of neural networks in everyday life. Humans need these artificial intelligence models to assist with technical, time-demanding tasks, tasks that would otherwise be prone to error because of our biological limitations.
The machines need us as well, to continue their process of learning and to gain more knowledge as technological intelligence improves. When we observe how these improvements and developments go hand in hand, we can only imagine what is in store for the future of humankind. All the leaps and bounds we are currently experiencing had to start somewhere, and that somewhere was not the distant past but within the last century.
Given the progress we have made so far, it would be hard to bet against exponentially massive leaps in technology in the very near future. However, before that fantastical future gets here, let us look back to where it all started and examine how the concepts of machine learning and artificial intelligence came into existence. Here is a rough timeline of events:
The 40s
Two leading experts, one in mathematics (Walter Pitts) and one in neurophysiology (Warren McCulloch), first had the idea of combining their areas of expertise. Both of them realized that the human brain is the center of decision-making, and that neurons govern the mind.
Neurons communicate through electrical impulses. Humans already knew how electricity works, so what if the operation of neurons and electrical circuits had a relationship? The idea of combining the terms neural and network was born then, in 1943.
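To give a flavor of what McCulloch and Pitts were proposing, here is a minimal sketch in Python of a neuron modeled as a simple threshold circuit. The function name, weights, and thresholds below are illustrative assumptions and are not taken from their 1943 paper.

# A minimal sketch of a McCulloch-Pitts-style threshold unit.
# The names and values here are illustrative, not historical.
def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, the unit behaves like a logic gate.
print(threshold_unit([1, 1], [1, 1], threshold=2))  # AND of two inputs -> 1
print(threshold_unit([1, 0], [1, 1], threshold=2))  # AND of two inputs -> 0
print(threshold_unit([1, 0], [1, 1], threshold=1))  # OR of two inputs  -> 1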
The 50s
Inventions born out of World War II dominated this particular decade. Many of the ideas that spanned it came about through the war effort: the earlier conscription of scientific minds had left behind many ideas worth pursuing further. For instance, there was the Turing test, developed by the British genius Alan Turing.
The test was somewhat paradoxical, because for a machine to succeed it had to prove a contrary assumption right. In other words, the machine had to convince a human interrogator that it was a biological human and not a mechanical computing device. The year was 1950.
Two years later, people across the technological field were astonished by the success of a computer programmed to play checkers efficiently. The machine correctly assimilated all the rules relevant to the match, and its program executed its algorithms according to the laws of the game. This was the first instance in which people could believe that a computer could learn from a set of rules while it operated. The achievement was courtesy of Arthur Samuel in 1952.
While these initial developments were groundbreaking in their time, they were just the baby steps the industry needed. To cover an unimaginably long distance, you start with a single step, and these early developments were exactly such steps. They sat at the boundary between early concepts of machine learning and the advanced information technology and computer science of the day. With hindsight, you could view these initial successes as the beginning of programming a computer to perform whatever task you commanded.
That expectation took shape in the creative mind of Frank Rosenblatt. He developed a system of programs capable of identifying and subsequently recognizing various shapes and patterns. He aptly named his invention the Perceptron. It was a machine that operated on a set of rules and had the added ability of recognition. It was mainly due to these factors that his invention is widely credited as the world's first artificial neural network. The year was 1958.
Since the revelation of the Perceptron, many researchers started investing more of their time and resources directly into this relatively new subject of neural networks. Progress on neural networks carried on at a pace that only technological advancements would allow. Neural networking became a popular body of knowledge that attracted a great deal of technical interest from scholarly and intellectual circles.
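For readers who would like a preview of the kind of code the later chapters build up to, here is a minimal sketch of a perceptron-style learning rule in Python. Rosenblatt's original Perceptron was a hardware machine; the function name, the learning rate, and the OR example below are illustrative assumptions rather than details from his design.

# A minimal sketch of a perceptron-style learning rule (illustrative only).
def train_perceptron(samples, labels, epochs=10, learning_rate=0.1):
    """Learn weights and a bias that separate two classes of labeled points."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Predict 1 if the weighted sum plus bias is positive, otherwise 0.
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            # Nudge the weights toward the correct answer whenever the prediction is wrong.
            error = target - prediction
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error
    return weights, bias

# Example: learn the logical OR function from its truth table.
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 1, 1, 1]
weights, bias = train_perceptron(samples, labels)
print(weights, bias)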