MASTERING DEEP LEARNING FUNDAMENTALS WITH PYTHON
The Absolute Ultimate Guide for Beginners To Expert and Step By Step Guide to understand Python Programming Concepts
RICHARD WILSON
Copyright 2019 by Richard Wilson
All rights reserved.
No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the author.
PART I: FUNDAMENTALS OF DEEP LEARNING
In recent years, the growing number of software technologies built around three specialized topics, artificial intelligence, machine learning, and deep learning, has led to increasing interest in all three subjects.
Even though experts working in these fields have not yet reached a consensus on the terminology, new concepts around these three topics emerge every day, and the terms are sometimes used interchangeably and sometimes with distinct meanings.
Artificial intelligence, which has moved into the centre of our lives with advancing technology; machine learning, which governs the decision-making mechanisms of computers; and deep learning, which covers the data analysis and algorithms underlying both, are together becoming a shared field of study for many disciplines. Because artificial intelligence is the older and more general term, the work gathered under that name directly shapes the course of technology.
We can now program devices equipped with processors of varying capability using more than a hundred programming languages. These devices execute the commands we give them through their systems as soon as they receive them. Programmability, which operates almost flawlessly, deserves the largest share of credit for the advancement of technology. Yet no matter how far this capability develops, and no matter how deeply artificial intelligence enters our lives, it still falls short of the human brain.
For example, a computer can easily understand and execute structures built from loops, control statements, or mathematical operations, but current technology remains inadequate when faced with ambiguous expressions that require reasoning. This is where deep learning, which is essential for teaching such systems how to learn, comes into play.
For all of these reasons and needs, the path that starts with artificial intelligence, runs from data processing, algorithms, and data sets to machine learning, continues to image processing and analysis, often described as the pinnacle of artificial intelligence, and ends with deep learning, which encompasses them all, has become a popular topic in recent years.
FUNDAMENTALS OF PROBABILITY
The use of science in decision-making began with probability theory and continued with the extraction of parameters that summarize data. Sample statistics of the available data, such as the mean and the variance, were combined with probability distribution functions from probability theory to test hypotheses. Out of this emerged techniques that are still used in many decision-making processes today, including what many of us know as A/B testing.
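To make these ideas concrete, the short sketch below (not from the book, and using made-up numbers) computes the sample mean and variance of two groups and runs a two-sample t-test with NumPy and SciPy, the kind of calculation that underlies a simple A/B test.

import numpy as np
from scipy import stats

# Hypothetical conversion metrics for two variants of a web page (invented data)
group_a = np.array([0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15])
group_b = np.array([0.14, 0.17, 0.16, 0.18, 0.15, 0.19, 0.17, 0.16])

# Sample statistics that summarize each group
print("mean A:", np.mean(group_a), "variance A:", np.var(group_a, ddof=1))
print("mean B:", np.mean(group_b), "variance B:", np.var(group_b, ddof=1))

# Two-sample t-test: is the difference in means likely to be due to chance?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print("t statistic:", t_stat, "p-value:", p_value)

A small p-value would suggest that the difference between the two variants is unlikely to be explained by chance alone, which is exactly the kind of judgement an A/B test is meant to support.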
Forecasting
The basis of decision-making is prediction. Estimation becomes possible by building a mathematical model that maps inputs to outputs and then using that model in decision making. The first models were simple and naturally linear, because that was enough for the problems of the day: the output was produced by adding and/or subtracting weighted inputs. The regression method can therefore be viewed as the problem of fitting a straight line through the points in the data. The method was later extended to nonlinear functions, in which multiplication and division of inputs are also allowed, so that a curve rather than a line can be passed through the points at hand, making the estimate more accurate. Probability distribution functions were used in this process to model the element of chance in the problem.
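As an illustration of passing first a line and then a curve through data points, here is a minimal sketch using NumPy's polyfit. The data points are invented for the example and are not from the book.

import numpy as np

# A handful of invented data points; y grows roughly like x squared
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 4.2, 8.8, 17.3, 26.0])

# Linear regression: find a, b so that y is approximately a*x + b
a, b = np.polyfit(x, y, deg=1)
print("line:     y =", a, "* x +", b)

# Nonlinear (here: quadratic) regression passes a curve through the points
c2, c1, c0 = np.polyfit(x, y, deg=2)
print("parabola: y =", c2, "* x^2 +", c1, "* x +", c0)

# Using the fitted curve to estimate an unseen value
x_new = 6.0
print("prediction at x = 6:", c2 * x_new**2 + c1 * x_new + c0)

Because the underlying relationship is curved, the quadratic fit tracks the points far more closely than the straight line, which is the point the paragraph above makes about moving from linear to nonlinear models.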
Artificial intelligence
In the 1950s, academic circles working on artificial intelligence, while studying algorithms for learning and problem solving on a computer, were pondering how to find solutions without knowing the distribution function of the data at hand. Living things, after all, can make decisions without knowing any statistics. In this period two basic methods came to the fore: artificial neural networks and decision trees. Neither method was based on statistics. Artificial neural networks simulated the structure of nerve cells and arranged them into a layered network. In this structure, the layer that receives the data is the input layer, the layer that produces the result is the output layer, and the layers hidden between the two are the hidden layers. The period from the late 1980s to the early 2000s is remembered as the golden age of the approach. By the end of this period the complexity of the system grew, and results could no longer be improved once the hidden layers exceeded four or five; in a sense, the process of producing economically useful results slowed down, even if it did not stop. Decision trees gave good results on certain problems, but because they did not scale well algorithmically as the size of the data grew, they remained limited to a narrow set of problems.
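The layered structure described above, an input layer, hidden layers, and an output layer, can be sketched in a few lines of NumPy. This is a forward pass only, with random weights and arbitrary layer sizes chosen for illustration; it does not show the training procedure discussed in the text.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

layer_sizes = [4, 8, 8, 1]          # input, hidden, hidden, output
weights = [rng.normal(size=(m, n))  # one weight matrix per pair of adjacent layers
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through every layer of the network."""
    activation = x
    for w, b in zip(weights, biases):
        activation = sigmoid(activation @ w + b)
    return activation

x = rng.normal(size=4)              # a single made-up input example
print("network output:", forward(x))

Each layer simply multiplies its input by a weight matrix, adds a bias, and applies a nonlinearity; stacking more such layers is what "deep" refers to in deep learning.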
Although some advanced nonlinear methods were produced in the world of statistics, no general progress was made that could be applied broadly to the problems at hand. For examples of such advanced techniques, the nonlinear methods developed for time series can be examined.
Multi-dimensional spaces, reused in a new way
At the point where artificial neural networks (ANNs) had stalled in the late 1990s, the so-called Support Vector Machine (SVM) emerged as a promising way of dealing with the complexity that ANNs could not cope with. The method tackles complex problems by using mathematical space structures and functions that allow transitions between those spaces. Artificial neural networks also required processing in multi-dimensional spaces, of course, but they involved no transition between spaces. Kernel functions, which act rather like the portals between dimensions familiar from science-fiction films, were introduced and applied to many problems. The main reason they work is that a complex problem in space A, one that requires a nonlinear solution, can be handled as a linear problem in a space B reached through a kernel function, so linear methods can be used in that space. The reflection of this in commercial life was a number of products built on SVMs that allowed users to solve complex classification problems. However, as with artificial neural networks, the method's inability to explain its decisions to the user posed a serious obstacle to its spread.
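A minimal sketch of the kernel idea using scikit-learn: the two classes below are not linearly separable in the original plane, but an SVM with an RBF kernel separates them by implicitly working in a higher-dimensional space. The dataset is synthetic and chosen only for illustration.

from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric rings of points: no straight line can separate them in 2D
X, y = make_circles(n_samples=400, noise=0.05, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear kernel stays in the original space; the RBF kernel implicitly
# maps the points into a space where a linear separation exists
linear_svm = SVC(kernel="linear").fit(X_train, y_train)
rbf_svm = SVC(kernel="rbf").fit(X_train, y_train)

print("linear kernel accuracy:", linear_svm.score(X_test, y_test))
print("RBF kernel accuracy:   ", rbf_svm.score(X_test, y_test))

The linear kernel performs close to chance on this data, while the RBF kernel separates the rings almost perfectly, which is the practical payoff of moving the problem into a different space.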
This was the general picture in the mid-2000s, but especially with the spreading wave of big data, these methods fell far short of meeting the need.
Random Matrix Theory
At this very moment, a mathematical method as old as artificial neural networks themselves was rediscovered by a large audience. It was already so well known that it had been applied in many fields, from solid-state physics to chemistry. This method, called Random Matrix Theory, is used by scientists to model the universality they have discovered in complex systems. The theory is closely associated with the Nobel laureate Eugene Wigner.
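As a small illustration of what random matrix theory studies, the sketch below builds a random symmetric matrix and examines its eigenvalues; for large matrices of this kind the eigenvalue distribution approaches Wigner's semicircle law, one of the universal patterns the theory describes. The matrix size and scaling are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Random symmetric matrix, scaled so its eigenvalues fall roughly in [-2, 2]
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)

eigenvalues = np.linalg.eigvalsh(H)   # real eigenvalues of a symmetric matrix
print("smallest eigenvalue:", eigenvalues.min())
print("largest eigenvalue: ", eigenvalues.max())

# The histogram of eigenvalues approximates Wigner's semicircle
hist, edges = np.histogram(eigenvalues, bins=20, range=(-2.0, 2.0))
print("eigenvalue histogram:", hist)

The striking point is universality: the semicircular shape of the eigenvalue distribution does not depend on the fine details of the random entries, which is why the same mathematics appears in fields as different as nuclear physics and chemistry.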