Deep Learning in Python Prerequisites
Master Data Science and Machine Learning with Linear Regression and Logistic Regression in Python
By: The LazyProgrammer ( http://lazyprogrammer.me )
So you want to learn about deep learning and neural networks, but you don't have a clue what machine learning even is. This book is for you.
Perhaps you've already tried to read some tutorials about deep learning, and were just left scratching your head because you did not understand any of it. This book is for you.
Believe the hype. Deep learning is making waves. At the time of this writing (March 2016), Google's AlphaGo program just beat 9-dan professional Go player Lee Sedol at the game of Go, a Chinese board game.
Experts in the field of Artificial Intelligence thought we were 10 years away from achieving a victory against a top professional Go player, but progress seems to have accelerated!
While deep learning is a complex subject, it is not any more difficult to learn than any other machine learning algorithm. I wrote this book to introduce you to the prerequisites of neural networks, so that learning about neural networks in the future will seem like a natural extension of these topics. You will get along fine with undergraduate-level math and programming skill.
All the materials in this book can be downloaded and installed for free. We will use the Python programming language, along with the numerical computing library Numpy.
Unlike other machine learning algorithms, deep learning is particularly powerful because it automatically learns features. That means you don't need to spend your time trying to come up with and test kernels or interaction effects, something only statisticians love to do. Instead, we will eventually let the neural network learn these things for us. Each layer of the neural network is made up of logistic regression units.
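As a minimal sketch (the array names and shapes here are my own, invented for illustration, not taken from the book), a layer of logistic regression units can be written in a few lines of Numpy:

import numpy as np

def sigmoid(a):
    # squash each activation into a probability-like value in (0, 1)
    return 1 / (1 + np.exp(-a))

# hypothetical sizes: 4 samples, 3 input features, 2 logistic units
X = np.random.randn(4, 3)   # inputs, one row per sample
W = np.random.randn(3, 2)   # one weight column per logistic unit
b = np.zeros(2)             # one bias per unit

layer_output = sigmoid(X.dot(W) + b)   # shape (4, 2): each column is one unit's output

Each unit computes exactly what logistic regression computes; stacking such layers is what makes a neural network.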
Do you want a gentle introduction to this dark art, with practical code examples that you can try right away and apply to your own data? Then this book is for you.
This book was designed to contain all the prerequisite information you need for my next book, Deep Learning in Python: Master Data Science and Machine Learning with Modern Neural Networks written in Python, Theano, and TensorFlow.
There are many techniques that you should be comfortable with before diving into deep learning. For example, the backpropagation algorithm is just gradient descent, which is the same technique that is used to solve logistic regression.
The error functions and output functions of a neural network are exactly the same as those used in linear regression and logistic regression. The training process is nearly identical. Thus, learning about linear regression and logistic regression before you embark on your deep learning journey will make things much, much simpler for you.
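To make the overlap concrete, here is a minimal sketch (the function and variable names are mine, not the book's) of the sigmoid output function and cross-entropy error function that logistic regression and neural networks share:

import numpy as np

def sigmoid(a):
    # output function: maps activations to probabilities in (0, 1)
    return 1 / (1 + np.exp(-a))

def cross_entropy(T, Y):
    # error function: T holds the 0/1 targets, Y the predicted probabilities
    return -np.mean(T * np.log(Y) + (1 - T) * np.log(1 - Y))

# hypothetical usage
T = np.array([1.0, 0.0, 1.0])
Y = sigmoid(np.array([2.0, -1.0, 0.5]))
print(cross_entropy(T, Y))

Gradient descent then adjusts the model's weights to make this error as small as possible, for logistic regression and deep networks alike.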
Computer programs typically follow very deterministic processes.
IF THIS
THEN THAT
This is desired behavior for most programs. You wouldn't want a human doing arithmetic for you, or your operating system making human errors when you're trying to get your work done.
One very useful application of computer programs is modeling or simulation. You can write physics simulations and models and use them in video games to produce realistic graphics and motion. We can do simulations using the equations of fluid mechanics to determine how an airplane with a new design would move through the air, without actually building it.
This leads us to an interesting question: Can we model the brain?
The brain is a complex object, but we have decades of research that tells us how it works. The brain is made up of neurons that send electrical and chemical signals to each other.
We can certainly do electrical circuit simulations. We can do simulations of chemical reactions. We have circuit models of the neuron that simulate its behavior pretty accurately. So why can't we just hook these up and make a brain?
Realize that there is a heavy assumption here: that there is no soul, and that your consciousness is merely the product of very specifically organized biological circuits.
Whether or not that is true remains to be seen.
This is a very high-level view that kind of disappears when you study machine learning. The study of machine learning involves lots of math and optimization (finding the minimum or maximum of a function).
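As a toy illustration of "finding the minimum of a function" (an example of my own, not from the book), gradient descent can minimize f(w) = w**2, whose minimum is at w = 0:

w = 5.0                         # arbitrary starting point
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * 2 * w  # the derivative of w**2 is 2*w
print(w)                        # approximately 0, the minimizer

Machine learning does the same thing, just with many weights at once and with an error function in place of w**2.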
Another way to think of machine learning is that it is pattern recognition.
In human terms you would think of this as learning by example.
You learn that 1 + 1 = 2. 2 + 2 = 4. 1 + 2 = 3. And so on. You begin to figure out the pattern and then you learn to generalize that pattern to new problems.
You don't need to re-learn how to add 1000 + 1000; you just know how to add.
This is what machine learning tries to achieve.
In less abstract terms, machine learning very often works as follows:
You have a set of training samples:
X = {x_1, x_2, ..., x_N}
Y = {y_1, y_2, ..., y_N}
We call X the inputs and Y the outputs. Sometimes we call Y the targets or labels, and name it T instead of Y.
These come in pairs, so y_1 is the output when the input is x_1, and so on.
We hope that, given enough examples of x's and y's, our machine learning algorithm will learn the pattern.
Then, when we later input a new x_new into our model, we hope that the y_new that it outputs is accurate.
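Here is a tiny end-to-end example of that hope (the data is invented, and I use a straight-line fit purely for illustration):

import numpy as np

# training pairs: each y_i is the output for input x_i (here, y = 2x + 1)
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([3.0, 5.0, 7.0, 9.0])

# fit a line with Numpy's least-squares helper
slope, intercept = np.polyfit(X, Y, 1)

x_new = 10.0
y_new = slope * x_new + intercept   # prediction for an input we never trained on
print(y_new)                        # approximately 21.0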
Note that there are other types of learning, but what I described above, where we are given X and try to make it so that we can predict Y accurately, is called supervised learning.
There is a type of machine learning called unsupervised learning where we try to learn the distribution of the data (we are just given X). Clustering algorithms are an example of unsupervised learning. Principal components analysis is another example. While deep learning and neural networks can be used to do unsupervised learning and they are indeed very useful in that context, unsupervised learning doesn't really come into play with linear regression or logistic regression.
Another way to view machine learning is that we are trying to accurately model a system.
As an example, think of your brain driving a car. The inputs are the environment. The outputs are how you steer the car.
X → [real world system] → Y
An automated system to drive the car would be a program that outputs the best Ys.
X → [machine learning model] → Y_prediction
We hope that after training or learning or fitting, Y_prediction is approximately equal to Y.
To look at this from an API perspective, all supervised machine learning algorithms have 2 functions:
train(X, Y) where the model is adjusted to be able to predict Y accurately.
predict(X) where the model makes a prediction for each input it is given.
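A hedged sketch of that two-function API (the class and the nearest-neighbor rule inside it are my own invention; any supervised model could sit behind the same two methods):

import numpy as np

class NearestNeighborClassifier:
    def train(self, X, Y):
        # "training" here just memorizes the labeled examples
        self.X = np.asarray(X, dtype=float)
        self.Y = np.asarray(Y)

    def predict(self, X):
        # for each input, return the label of the closest training sample
        X = np.asarray(X, dtype=float)
        predictions = []
        for x in X:
            distances = np.sum((self.X - x) ** 2, axis=1)
            predictions.append(self.Y[np.argmin(distances)])
        return np.array(predictions)

# hypothetical usage
model = NearestNeighborClassifier()
model.train([[0, 0], [1, 1]], [0, 1])
print(model.predict([[0.1, 0.2]]))   # [0]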
Within supervised learning there are 2 distinct tasks: classification and regression.
Classification is making predictions that are categories.
For example, the famous MNIST dataset is a set of images of handwritten digits that are labeled 0 to 9.
A similar example is character recognition. This task is harder because you not only have to classify all the digits from 0 to 9, but all the uppercase and lowercase letters as well.
Another example is binary classification: given some measurements taken from a blood test, determine whether or not a person has a disease.
Binary classification, as its name suggests, always outputs 1 of only 2 categories.
Regression is making real-valued predictions, i.e., predicting a number.
You might think that because the MNIST labels are numbers, the task is regression. This is not true!
In the MNIST problem the numbers are just labels. 8 is not closer to 9 than 7 is. They are all just distinct, unrelated labels.
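To see the distinction in code (my own illustration, not the book's), a classifier treats the label 8 as a category, often encoded as a one-hot vector with no notion of distance between labels:

import numpy as np

label = 8                 # an MNIST label: a category, not a quantity
one_hot = np.zeros(10)
one_hot[label] = 1.0
print(one_hot)            # [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]

# every one-hot vector is equally far from every other one, so the label 8
# is no "closer" to 9 than 7 is; a regression target, by contrast, is a
# real number where 8.0 genuinely is closer to 9.0 than 7.0 is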