Ahmed Menshawy - Deep Learning By Example: A hands-on guide to implementing advanced machine learning algorithms and neural networks


  • Book: Deep Learning By Example: A hands-on guide to implementing advanced machine learning algorithms and neural networks
  • Author: Ahmed Menshawy
  • Publisher: Packt Publishing
  • Year: 2018


Grasp the fundamental concepts of deep learning using TensorFlow in a hands-on manner

Key Features
  • Get first-hand experience of deep learning concepts and techniques with this easy-to-follow guide
  • Train different types of neural networks using TensorFlow for real-world problems in language processing, computer vision, transfer learning, and more
  • Designed for those who believe in learning by doing, this book is a perfect blend of theory and code examples
Book Description

Deep learning is a popular subset of machine learning that allows you to build complex models which train faster and give more accurate predictions. This book is your companion for taking your first steps into the world of deep learning, with hands-on examples to boost your understanding of the topic.

This book starts with a quick overview of the essential concepts of data science and machine learning that are required to get started with deep learning. It introduces you to TensorFlow, the most widely used machine learning library for training deep learning models. You will then work on your first deep learning problem by training a deep feed-forward neural network for digit classification, and move on to tackle other real-world problems in computer vision, language processing, sentiment analysis, and more. Advanced deep learning models, such as generative adversarial networks and their applications, are also covered in this book.

By the end of this book, you will have a solid understanding of all the essential concepts in deep learning. With the help of the examples and code provided in this book, you will be equipped to train your own deep learning models with more confidence.

What you will learn
  • Understand the fundamentals of deep learning and how it is different from machine learning
  • Get familiar with TensorFlow, one of the most popular libraries for advanced machine learning
  • Increase the predictive power of your model using feature engineering
  • Understand the basics of deep learning by solving a digit classification problem on the MNIST dataset
  • Demonstrate face generation based on the CelebA database, a promising application of generative models
  • Apply deep learning to other domains like language modeling, sentiment analysis, and machine translation
Who This Book Is For

This book targets data scientists and machine learning developers who wish to get started with deep learning. If you know what deep learning is but are not quite sure of how to use it, this book will help you as well. An understanding of statistics and data science concepts is required. Some familiarity with Python programming will also be beneficial.

Table of Contents
  1. Data Science - A Birds-Eye View
  2. Data Modeling in Action - The Titanic Example
  3. Feature Engineering and Model Complexity - The Titanic Example Revisited
  4. Get Up and Running with TensorFlow
  5. TensorFlow in Action - Some Basic Examples
  6. Deep Feed-forward Neural Networks - Implementing Digit Classification
  7. Introduction to Convolutional Neural Networks
  8. Object Detection - CIFAR-10 Example
  9. Object Detection - Transfer Learning with CNNs
  10. Recurrent-Type Neural Networks - Language Modeling
  11. Representation Learning - Implementing Word Embeddings
  12. Neural Sentiment Analysis
  13. Autoencoders - Feature Extraction and Denoising
  14. Generative Adversarial Networks in Action - Generating New Images
  15. Face Generation and Handling Missing Labels
  16. Appendix - Implementing Fish Recognition

Generalization/true error

This is the second and more important type of error in data science. The whole purpose of building learning systems is to get a small generalization error on the test set; in other words, to get the model to work well on a set of observations/samples that haven't been used in the training phase. If you still consider the class scenario from the previous section, you can think of generalization error as the ability to solve exam problems that weren't necessarily similar to the problems you solved in the classroom to learn and get familiar with the subject. So, generalization performance is the model's ability to use the skills (parameters) that it learned in the training phase in order to correctly predict the outcome/output of unseen data.

In Figure 13, the light blue line represents the generalization error. You can see that as you increase the model complexity, the generalization error decreases, up to a point where the model starts to lose its generalization power and the generalization error begins to increase again. This part of the curve, where the generalization error starts to increase, is called overfitting.

The takeaway message from this section is to minimize the generalization error as much as you can.
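
To make this trade-off concrete, here is a minimal sketch (using NumPy and scikit-learn rather than this book's TensorFlow stack; the data and polynomial degrees are illustrative assumptions) that fits models of increasing complexity and compares the training error with the test error:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Noisy samples of a sine wave, split into training and test observations
rng = np.random.RandomState(0)
X = rng.uniform(0, 1, 60).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)
X_train, y_train, X_test, y_test = X[:40], y[:40], X[40:], y[40:]

# Increasing the polynomial degree increases the model complexity
for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_error = mean_squared_error(y_train, model.predict(X_train))
    test_error = mean_squared_error(y_test, model.predict(X_test))
    # The training error keeps shrinking, but the test (generalization)
    # error starts to grow again once the model overfits
    print(degree, train_error, test_error)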

Building the network

It's now time to build the core of our classification application, which is the computational graph of this CNN architecture. To maximize the benefits of this implementation, we aren't going to use the high-level TensorFlow layers API; instead, we are going to use the lower-level TensorFlow neural network (tf.nn) version of it.

So, let's start off by defining the model input placeholders, which will hold the images, the target classes, and the keep probability parameter of the dropout layer (dropout helps us to reduce the complexity of the architecture by dropping some connections, and hence reduces the chances of overfitting):


# Defining the model inputs
def images_input(img_shape):
    # Placeholder for batches of input images; None leaves the batch size open
    return tf.placeholder(tf.float32, (None,) + img_shape, name="input_images")

def target_input(num_classes):
    # Placeholder for the one-hot encoded target classes
    target_input = tf.placeholder(tf.int32, (None, num_classes), name="input_images_target")
    return target_input

# Define a function for the dropout layer keep probability
def keep_prob_input():
    # Scalar placeholder that controls the dropout keep probability
    return tf.placeholder(tf.float32, name="keep_prob")

Next up, we need to use the TensorFlow neural network implementation version to build up our convolution layers with max pooling:

# Applying a convolution operation to the input tensor, followed by max pooling
def conv2d_layer(input_tensor, conv_layer_num_outputs, conv_kernel_size, conv_layer_strides, pool_kernel_size, pool_layer_strides):
    input_depth = input_tensor.get_shape()[3].value
    weight_shape = conv_kernel_size + (input_depth, conv_layer_num_outputs)
    # Defining the layer weights and biases
    weights = tf.Variable(tf.random_normal(weight_shape))
    biases = tf.Variable(tf.random_normal((conv_layer_num_outputs,)))
    # Converting the strides to the 4D format expected by tf.nn.conv2d
    conv_strides = (1,) + conv_layer_strides + (1,)
    conv_layer = tf.nn.conv2d(input_tensor, weights, strides=conv_strides, padding='SAME')
    # Adding the bias variable to the convolution output
    conv_layer = tf.nn.bias_add(conv_layer, biases)
    # Converting the pooling kernel and strides to the 4D format expected by tf.nn.max_pool
    pool_kernel = (1,) + pool_kernel_size + (1,)
    pool_strides = (1,) + pool_layer_strides + (1,)
    pool_layer = tf.nn.max_pool(conv_layer, ksize=pool_kernel, strides=pool_strides, padding='SAME')
    return pool_layer
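
As a quick sanity check on the shapes this helper produces (a sketch; the placeholder below simply mimics the CIFAR-10 image dimensions used later in this chapter):

# A batch of 32x32 RGB images; 'SAME' padding with a stride of 1 keeps the
# 32x32 spatial size after the convolution, and max pooling with a 3x3 kernel
# and a stride of 3 then reduces it to ceil(32 / 3) = 11
sample_input = tf.placeholder(tf.float32, (None, 32, 32, 3))
sample_conv = conv2d_layer(sample_input, 32, (3, 3), (1, 1), (3, 3), (3, 3))
print(sample_conv.get_shape()) # (?, 11, 11, 32)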

As you have probably seen in the previous chapter, the output of the max pooling operation is a 4D tensor, which is not compatible with the required input format for fully connected layers. So, we need to implement a flatten layer to convert the output of the max pooling layer from a 4D to a 2D tensor:

# Flatten the output of the max pooling layer to be fed to the fully connected
# layer, which only accepts 2D input
def flatten_layer(input_tensor):
    return tf.contrib.layers.flatten(input_tensor)
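
For reference, the same flattening step can also be written with plain tf.reshape (a minimal equivalent sketch, assuming the usual 4D (batch, height, width, depth) input; flatten_layer_manual is just an illustrative name):

def flatten_layer_manual(input_tensor):
    # Multiplying all dimensions except the batch dimension gives the flat size
    shape = input_tensor.get_shape().as_list()
    flat_dim = shape[1] * shape[2] * shape[3]
    # -1 lets TensorFlow infer the batch dimension at run time
    return tf.reshape(input_tensor, [-1, flat_dim])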

Next up, we need to define a helper function that will enable us to add a fully connected layer to our architecture:

# Define the fully connected layer that will use the flattened output of the
# stacked convolution layers to do the actual classification
def fully_connected_layer(input_tensor, num_outputs):
    return tf.layers.dense(input_tensor, num_outputs)

Finally, before using these helper functions to create the entire architecture, we need to create another one that will take the output of the fully connected layer and produce 10 real-valued outputs, corresponding to the number of classes that we have in the dataset:

# Defining the output function
def output_layer(input_tensor, num_outputs):
    return tf.layers.dense(input_tensor, num_outputs)

So, let's go ahead and define the function that will put all these bits and pieces together and create a CNN with three convolution layers, each followed by a max pooling operation. We'll also have two fully connected layers, each followed by a dropout layer to reduce the model complexity and prevent overfitting. Finally, we'll have the output layer producing a vector of 10 real values, where each value represents the score for one of the classes being the correct one:

def build_convolution_net(image_data, keep_prob):
    # Applying 3 convolution layers followed by max pooling layers
    conv_layer_1 = conv2d_layer(image_data, 32, (3,3), (1,1), (3,3), (3,3))
    conv_layer_2 = conv2d_layer(conv_layer_1, 64, (3,3), (1,1), (3,3), (3,3))
    conv_layer_3 = conv2d_layer(conv_layer_2, 128, (3,3), (1,1), (3,3), (3,3))
    # Flattening the output from 4D to 2D to be fed to the fully connected layer
    flatten_output = flatten_layer(conv_layer_3)
    # Applying 2 fully connected layers with dropout
    fully_connected_layer_1 = fully_connected_layer(flatten_output, 64)
    fully_connected_layer_1 = tf.nn.dropout(fully_connected_layer_1, keep_prob)
    fully_connected_layer_2 = fully_connected_layer(fully_connected_layer_1, 32)
    fully_connected_layer_2 = tf.nn.dropout(fully_connected_layer_2, keep_prob)
    # Applying the output layer, where the output size is the number of categories
    # that we have in the CIFAR-10 dataset
    output_logits = output_layer(fully_connected_layer_2, 10)
    # Returning the output logits
    return output_logits

Let's call the preceding helper functions to build the network and define its loss and optimization criteria:

# Using the helper functions above to build the network
# First off, let's remove all the previous inputs, weights, and biases from the previous runs
tf.reset_default_graph()
# Defining the input placeholders of the convolution neural network
input_images = images_input((32, 32, 3))
input_images_target = target_input(10)
keep_prob = keep_prob_input()
# Building the model
logits_values = build_convolution_net(input_images, keep_prob)
# Naming the logits Tensor so that it can be loaded from disk after training
logits_values = tf.identity(logits_values, name='logits')
# Defining the model loss; the int32 targets are cast to float32, as required
# by softmax_cross_entropy_with_logits
model_cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits_values, labels=tf.cast(input_images_target, tf.float32)))
# Defining the model optimizer
model_optimizer = tf.train.AdamOptimizer().minimize(model_cost)
# Calculating and averaging the model accuracy
correct_prediction = tf.equal(tf.argmax(logits_values, 1), tf.argmax(input_images_target, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='model_accuracy')
# Running the unit test helper shipped with the book's code examples
tests.test_conv_net(build_convolution_net)

Now that we have built the computational architecture of this network, it's time to kick off the training process and see some results.
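
As a rough illustration of what that training process could look like, here is a minimal sketch; note that get_training_batches, valid_images, and valid_targets are hypothetical stand-ins for the CIFAR-10 input pipeline, and the hyperparameter values are assumptions, not the book's exact settings:

epochs = 10
batch_size = 128
keep_probability = 0.5

with tf.Session() as sess:
    # Initializing all the model variables defined above
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        # get_training_batches is a hypothetical generator yielding
        # (images, one-hot targets) batches from the CIFAR-10 training set
        for batch_images, batch_targets in get_training_batches(batch_size):
            sess.run(model_optimizer, feed_dict={input_images: batch_images,
                                                 input_images_target: batch_targets,
                                                 keep_prob: keep_probability})
        # Disabling dropout (keep_prob=1.0) while measuring validation accuracy;
        # valid_images and valid_targets are hypothetical held-out arrays
        validation_accuracy = sess.run(accuracy, feed_dict={input_images: valid_images,
                                                            input_images_target: valid_targets,
                                                            keep_prob: 1.0})
        print('Epoch {:>2}: validation accuracy = {:.4f}'.format(epoch + 1, validation_accuracy))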

Importing data with pandas

There are lots of Python libraries that you can use to read, transform, or write data. One of these libraries is pandas (http://pandas.pydata.org/). pandas is an open source library with great functionality and tools for data analysis, as well as very easy-to-use data structures.
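
For instance, here is a minimal sketch of reading a CSV file into a DataFrame and taking a first look at it (train.csv is just a placeholder file name):

import pandas as pd

# Reading a CSV file into a pandas DataFrame (train.csv is a placeholder)
dataset = pd.read_csv('train.csv')
# Inspecting the first few rows and some basic summary statistics
print(dataset.head())
print(dataset.describe())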
