TensorFlow Zero To All
  • Author: it-ebooks
  • Publisher: iBooker it-ebooks
  • Year: 2018


From: hunkim/DeepLearningZeroToAll

Author: hunkim

lab 01 basics: Getting Started With TensorFlow

https://www.tensorflow.org/get_started/get_started

Check TF version
import tensorflow as tf
tf.__version__  # '1.0.0'
Hello TensorFlow!
# Create a constant op
# This op is added as a node to the default graph
hello = tf.constant("Hello, TensorFlow!")

# start a TF session
sess = tf.Session()

# run the op and get result
print(sess.run(hello))
'''
b'Hello, TensorFlow!'
'''
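These labs are written for TensorFlow 1.x (note the '1.0.0' version check above); in TensorFlow 2.x the Session API is no longer at the top level. As a minimal sketch, assuming a TF 2.x installation, the same graph-and-session code can still be run through the tf.compat.v1 shim:

# Sketch only, not part of the original lab: running the 1.x-style code
# under TensorFlow 2.x via the compatibility module.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # restore deferred (graph) execution

hello = tf.constant("Hello, TensorFlow!")
sess = tf.Session()
print(sess.run(hello))  # b'Hello, TensorFlow!'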
Tensors
3 # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
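These shapes can be checked directly by wrapping each value in tf.constant and printing its shape. A short illustrative sketch (not part of the original lab; the variable names are ours):

import tensorflow as tf

scalar = tf.constant(3.0)                                   # shape ()
vector = tf.constant([1.0, 2.0, 3.0])                       # shape (3,)
matrix = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])    # shape (2, 3)
cube = tf.constant([[[1.0, 2.0, 3.0]], [[7.0, 8.0, 9.0]]])  # shape (2, 1, 3)

for t in (scalar, vector, matrix, cube):
    print(t.shape)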
Computational Graph
node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0)  # also tf.float32 implicitly
node3 = tf.add(node1, node2)

print("node1:", node1, "node2:", node2)
print("node3: ", node3)
'''
node1: Tensor("Const_1:0", shape=(), dtype=float32) node2: Tensor("Const_2:0", shape=(), dtype=float32)
node3:  Tensor("Add:0", shape=(), dtype=float32)
'''


sess = tf.Session()
print("sess.run(node1, node2): ", sess.run([node1, node2]))
print("sess.run(node3): ", sess.run(node3))
'''
sess.run(node1, node2):  [3.0, 4.0]
sess.run(node3):  7.0
'''

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

print(sess.run(adder_node, feed_dict={a: 3, b: 4.5}))
print(sess.run(adder_node, feed_dict={a: [1, 3], b: [2, 4]}))
'''
7.5
[ 3.  7.]
'''

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, feed_dict={a: 3, b: 4.5}))
'''
22.5
'''
lab 02.1 linear regression
# Lab 2 Linear Regression
import tensorflow as tf
tf.set_random_seed(777)  # for reproducibility

# X and Y data
x_train = [1, 2, 3]
y_train = [1, 2, 3]

# Try to find values for W and b to compute y_data = x_data * W + b
# We know that W should be 1 and b should be 0
# But let TensorFlow figure it out
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Our hypothesis XW+b
hypothesis = x_train * W + b

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - y_train))

# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

# Fit the line
for step in range(2001):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(cost), sess.run(W), sess.run(b))

# Learns best fit W:[ 1.], b:[ 0.]
'''
0 2.82329 [ 2.12867713] [-0.85235667]
20 0.190351 [ 1.53392804] [-1.05059612]
40 0.151357 [ 1.45725465] [-1.02391243]
...
1920 1.77484e-05 [ 1.00489295] [-0.01112291]
1940 1.61197e-05 [ 1.00466311] [-0.01060018]
1960 1.46397e-05 [ 1.004444] [-0.01010205]
1980 1.32962e-05 [ 1.00423515] [-0.00962736]
2000 1.20761e-05 [ 1.00403607] [-0.00917497]
'''
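GradientDescentOptimizer computes d(cost)/dW and d(cost)/db and steps each variable against its gradient. For intuition, here is a minimal hand-rolled sketch of that same update for the simpler bias-free hypothesis X * W (illustrative, not this lab as printed; the names x_data, gradient, and update are ours). The constant factor 2 from differentiating the square is folded into the learning rate:

# A sketch of the update step GradientDescentOptimizer performs, written out
# by hand for hypothesis = X * W. Assumes the same TF 1.x setup as above.
import tensorflow as tf

x_data = [1, 2, 3]
y_data = [1, 2, 3]

W = tf.Variable(tf.random_normal([1]), name='weight')
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

hypothesis = X * W
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# d(cost)/dW = 2 * mean((W*X - Y) * X); the 2 is absorbed into the rate
learning_rate = 0.1
gradient = tf.reduce_mean((W * X - Y) * X)
update = W.assign(W - learning_rate * gradient)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(21):
    sess.run(update, feed_dict={X: x_data, Y: y_data})
    print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run(W))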
lab 02.2 linear regression feed
# Lab 2 Linear Regression
import tensorflow as tf
tf.set_random_seed(777)  # for reproducibility

# Try to find values for W and b to compute y_data = W * x_data + b
# We know that W should be 1 and b should be 0
# But let's use TensorFlow to figure it out
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Now we can use X and Y in place of x_data and y_data
# placeholders for a tensor that will be always fed using feed_dict
# See http://stackoverflow.com/questions/36693740/
X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])

# Our hypothesis XW+b
hypothesis = X * W + b

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

# Fit the line
for step in range(2001):
    cost_val, W_val, b_val, _ = \
        sess.run([cost, W, b, train],
                 feed_dict={X: [1, 2, 3], Y: [1, 2, 3]})
    if step % 20 == 0:
        print(step, cost_val, W_val, b_val)

# Learns best fit W:[ 1.], b:[ 0]
'''
...
1980 1.32962e-05 [ 1.00423515] [-0.00962736]
2000 1.20761e-05 [ 1.00403607] [-0.00917497]
'''

# Testing our model
print(sess.run(hypothesis, feed_dict={X: [5]}))
print(sess.run(hypothesis, feed_dict={X: [2.5]}))
print(sess.run(hypothesis, feed_dict={X: [1.5, 3.5]}))
'''
[ 5.0110054]
[ 2.50091505]
[ 1.49687922  3.50495124]
'''

# Fit the line with new training data
for step in range(2001):
    cost_val, W_val, b_val, _ = \
        sess.run([cost, W, b, train],
                 feed_dict={X: [1, 2, 3, 4, 5],
                            Y: [2.1, 3.1, 4.1, 5.1, 6.1]})
    if step % 20 == 0:
        print(step, cost_val, W_val, b_val)

# Testing our model
print(sess.run(hypothesis, feed_dict={X: [5]}))
print(sess.run(hypothesis, feed_dict={X: [2.5]}))
print(sess.run(hypothesis, feed_dict={X: [1.5, 3.5]}))
'''
1960 3.32396e-07 [ 1.00037301] [ 1.09865296]
1980 2.90429e-07 [ 1.00034881] [ 1.09874094]
2000 2.5373e-07 [ 1.00032604] [ 1.09882331]
[ 6.10045338]
[ 3.59963846]
[ 2.59931231  4.59996414]
'''
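The shape=[None] placeholders are what let the second training loop feed five points where the first fed three, without rebuilding the graph. A small illustrative sketch of that behavior (the names X and doubled are ours, not the lab's):

# shape=[None] means "a 1-D tensor of any length may be fed here"
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None])
doubled = X * 2

sess = tf.Session()
print(sess.run(doubled, feed_dict={X: [1, 2, 3]}))        # [ 2.  4.  6.]
print(sess.run(doubled, feed_dict={X: [1, 2, 3, 4, 5]}))  # [  2.   4.   6.   8.  10.]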
lab 02.3 linear regression tensorflow.org
# From https://www.tensorflow.org/get_started/get_started
import tensorflow as tf

# Model parameters
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)

# Model input and output
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
linear_model = x * W + b

# cost/loss function
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares

# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

# training data (fit exactly by W = -1, b = 1)
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]

# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  # reset values to wrong
for i in range(1000):
    sess.run(train, {x: x_train, y: y_train})

# evaluate training accuracy; W converges to about -1 and b to about 1
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
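One difference from lab 02.1: this version uses tf.reduce_sum rather than tf.reduce_mean, so the reported loss grows with the number of training points. A tiny NumPy cross-check of the two reductions (illustrative, not from the lab):

import numpy as np

errors = np.array([0.1, -0.2, 0.3, -0.4])
print(np.sum(errors ** 2))   # 0.30  -> what reduce_sum of squares measures
print(np.mean(errors ** 2))  # 0.075 -> what reduce_mean measures (sum / 4)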
lab 03.1 minimizing cost show graph
# Lab 3 Minimizing Cost
import tensorflow as tf
import matplotlib.pyplot as plt
tf.set_random_seed(777)  # for reproducibility

X = [1, 2, 3]
Y = [1, 2, 3]

W = tf.placeholder(tf.float32)

# Our hypothesis for linear model X * W
hypothesis = X * W

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Launch the graph in a session.
sess = tf.Session()

# Variables for plotting cost function
W_history = []
cost_history = []

for i in range(-30, 50):
    curr_W = i * 0.1
    curr_cost = sess.run(cost, feed_dict={W: curr_W})
    W_history.append(curr_W)
    cost_history.append(curr_cost)

# Show the cost function
plt.plot(W_history, cost_history)
plt.show()
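With hypothesis = X * W and this data, the plotted curve is the parabola cost(W) = mean((W*x - y)^2), which bottoms out at W = 1. A quick NumPy cross-check (illustrative, not part of the lab):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

def cost(W):
    # mean squared error of the line y_hat = W * x on the data above
    return np.mean((W * x - y) ** 2)

print(cost(1.0))             # 0.0 at the minimum
print(cost(0.0), cost(2.0))  # equal values: the parabola is symmetric about W = 1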