it-ebooks - Mastering TensorFlow 1.x Code Notes


  • Book: Mastering TensorFlow 1.x Code Notes
  • Author: it-ebooks
  • Publisher: iBooker it-ebooks
  • Year: 2018


1 TensorFlow 101
import tensorflow as tf
import numpy as np
Get an Interactive TensorFlow Session
tfs = tf.InteractiveSession()
Customary Hello TensorFlow !!!
hello = tf.constant("Hello TensorFlow !!")
print(tfs.run(hello))

Output:
b'Hello TensorFlow !!'
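Since tfs is an InteractiveSession, it installs itself as the default session, so the same tensor can also be evaluated with its own eval() method instead of an explicit tfs.run() call. A minimal supplementary sketch (not part of the original notes):

print(hello.eval())  # eval() uses the default (interactive) session

This prints the same b'Hello TensorFlow !!' result and is the main convenience of InteractiveSession over a plain tf.Session, which would require passing the session explicitly.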
Constants
c1 = tf.constant(5, name='x')
c2 = tf.constant(6.0, name='y')
c3 = tf.constant(7.0, tf.float32, name='z')
print('c1 (x): ', c1)
print('c2 (y): ', c2)
print('c3 (z): ', c3)

Output:
c1 (x):  Tensor("x:0", shape=(), dtype=int32)
c2 (y):  Tensor("y:0", shape=(), dtype=float32)
c3 (z):  Tensor("z:0", shape=(), dtype=float32)

print('run([c1,c2,c3]) : ', tfs.run([c1, c2, c3]))

Output:
run([c1,c2,c3]) :  [5, 6.0, 7.0]
Operations
op1 = tf.add(c2, c3)
op2 = tf.multiply(c2, c3)
print('op1 : ', op1)
print('op2 : ', op2)

Output:
op1 :  Tensor("Add:0", shape=(), dtype=float32)
op2 :  Tensor("Mul:0", shape=(), dtype=float32)

print('run(op1) : ', tfs.run(op1))
print('run(op2) : ', tfs.run(op2))

Output:
run(op1) :  13.0
run(op2) :  42.0
Placeholders
p1 = tf.placeholder(tf.float32)
p2 = tf.placeholder(tf.float32)
print('p1 : ', p1)
print('p2 : ', p2)

Output:
p1 :  Tensor("Placeholder:0", dtype=float32)
p2 :  Tensor("Placeholder_1:0", dtype=float32)

op4 = p1 * p2  # shorthand for tf.multiply(p1, p2)
print('run(op4,{p1:2.0, p2:3.0}) : ', tfs.run(op4, {p1: 2.0, p2: 3.0}))
print('run(op4,feed_dict = {p1:3.0, p2:4.0}) : ', tfs.run(op4, feed_dict={p1: 3.0, p2: 4.0}))
print('run(op4,feed_dict = {p1:[2.0,3.0,4.0], p2:[3.0,4.0,5.0]}) : ', tfs.run(op4, feed_dict={p1: [2.0, 3.0, 4.0], p2: [3.0, 4.0, 5.0]}))

Output:
run(op4,{p1:2.0, p2:3.0}) :  6.0
run(op4,feed_dict = {p1:3.0, p2:4.0}) :  12.0
run(op4,feed_dict = {p1:[2.0,3.0,4.0], p2:[3.0,4.0,5.0]}) :  [  6.  12.  20.]
Creating Tensors from Existing Objects
0-Dimensional Tensors (Scalars)
tf_t = tf.convert_to_tensor(5.0, dtype=tf.float64)
print('tf_t : ', tf_t)
print('run(tf_t) : \n', tfs.run(tf_t))

Output:
tf_t :  Tensor("Const_1:0", shape=(), dtype=float64)
run(tf_t) : 
 5.0
1-Dimensional Tensors (Vectors)
a1dim = np.array([1, 2, 3, 4, 5.99])
print("a1dim Shape : ", a1dim.shape)
tf_t = tf.convert_to_tensor(a1dim, dtype=tf.float64)
print('tf_t : ', tf_t)
print('tf_t[0] : ', tf_t[0])
print('tf_t[0] : ', tf_t[0])
print('run(tf_t) : \n', tfs.run(tf_t))

Output:
a1dim Shape :  (5,)
tf_t :  Tensor("Const_2:0", shape=(5,), dtype=float64)
tf_t[0] :  Tensor("strided_slice:0", shape=(), dtype=float64)
tf_t[0] :  Tensor("strided_slice_1:0", shape=(), dtype=float64)
run(tf_t) : 
 [ 1.    2.    3.    4.    5.99]
2-Dimensional Tensors (Matrices)
a2dim = np.array([(1, 2, 3, 4, 5.99),
                  (2, 3, 4, 5, 6.99),
                  (3, 4, 5, 6, 7.99)])
print("a2dim Shape : ", a2dim.shape)
tf_t = tf.convert_to_tensor(a2dim, dtype=tf.float64)
print('tf_t : ', tf_t)
print('tf_t[0][0] : ', tf_t[0][0])
print('tf_t[1][2] : ', tf_t[1][2])
print('run(tf_t) : \n', tfs.run(tf_t))

Output:
a2dim Shape :  (3, 5)
tf_t :  Tensor("Const_3:0", shape=(3, 5), dtype=float64)
tf_t[0][0] :  Tensor("strided_slice_3:0", shape=(), dtype=float64)
tf_t[1][2] :  Tensor("strided_slice_5:0", shape=(), dtype=float64)
run(tf_t) : 
 [[ 1.    2.    3.    4.    5.99]
 [ 2.    3.    4.    5.    6.99]
 [ 3.    4.    5.    6.    7.99]]
3-Dimensional Tensors
a3dim = np.array([[[1, 2],
                   [3, 4]],
                  [[5, 6],
                   [7, 8]]])
print("a3dim Shape : ", a3dim.shape)
tf_t = tf.convert_to_tensor(a3dim, dtype=tf.float64)
print('tf_t : ', tf_t)
print('tf_t[0][0][0] : ', tf_t[0][0][0])
print('tf_t[1][1][1] : ', tf_t[1][1][1])
print('run(tf_t) : \n', tfs.run(tf_t))

Output:
a3dim Shape :  (2, 2, 2)
tf_t :  Tensor("Const_4:0", shape=(2, 2, 2), dtype=float64)
tf_t[0][0][0] :  Tensor("strided_slice_8:0", shape=(), dtype=float64)
tf_t[1][1][1] :  Tensor("strided_slice_11:0", shape=(), dtype=float64)
run(tf_t) : 
 [[[ 1.  2.]
  [ 3.  4.]]

 [[ 5.  6.]
  [ 7.  8.]]]
Variables
# Assume Linear Model y = w * x + b
# Define model parameters
w = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
# Define model input and output
x = tf.placeholder(tf.float32)
y = w * x + b
print("w:", w)
print("x:", x)
print("b:", b)
print("y:", y)

Output:
w: <tf.Variable 'Variable:0' shape=(1,) dtype=float32_ref>
x: Tensor("Placeholder_2:0", dtype=float32)
b: <tf.Variable 'Variable_1:0' shape=(1,) dtype=float32_ref>
y: Tensor("add:0", dtype=float32)

# initialize and print the variable y
tf.global_variables_initializer().run()
print('run(y,{x:[1,2,3,4]}) : ', tfs.run(y, {x: [1, 2, 3, 4]}))

Output:
run(y,{x:[1,2,3,4]}) :  [ 0.          0.30000001  0.60000002  0.90000004]
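Unlike constants, variables hold state that persists across run() calls and can be updated in place. The following is a supplementary sketch, not part of the original notes; the counter variable and the use of tf.assign() are illustrative:

counter = tf.Variable(0, name='counter')      # mutable state
increment = tf.assign(counter, counter + 1)   # op that updates the variable
tfs.run(tf.global_variables_initializer())
print(tfs.run(increment))  # expected: 1
print(tfs.run(increment))  # expected: 2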
Creating Tensors from Library Functions
a = tf.zeros((100,))
print(tfs.run(a))

Output:
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
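tf.zeros is one of several factory functions for creating tensors without an existing Python object. A brief sketch of a few related standard TF 1.x functions, with shapes and values chosen only for illustration (not from the original notes):

b1 = tf.ones((2, 3))                                   # 2x3 tensor of ones
f1 = tf.fill((2, 3), 7.0)                              # 2x3 tensor filled with 7.0
l1 = tf.linspace(1.0, 5.0, 5)                          # 5 evenly spaced values from 1.0 to 5.0
r1 = tf.random_normal((2, 2), mean=0.0, stddev=1.0)    # random values from a normal distribution
print(tfs.run([b1, f1, l1]))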
Close the interactive session
tfs.close()
Computation Graphs
Building and Running a Simple Computation Graph
# Assume Linear Model y = w * x + b
# Define model parameters
w = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
# Define model input and output
x = tf.placeholder(tf.float32)
y = w * x + b
output = 0

with tf.Session() as tfs:
    # initialize and print the variable y
    tf.global_variables_initializer().run()
    output = tfs.run(y, {x: [1, 2, 3, 4]})
print('output : ', output)

Output:
output :  [ 0.          0.30000001  0.60000002  0.90000004]
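The code above builds its ops on the implicit default graph. The same computation can also be attached to an explicitly created graph object and run in a session bound to it; a minimal sketch under that assumption (not from the original notes):

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, name='x')
    y = x * 3.0
with tf.Session(graph=g) as tfs:
    print(tfs.run(y, {x: [1.0, 2.0]}))  # expected: [ 3.  6.]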
Graph on Compute Devices
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

Output:
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 12445270569278384213
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 25628672
locality {
  bus_id: 1
}
incarnation: 5284662930836416221
physical_device_desc: "device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0"
]

tf.reset_default_graph()
# Define model parameters
w = tf.get_variable(name='w', initializer=[.3], dtype=tf.float32)
b = tf.get_variable(name='b', initializer=[-.3], dtype=tf.float32)
# Define model input and output
x = tf.placeholder(name='x', dtype=tf.float32)
y = w * x + b
config = tf.ConfigProto()
config.log_device_placement = True
with tf.Session(config=config) as tfs:
    # initialize and print the variable y
    tfs.run(tf.global_variables_initializer())
    print('output', tfs.run(y, {x: [1, 2, 3, 4]}))

output from GPU: [ 0.          0.30000001  0.60000002  0.90000004]

tf.reset_default_graph()
with tf.device('/device:CPU:0'):
    # Define model parameters
    w = tf.get_variable(name='w', initializer=[.3], dtype=tf.float32)
    b = tf.get_variable(name='b', initializer=[-.3], dtype=tf.float32)
    # Define model input and output
    x = tf.placeholder(name='x', dtype=tf.float32)
    y = w * x + b
config = tf.ConfigProto()
config.log_device_placement = True
with tf.Session(config=config) as tfs:
    # initialize and print the variable y
    tfs.run(tf.global_variables_initializer())
    print('output', tfs.run(y, {x: [1, 2, 3, 4]}))

tf.reset_default_graph()
with tf.device('/device:CPU:0'):
    # Define model parameters
    w = tf.get_variable(name='w', initializer=[.3], dtype=tf.float32)
    b = tf.get_variable(name='b', initializer=[-.3], dtype=tf.float32)
    # Define model input and output
    x = tf.placeholder(name='x', dtype=tf.float32)
with tf.device('/device:GPU:0'):
    y = w * x + b
config = tf.ConfigProto()
config.log_device_placement = True
with tf.Session(config=config) as tfs:
    # initialize and print the variable y
    tfs.run(tf.global_variables_initializer())
    print('output', tfs.run(y, {x: [1, 2, 3, 4]}))
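On a machine without the requested device, pinning ops as above raises an error unless soft placement is enabled. A supplementary sketch (not from the original notes) using the standard ConfigProto flag allow_soft_placement; the variable name v_soft is illustrative:

tf.reset_default_graph()
with tf.device('/device:GPU:0'):
    v_soft = tf.get_variable(name='v_soft', initializer=[1.0], dtype=tf.float32)
config = tf.ConfigProto()
config.allow_soft_placement = True    # fall back to an available device instead of failing
config.log_device_placement = True
with tf.Session(config=config) as tfs:
    tfs.run(tf.global_variables_initializer())
    print(tfs.run(v_soft))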