
it-ebooks - Tensorflow Tutorials (pkmital)




  • Book:
    Tensorflow Tutorials (pkmital)
  • Author:
    it-ebooks
  • Publisher:
    iBooker it-ebooks
  • Genre:
    Computer
  • Year:
    2018
  • Rating:
    4 / 5

Tensorflow Tutorials (pkmital): summary, description and annotation

Here you can read the annotation, description, summary, or preface, depending on what the author of "Tensorflow Tutorials (pkmital)" provided. If you haven't found the information you need about the book, ask in the comments and we will try to find it.


Tensorflow Tutorials (pkmital): read the complete book online for free

Below is the text of the book, divided into pages. Your last-read page is saved automatically, so you can bookmark your place and return to it at any time without searching for where you left off.

basics
"""Summary of tensorflow basics.Parag K. Mital, Jan 2016."""# %% Import tensorflow and pyplot import tensorflow as tf import matplotlib.pyplot as plt# %% tf.Graph represents a collection of tf.Operations # You can create operations by writing out equations. # By default, there is a graph: tf.get_default_graph() # and any new operations are added to this graph. # The result of a tf.Operation is a tf.Tensor, which holds # the values.# %% First a tf.Tensor n_values = x = tf.linspace( -3.0 , 3.0 , n_values)# %% Construct a tf.Session to execute the graph. sess = tf.Session()result = sess.run(x)# %% Alternatively pass a session to the eval fn: x.eval(session=sess) # x.eval() does not work, as it requires a session!# %% We can setup an interactive session if we don't # want to keep passing the session around: sess.close()sess = tf.InteractiveSession()# %% Now this will work! x.eval()# %% Now a tf.Operation # We'll use our values from [-3, 3] to create a Gaussian Distribution sigma = 1.0 mean = 0.0 z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0 ) / ( 2.0 * tf.pow(sigma, 2.0 )))) * ( 1.0 / (sigma * tf.sqrt( 2.0 * 3.1415 ))))# %% By default, new operations are added to the default Graph assert z.graph is tf.get_default_graph()# %% Execute the graph and plot the result plt.plot(x.eval(), z.eval())plt.show()# %% We can find out the shape of a tensor like so: print(z.get_shape())# %% Or in a more friendly format print(z.get_shape().as_list())# %% Sometimes we may not know the shape of a tensor # until it is computed in the graph. In that case # we should use the tf.shape fn, which will return a # Tensor which can be eval'ed, rather than a discrete # value of tf.Dimension print(tf.shape(z).eval())# %% We can combine tensors like so: print(tf.pack([tf.shape(z), tf.shape(z), [], []]).eval())# %% Let's multiply the two to get a 2d gaussian z_2d = tf.matmul(tf.reshape(z, [n_values, ]), tf.reshape(z, [, n_values]))# %% Execute the graph and store the value that `out` represents in `result`. 
plt.imshow(z_2d.eval())plt.show()# %% For fun let's create a gabor patch: x = tf.reshape(tf.sin(tf.linspace( -3.0 , 3.0 , n_values)), [n_values, ])y = tf.reshape(tf.ones_like(x), [, n_values])z = tf.mul(tf.matmul(x, y), z_2d)plt.imshow(z.eval())plt.show()# %% We can also list all the operations of a graph: ops = tf.get_default_graph().get_operations()print([op.name for op in ops])# %% Lets try creating a generic function for computing the same thing: def gabor (n_values=, sigma= 1.0 , mean= 0.0 ) : x = tf.linspace( -3.0 , 3.0 , n_values) z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0 ) / ( 2.0 * tf.pow(sigma, 2.0 )))) * ( 1.0 / (sigma * tf.sqrt( 2.0 * 3.1415 )))) gauss_kernel = tf.matmul( tf.reshape(z, [n_values, ]), tf.reshape(z, [, n_values])) x = tf.reshape(tf.sin(tf.linspace( -3.0 , 3.0 , n_values)), [n_values, ]) y = tf.reshape(tf.ones_like(x), [, n_values]) gabor_kernel = tf.mul(tf.matmul(x, y), gauss_kernel) return gabor_kernel# %% Confirm this does something: plt.imshow(gabor().eval())plt.show()# %% And another function which can convolve def convolve (img, W) : # The W matrix is only 2D # But conv2d will need a tensor which is 4d: # height x width x n_input x n_output if len(W.get_shape()) == : dims = W.get_shape().as_list() + [, ] W = tf.reshape(W, dims) if len(img.get_shape()) == : # num x height x width x channels dims = [] + img.get_shape().as_list() + [] img = tf.reshape(img, dims) elif len(img.get_shape()) == : dims = [] + img.get_shape().as_list() img = tf.reshape(img, dims) # if the image is 3 channels, then our convolution # kernel needs to be repeated for each input channel W = tf.concat(, [W, W, W]) # Stride is how many values to skip for the dimensions of # num, height, width, channels convolved = tf.nn.conv2d(img, W, strides=[, , , ], padding= 'SAME' ) return convolved# %% Load up an image: from skimage import dataimg = data.astronaut()plt.imshow(img)plt.show()print(img.shape)# %% Now create a placeholder for our graph which can store any input: x = tf.placeholder(tf.float32, shape=img.shape)# %% And a graph which can convolve our image with a gabor out = convolve(x, gabor())# %% Now send the image into the graph and compute the result result = tf.squeeze(out).eval(feed_dict={x: img})plt.imshow(result)plt.show()
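As a quick aside, the Gaussian the graph computes is just the familiar density formula z = exp(-(x - mean)^2 / (2*sigma^2)) / (sigma*sqrt(2*pi)), and the 2-D kernel is the outer product of that 1-D curve with itself. The following is a minimal NumPy cross-check, not part of the book, with names (x_np, z_np, z_2d_np) chosen here for illustration; note the tutorial approximates pi as 3.1415, so its values agree with these only to about four decimal places.

# Eager NumPy version of the Gaussian kernel built in the graph above.
import numpy as np

n_values, sigma, mean = 32, 1.0, 0.0
x_np = np.linspace(-3.0, 3.0, n_values)
z_np = (np.exp(-(x_np - mean) ** 2 / (2.0 * sigma ** 2)) /
        (sigma * np.sqrt(2.0 * np.pi)))

# Outer product of the 1-D Gaussian with itself gives the 2-D kernel,
# matching tf.matmul(tf.reshape(z, [n, 1]), tf.reshape(z, [1, n])).
z_2d_np = np.outer(z_np, z_np)
print(z_2d_np.shape)  # (32, 32)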
linear_regression
"""Simple tutorial for using TensorFlow to compute a linear regression.Parag K. Mital, Jan. 2016"""'Simple tutorial for using TensorFlow to compute a linear regression.\n\nParag K. Mital, Jan. 2016'# %% imports %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt# %% Let's create some toy data plt.ion()n_observations = fig, ax = plt.subplots(, )xs = np.linspace( -3 , , n_observations)ys = np.sin(xs) + np.random.uniform( -0.5 , 0.5 , n_observations)ax.scatter(xs, ys)fig.show()plt.draw()/home/heythisischo/anaconda2/lib/python2.7/site-packages/matplotlib/figure.py:397: UserWarning: matplotlib is currently using a non-GUI backend, so cannot show the figure "matplotlib is currently using a non-GUI backend, "

tfplaceholders for the input and output of the network Placeholders are - photo 1

# %% tf.placeholders for the input and output of the network. Placeholders are # variables which we need to fill in when we are ready to compute the graph. X = tf.placeholder(tf.float32)Y = tf.placeholder(tf.float32)# %% We will try to optimize min_(W,b) ||(X*w + b) - y||^2 # The `Variable()` constructor requires an initial value for the variable, # which can be a `Tensor` of any type and shape. The initial value defines the # type and shape of the variable. After construction, the type and shape of # the variable are fixed. The value can be changed using one of the assign # methods. W = tf.Variable(tf.random_normal([]), name= 'weight' )b = tf.Variable(tf.random_normal([]), name= 'bias' )Y_pred = tf.add(tf.mul(X, W), b)# %% Loss function will measure the distance between our observations # and predictions and average over them. cost = tf.reduce_sum(tf.pow(Y_pred - Y, )) / (n_observations - )# %% if we wanted to add regularization, we could add other terms to the cost, # e.g. ridge regression has a parameter controlling the amount of shrinkage # over the norm of activations. the larger the shrinkage, the more robust # to collinearity. # cost = tf.add(cost, tf.mul(1e-6, tf.global_norm([W])))# %% Use gradient descent to optimize W,b # Performs a single step in the negative gradient learning_rate = 0.01 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)# %% We create a session to use the graph n_epochs = 1000 with tf.Session() as sess: # Here we tell tensorflow that we want to initialize all # the variables in the graph so we can use them sess.run(tf.initialize_all_variables()) # Fit all training data prev_training_cost = 0.0 for epoch_i in range(n_epochs): for (x, y) in zip(xs, ys): sess.run(optimizer, feed_dict={X: x, Y: y}) training_cost = sess.run( cost, feed_dict={X: xs, Y: ys}) print(training_cost) if epoch_i % == : ax.plot(xs, Y_pred.eval( feed_dict={X: xs}, session=sess), 'k' , alpha=epoch_i / n_epochs) fig.show() plt.draw() # Allow the training to quit if we've reached a minimum if np.abs(prev_training_cost - training_cost) < 0.000001 : break prev_training_cost = training_costfig.show()1.439891.306861.189161.085030.9928790.9113250.8391410.775240.7186630.6685640.6241920.5848850.5500580.5191940.4918350.4675780.4460630.4269770.4100390.3950010.3816470.3697830.3592380.3498610.3415190.3340940.3274810.3215870.3163320.3116430.3074550.3037130.3003660.297370.2946860.2922780.2901170.2881750.2864270.2848530.2834330.2821510.2809910.2799420.278990.2781250.2773390.2766230.275970.2753730.2748260.2743250.2738650.2734420.2730520.2726930.272360.2720520.2717670.2715010.2712540.2710240.270810.2706090.2704210.2702450.270080.2699250.2697790.2696420.2695120.269390.2692740.2691650.2690620.2689640.2688710.2687830.2686990.268620.2685440.2684720.2684040.2683390.2682770.2682180.2681610.2681070.2680560.2680070.267960.2679160.2678730.2678320.2677940.2677560.2677210.2676870.2676540.2676230.2675940.2675650.2675380.2675120.2674870.2674640.2674410.2674190.2673980.2673780.2673590.2673410.2673240.2673070.2672910.2672760.2672610.2672470.2672340.2672210.2672090.2671970.2671860.2671760.2671650.2671560.2671460.2671370.2671290.2671210.2671130.2671050.2670980.2670920.2670850.2670790.2670730.2670670.2670620.2670570.2670520.2670470.2670430.2670390.2670340.2670310.2670270.2670230.267020.2670170.2670140.2670110.2670080.2670060.2670030.2670010.2669980.2669960.2669940.2669920.266990.2669890.2669870.2669850.2669840.2669820.2669810.266980.2669790.2669780.2669760.2669750.266974
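Because the model is plain linear least squares, the value gradient descent converges to can be sanity-checked against the closed-form solution of the normal equations. The sketch below is not from the book: the seed, the regenerated xs/ys, and the names W_hat/b_hat are illustrative assumptions, and since the noise draw differs from the book's run, the final cost will be close to, but not exactly, the 0.267 printed above.

# Hedged sanity check: solve min_{W,b} ||W*xs + b - ys||^2 directly.
import numpy as np

np.random.seed(0)  # assumption: any seed works; fixed for repeatability
n_observations = 100
xs = np.linspace(-3, 3, n_observations)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)

# Design matrix [x, 1] turns the intercept into an ordinary coefficient.
A = np.stack([xs, np.ones_like(xs)], axis=1)
(W_hat, b_hat), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(W_hat, b_hat)

# Mean cost with the same (n - 1) normalization as the tutorial's cost op;
# a straight line can only roughly fit a sine, hence the plateau near 0.267.
residual = A @ np.array([W_hat, b_hat]) - ys
print(np.sum(residual ** 2) / (n_observations - 1))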

Similar books «Tensorflow Tutorials (pkmital)»

Browse books similar to Tensorflow Tutorials (pkmital). We have selected literature similar in title and subject in the hope of giving readers more options for finding new and interesting works they have not yet read.


Reviews about «Tensorflow Tutorials (pkmital)»

Discussion and reviews of Tensorflow Tutorials (pkmital), along with readers' own opinions. Leave a comment and share what you think about the work and its content. Be specific about what you liked or disliked, and why.