Ariel University

Course name: Deep learning and natural language processing

Course number: 7061510-1

Lecturer: Dr. Amos Azaria

Edited by: Moshe Hanukoglu

Date: First Semester 2018-2019

Based on presentations by Dr. Amos Azaria

TensorFlow

Installation, see: link

tensorflow is build as graph, meaning you can create nodes as constant, variable and function. You can connect the nodes and do action on them.

In [1]:
import tensorflow as tf

^ Contents

Constants

Create two nodes as constant.

In [2]:
a = tf.constant(3)
b = tf.constant(4)

When we print a node the result is an information about the node.

In [3]:
a
Out[3]:
<tf.Tensor 'Const:0' shape=() dtype=int32>

C is a node that contains a multiplication operation between nodes a and b

In [4]:
c = a*b
c
Out[4]:
<tf.Tensor 'mul:0' shape=() dtype=int32>

When we want to display the value of multiplication we need to create a session and run it.

In [5]:
sess = tf.Session()
sess.run(a)
Out[5]:
3
In [6]:
sess.run(c)
Out[6]:
12

^ Contents

Variables

Variable is a node like a constant but the difference between them is that vriable, as his name, can change his value.

In [7]:
var1 = tf.Variable(3)
var2 = tf.Variable(4)
c2 = var1 * var2
In [8]:
print(var1)
print(c2)
<tf.Variable 'Variable:0' shape=() dtype=int32_ref>
Tensor("mul_1:0", shape=(), dtype=int32)
In [9]:
sess.run(tf.global_variables_initializer())
sess.run(var1)
Out[9]:
3
In [10]:
sess.run(c2)
Out[10]:
12

^ Contents

Simple Counting Program

Creating two nodes is one counter and the other is a constant containing the step value.

The third node contains the function we want to do.

In [11]:
import tensorflow as tf
x = tf.Variable(1)
step = tf.constant(2)
update = tf.assign(x, x+step)

In each time that we run sess.run(update) we increace the value of x by 2.

In [12]:
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(4):
    print(sess.run(update))
3
5
7
9

Display the value of x.

In [13]:
sess.run(x)
Out[13]:
9

To initialise the value of x to the start value, write

In [14]:
sess.run(tf.global_variables_initializer())
sess.run(x)
Out[14]:
1

^ Contents

Definitions

  • Data: set of x's and their true labels/values (y's).
  • Model Function: A function we use to get the y from the x. (We will use this function to predict the y's, when we don't have them, or during the test phase). As we will see this model may be very complex.
  • Loss Function: a function that determines the error which we intend to minimize.

^ Contents

Learning What Numbers Were Noisily Multiplied by

In [15]:
import tensorflow as tfx = tf.constant([7.01, 3.02, 4.99, 8.])
y_ = tf.constant([14.01, 6.01, 10., 16.04])
m = tf.Variable(0.) #note the dot

Definition of loss function

In [16]:
y = m*x
loss = tf.reduce_mean(tf.pow(y - y_, 2))

Select the GradientDescentOptimizer optimization method. Also define the size of the step $(\alpha)$ and delivery of the loss function.

In [17]:
update = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())

Run 1000 epochs

In [18]:
for _ in range(0,1000):
	sess.run(update)
print(sess.run(m))
2.0005188

^ Contents

PlaceHolders

If we do not want to load the data at the start of the run but during the run, we can use PlaceHolders node.
PlaceHolders gets a type of the data and the dimension of the data.

x = tf.placeholder(tf.float32, [None, 1])
y_ = tf.placeholder(tf.float32, [None, 1]).
.
.for _ in range(0,1000):
sess.run(update, feed_dict = {x:[[7.01], [3.02], [4.99], [8.]], y_:[[14.01], [6.01], [10.], [16.04]]})

^ Contents

tensorflow.train.Saver

We can use tensorflow.train.Saver to save the data and weights we get during the run.
This help us to resume training.

A good practice is to save checkpoints every X updates.

saver = tf.train.Saver()
saver.save(sess, filename)
saver.restore(sess, filename)

If you want to see more link

We will demonstrate the use of these commands in one of the previous examples.

In [19]:
import tensorflow as tf
import numpy as npfeatures = 2
x = tf.placeholder(tf.float32, [None, features])
y_ = tf.placeholder(tf.float32, [None, 1])
W = tf.Variable(tf.zeros([features,1]))
b = tf.Variable(tf.zeros([1]))
data_x = np.array([[2,4],[3,9],[4,16],[6,36],[7,49]])
data_y = np.array([[70],[110],[165],[390],[550]])y = tf.matmul(x,W) + b
loss = tf.reduce_mean(tf.pow(y - y_, 2))
update = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

Saving information

Create saver object and run the session.

In [20]:
saver = tf.train.Saver()with tf.Session() as sess:
    # initialize all of the variables in the session
    sess.run(tf.global_variables_initializer())    for i in range(1000):
        sess.run(update, feed_dict={x: data_x, y_: data_y})
        if 1 % 100 == 0:
            print('Iteration:', i, ' W:', sess.run(W), ' b:', sess.run(b), ' loss:', loss.eval(session=sess, feed_dict={x: data_x, y_: data_y}))    # Save the variable in the file
    saved_path = saver.save(sess, './saved_variable')
    print('model saved in {}'.format(saved_path))
model saved in ./saved_variable

Restoring information

In [21]:
with tf.Session() as sess:
    # Restore the saved vairable
    saver.restore(sess, './saved_variable')
    # Print the loaded variable
    a_out, b_out = sess.run([W, b])
    print('W = ', a_out)
    print('b = ', b_out)
INFO:tensorflow:Restoring parameters from ./saved_variable
W =  [[ 2.0441222]
 [10.644336 ]]
b =  [3.9046898]

^ Contents

TensorBoard

TensorBoard can help visualize the built graph along with presenting different charts.

To create a graph.
The function gets a path for store the graph and the session

tf.summary.FileWriter('./my_graph', sess.graph)

In order to open the browser of graph write at the terminal.

tensorboard --port=8008 --logdir ./my_graph/

In order to give a name to graph

m = tf.Variable(<data>, name = <graphName>)
In [22]:
import tensorflow as tfx = tf.placeholder(tf.float32, [None, 1])
y_ = tf.placeholder(tf.float32, [None, 1])
m = tf.Variable(0.)
y = m*x
loss = tf.reduce_mean(tf.pow(y - y_, 2))
update = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)
In [23]:
msum = tf.summary.scalar('msum', m)
losssum = tf.summary.scalar('losssum', loss)
merged = tf.summary.merge_all()
In [24]:
sess = tf.Session()
file_writer = tf.summary.FileWriter('./my_graph', sess.graph)
sess.run(tf.global_variables_initializer())
In [25]:
data_dict = {x:[[7.01], [3.02], [4.99], [8.]], y_:[[14.01], [6.01], [10.], [16.04]]}
for i in range(0,1000):
   [_,curr_sammary] = sess.run([update,merged], feed_dict = data_dict)
   file_writer.add_summary(curr_sammary, i)file_writer.close()
print(sess.run(m))
2.0005188