A Comprehensive Beginner's Guide to TensorFlow

An article by Ashish Mokalkar (Software Engineer, IoT department, Mobiliya)

Hey Deep Learners/Dear Readers,

Recently, Deep Learning has become the most popular kid in the class, the one everyone wants to impress and be associated with. And why not? Deep Learning has delivered seemingly impossible breakthroughs over the past three years. Be it tech giants like Google, Amazon and Microsoft, or start-ups like Niki.ai and Snapshopr, everyone is trying to harness the power of deep learning to prove their dominance in the AI industry. Google Trends confirms this, showing a steady rise in searches for "deep learning" over the past few years, with an even more noticeable uptick since late 2014.


As a data science enthusiast, seeing such breakthroughs in my field inspired me to jump into this flourishing area. I came across TensorFlow and went through its official documentation, but it felt abstruse and off-putting for a beginner like me. Moreover, I couldn't find tutorials that explained TensorFlow's core functionality; everyone seemed to demonstrate their knowledge of TensorFlow by showing off MNIST solutions, image classifiers, music generation, object detection and so on. Consequently, I struggled to apply deep learning concepts in my own projects. So, after working with TensorFlow professionally for the past 16 months, I have set out to write an easy-to-comprehend guide to TensorFlow for data scientists who want to add deep learning to their toolkit.

So, what is TensorFlow?

During my early learning period, I asked this question to many folks and never received the same answer twice. At its core, TensorFlow is simply a computational library from Google that first builds a graph of the computations and actually runs them only when instructed to.

In other words, it is a form of lazy computing. At heart, it is neither a deep learning framework nor a computer vision library; all of those capabilities are built on top of its computational representations.


Cool, but why TensorFlow?

Python already has NumPy as a computation library, so why did Googlers come up with TensorFlow, which performs the same task? Let's understand it using a short Python example.

x = 5
y = 10
z = x + y
w = z - 5
print(w)

The above Python script basically says: create two variables, x with value 5 and y with value 10; add them and assign the result to a new variable z; subtract 5 from z, store the result in a new variable w; and print w. In plain Python (without TensorFlow), every operation is performed as soon as its line is executed. In TensorFlow, the computations for z and w are never actually performed at that point. Instead, each line effectively records an equation that says, for example, "when w is needed, take the value of z (as it is then) and subtract 5 from it". TensorFlow just builds the computational graph, with the variables and operations as nodes.
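To make this concrete, here is a minimal sketch of how the same computation could look when expressed as a TensorFlow (1.x) graph; the node names are purely illustrative:

import tensorflow as tf

x = tf.constant(5, name="x")
y = tf.constant(10, name="y")
z = tf.add(x, y, name="z")        # no addition happens here, only a graph node is created
w = tf.subtract(z, 5, name="w")   # likewise, just another node in the graph

with tf.Session() as sess:        # the graph is executed only inside a session
    print(sess.run(w))            # prints 10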

The main advantages of using such a mechanism are as follows:

1)    Reduced redundancy in some computations

2)    Faster computation of complex variables

How do I get TensorFlow on my PC?

For Ubuntu:

Step 1: Install or upgrade the pip package manager

$ sudo apt-get install python-pip python-dev

Step 2: Install TensorFlow

$ pip install tensorflow     # Python 2.7; CPU support (no GPU support)

$ pip3 install tensorflow  # Python 3.n; CPU support (no GPU support)

$ pip install tensorflow-gpu  # Python 2.7;  GPU support

$ pip3 install tensorflow-gpu # Python 3.n; GPU support

Step 3: Upgrade TensorFlow

$ sudo pip install --upgrade $TF_BINARY_URL   # Python 2.7
$ sudo pip3 install --upgrade $TF_BINARY_URL   # Python 3.N

Here, TF_BINARY_URL should be set to the URL of the TensorFlow binary (wheel) for your platform, as listed on the official installation page.

For Windows:

Step 1: TensorFlow only supports version 3.5.x of Python on Windows. If it's not already installed, install it now; it comes with the pip3 package manager.

https://www.python.org/downloads/release/python-352/

Step 2: Install TensorFlow (since only Python 3.5.x is supported on Windows, use pip3)

$ pip3 install tensorflow      # CPU support only (no GPU support)

$ pip3 install tensorflow-gpu  # GPU support

Step 3: Upgrade TensorFlow

$ pip3 install --upgrade tensorflow       # or tensorflow-gpu, if you installed the GPU build
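Whichever platform you are on, you can quickly verify the installation by importing TensorFlow from a Python shell; the exact version number printed will depend on what pip installed:

$ python3
>>> import tensorflow as tf
>>> print(tf.__version__)

If the import completes without errors, TensorFlow is ready to use.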

Know about Tensors

Just as it is important to know how to swim before diving into a river, it is important to understand tensors before doing magic with TensorFlow programming. You can think of a tensor as an n-dimensional array or list. Only tensors may be passed between nodes in the computation graph; these tensors flow from one node to another, hence the name "TensorFlow". For example,

[[1, 2, 3], [4, 5, 6], [7, 8, 9]] is a tensor of rank 2.

[[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]] is a tensor of rank 3.
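As a quick illustration, you can wrap these Python lists in tf.constant and inspect their shapes; the variable names below are just for illustration:

import tensorflow as tf

rank2 = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
rank3 = tf.constant([[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]])

print(rank2.shape)   # (3, 3)    -> two dimensions, i.e. rank 2
print(rank3.shape)   # (3, 3, 1) -> three dimensions, i.e. rank 3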

Three core data-managing structures in TensorFlow

Every TensorFlow program will contain these data-managing structures:

  • Constants: Constants are data structures that are initialized when they are declared, and their value never changes. A constant's value is stored in the graph itself.

import tensorflow as tf

x = tf.constant(10, name="x")

  • Variables: Variables are data structures whose value can change. They require an initial value to be provided at declaration, and they must be explicitly initialized before the graph is run.

import tensorflow as tf

y = tf.Variable(x + 5, name="y")

  • Placeholders: A placeholder is simply a variable that we will assign data to at a later point. It allows us to create our operations and build our computation graph without needing the data up front; the data is supplied at run time.

import tensorflow as tf

x = tf.placeholder("float", None)

Passing None as the shape allows x to take on any length.
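Here is a minimal sketch of how such a placeholder is fed with actual data when the graph is run; the operation y and the values in feed_dict are just illustrative:

import tensorflow as tf

x = tf.placeholder("float", None)   # placeholder with no fixed shape
y = x * 2                           # an operation built on the placeholder

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: [1, 2, 3]})   # supply the data at run time
    print(result)                                    # [2. 4. 6.]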

Computational Graphs:

A computational graph defines the computation. It does not compute anything, nor does it hold any values; it only defines the operations that you specified in your code. Below is a sample computation graph for the addition of a constant and a placeholder.

b = tf.constant(1.0, name="b")
x = tf.placeholder("float", None)
y = tf.add(x, b)

Sessions:

The actual execution of TensorFlow operations is performed using sessions. A session allows you to execute a graph, or part of a graph; it allocates the resources (on one or more machines) needed for that and holds the actual values of intermediate results and variables. For example, we can run the graph defined above:

sess = tf.Session()                                    # initiate a session
result = sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]})   # run the graph, feeding values for the placeholder x
print(result)                                          # [2. 3. 4.]
sess.close()                                           # release the session's resources
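A session can also be used as a context manager, in which case it is closed automatically when the block exits; the sample program later in this article uses this form:

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]})
# the session is closed automatically here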

The flow of TensorFlow programs

The typical flow of every TensorFlow program you will write

Sample TensorFlow program

import tensorflow as tf

X = tf.constant(10, name="X")              # constant with value 10
Y = tf.Variable(5, name="Y")               # variable with initial value 5
a = tf.add(X, Y)                           # a = X + Y
b = tf.multiply(Y, a)                      # b = Y * a
f = tf.div(b, 15)                          # f = b / 15

model = tf.global_variables_initializer()  # op that initializes all variables
                                           # (successor of the deprecated tf.initialize_all_variables())

with tf.Session() as sess:
    writer = tf.summary.FileWriter("output1", sess.graph)   # export the graph for TensorBoard
    sess.run(model)                                          # initialize the variables
    print(sess.run(f))                                       # ((10 + 5) * 5) / 15 = 5
    writer.close()
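Because the FileWriter exports the graph to the "output1" directory, you can inspect it visually with TensorBoard, which is installed alongside TensorFlow. Run the command below and open the URL it prints (typically http://localhost:6006):

$ tensorboard --logdir=output1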

Computational Graph Visualization

End Notes

Not a day goes by when I don't hear of an Artificial Intelligence program going wrong somewhere, yet the AI community also comes up with critical breakthroughs every day. There is a lot of confusion about what is actually possible with deep learning today, and it is easy to get lost in the hype around AI and end up with misconceptions. Understanding the core functionality is very important if you want to do magic with TensorFlow.

After reading this article, you should be able to follow the flow of any TensorFlow program and go on to implement your own neural network in TensorFlow. In fact, you can now start to unpack the secrets behind AI showcases such as MNIST classifiers, image classification and music generation.

Resources:

https://www.tensorflow.org/

http://learningtensorflow.com/

– Ashish Mokalkar (Software Engineer, IoT department, Mobiliya)

Mobiliya is a global software engineering company enabling digital transformation for the world's leading organizations by leveraging emerging technology areas such as Deep Learning, Augmented Reality and the Internet of Things, along with core capabilities in software engineering and security. Headquartered in Texas, US, the company also has global R&D centres in Canada, India, China and South Korea. For more information, visit: https://www.mobiliya.com/

 
