Just upgraded to TensorFlow 1.0.1 and Keras 2.0.1

$ pip install --upgrade keras --user

Collecting keras
 Downloading Keras-2.0.1.tar.gz (192kB)
 100% |████████████████████████████████| 194kB 2.9MB/s
Requirement already up-to-date: theano in ./.local/lib/python2.7/site-packages (from keras)
Requirement already up-to-date: pyyaml in ./.local/lib/python2.7/site-packages (from keras)
Requirement already up-to-date: six in ./.local/lib/python2.7/site-packages (from keras)
Requirement already up-to-date: numpy>=1.7.1 in /usr/local/lib/python2.7/dist-packages (from theano->keras)
Collecting scipy>=0.11 (from theano->keras)
 Downloading scipy-0.19.0-cp27-cp27mu-manylinux1_x86_64.whl (45.0MB)
 100% |████████████████████████████████| 45.0MB 34kB/s
Building wheels for collected packages: keras
 Running setup.py bdist_wheel for keras ... done
 Stored in directory: /home/avkash/.cache/pip/wheels/fa/15/f9/57473734e407749529bf55e6b5038640dc7279d5718b2c368a
Successfully built keras
Installing collected packages: keras, scipy
 Found existing installation: Keras 1.2.2
 Uninstalling Keras-1.2.2:
 Successfully uninstalled Keras-1.2.2
Successfully installed keras-2.0.1 scipy-0.19.0

$ pip install --upgrade tensorflow-gpu --user

Collecting tensorflow-gpu
 Downloading tensorflow_gpu-1.0.1-cp27-cp27mu-manylinux1_x86_64.whl (94.8MB)
 100% |████████████████████████████████| 94.8MB 16kB/s
Requirement already up-to-date: mock>=2.0.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow-gpu)
Requirement already up-to-date: numpy>=1.11.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow-gpu)
Requirement already up-to-date: protobuf>=3.1.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow-gpu)
Requirement already up-to-date: wheel in /usr/lib/python2.7/dist-packages (from tensorflow-gpu)
Requirement already up-to-date: six>=1.10.0 in ./.local/lib/python2.7/site-packages (from tensorflow-gpu)
Requirement already up-to-date: funcsigs>=1; python_version < "3.3" in /usr/local/lib/python2.7/dist-packages (from mock>=2.0.0->tensorflow-gpu)
Requirement already up-to-date: pbr>=0.11 in ./.local/lib/python2.7/site-packages (from mock>=2.0.0->tensorflow-gpu)
Requirement already up-to-date: setuptools in ./.local/lib/python2.7/site-packages (from protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: appdirs>=1.4.0 in ./.local/lib/python2.7/site-packages (from setuptools->protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: packaging>=16.8 in ./.local/lib/python2.7/site-packages (from setuptools->protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: pyparsing in ./.local/lib/python2.7/site-packages (from packaging>=16.8->setuptools->protobuf>=3.1.0->tensorflow-gpu)
Installing collected packages: tensorflow-gpu
Successfully installed tensorflow-gpu-1.0.1

$ python -c 'import keras as tf; print tf.__version__'

Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
2.0.1

$ python -c 'import tensorflow as tf; print tf.__version__'

I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
1.0.1
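If a script should refuse to run on an older stack, the dotted version strings reported above can be compared with a small stdlib helper. This is a hypothetical utility of my own, not part of Keras or TensorFlow; in real code you would feed it `keras.__version__` or `tf.__version__`:

```python
def version_tuple(v):
    """Convert a dotted version string like '1.0.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def at_least(installed, required):
    """Return True if the installed version meets the required minimum."""
    return version_tuple(installed) >= version_tuple(required)

# The versions reported above:
print(at_least("2.0.1", "2.0.0"))   # True  (Keras)
print(at_least("1.0.1", "1.0.0"))   # True  (TensorFlow)
```

Tuple comparison handles multi-digit components correctly ("0.12.1" stays below "1.0.0"), which naive string comparison does not.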

Deep Learning session from Google Next 2017

TensorFlow and Deep Learning without a PhD:

With TensorFlow, deep machine learning transitions from an area of research to mainstream software engineering. In this video, Martin Gorner demonstrates how to construct and train a neural network that recognizes handwritten digits. Along the way, he’ll describe some “tricks of the trade” used in neural network design, and finally, he’ll bring the recognition accuracy of his model above 99%.

Part 1:

Part 2:

 

TensorFlow 0.12.1, CUDA 8.0 and cuDNN 5.1 on Ubuntu 16.04 with Titan X

OS: Ubuntu 16.04, Nvidia Display Card – Titan X

Problem:

>>> import tensorflow as tf

I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:119] Couldn’t open CUDA library libcudnn.so. LD_LIBRARY_PATH:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:3459] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally

Note: You can see above that not all TF-specific libraries were loaded; the cuDNN library failed to load.

>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

E tensorflow/stream_executor/cuda/cuda_driver.cc:509] failed call to cuInit: CUresult(-1)
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: mr-dl8
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: mr-dl8
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: Invalid argument: expected %d.%d or %d.%d.%d form for driver version; got “1”
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:363] driver version file contents: “””NVRM version: NVIDIA UNIX x86_64 Kernel Module 370.28 Thu Sep 1 19:45:04 PDT 2016
GCC version: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)
“””
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 370.28.0
Device mapping: no known devices.
I tensorflow/core/common_runtime/direct_session.cc:255] Device mapping:

Note: You can see that TF could not initialize the CUDA driver.
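The diagnostic above shows that libcuda reported just "1" where TF expected a dotted version, while the kernel module banner carries the real version (370.28). The banner lives in /proc/driver/nvidia/version and can be parsed directly; a rough sketch (the regex is my own, not what TF uses internally):

```python
import re

def parse_nvrm_version(text):
    """Extract the driver version (e.g. '370.28') from the NVRM banner text."""
    m = re.search(r"Kernel Module\s+(\d+\.\d+(?:\.\d+)?)", text)
    return m.group(1) if m else None

# The banner text from the log above (in practice, read /proc/driver/nvidia/version)
banner = ("NVRM version: NVIDIA UNIX x86_64 Kernel Module  370.28  "
          "Thu Sep 1 19:45:04 PDT 2016")
print(parse_nvrm_version(banner))  # 370.28
```

Comparing this kernel-module version against what userspace libcuda reports is a quick way to spot the driver/library mismatch that the solution below fixes.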

Solution:

We are going to install the Nvidia 370 driver with CUDA 8.0 and cuDNN 5.1.

  • Remove all nvidia drivers
    • sudo apt-get purge nvidia*
  • Search for nvidia drivers in the cache
    • apt-cache search nvidia
    • apt search nvidia
  • Add proper driver repo
    • sudo add-apt-repository ppa:graphics-drivers/ppa
  • Update the system repo
    • sudo apt-get update
  • Search for nvidia 370 driver
    • sudo apt search nvidia | grep 370
    • Note: You should see nvidia-370 entries in the results
  • Install the driver:
    • sudo apt-get install nvidia-370 nvidia-settings
  • Must Reboot
    • sudo reboot

After reboot you will see the following (shows driver version 370.28):

$ nvidia-smi

Tue Jan 10 19:05:18 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 370.28                 Driver Version: 370.28                     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)    Off  | 0000:81:00.0      On |                  N/A |
| 23%   32C    P8    10W / 250W |  11590MiB / 12188MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1162    G   /usr/lib/xorg/Xorg                              60MiB |
+-----------------------------------------------------------------------------+

Disable unattended upgrades so the machine does not auto-update the driver.

$ sudo apt-get remove unattended-upgrades

Now install the CUDA Toolkit for Ubuntu 16.04 as below:

$ wget https://developer.nvidia.com/compute/cuda/8.0/prod/local_installers/cuda_8.0.44_linux-run
$ bash ./cuda_8.0.44_linux-run --override

Note: When the installer asks, do not install the driver; install only the CUDA 8.0 toolkit and the samples at the default locations.

Download and install cuDNN 5.1 properly:

  • Create an account at the Nvidia website and download cudnn-8.0-linux-x64-v5.1.tgz
  • Unzip cudnn-8.0-linux-x64-v5.1.tgz
  • Update the CUDA lib and include directories with the cuDNN files as below:
    • sudo cp cudnn/lib64/* /usr/local/cuda/lib64/
    • sudo cp cudnn/include/* /usr/local/cuda/include/
    • sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
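After copying, you can confirm which cuDNN version the toolkit now sees by reading the #define lines in the copied header. A stdlib sketch (in practice you would read /usr/local/cuda/include/cudnn.h, the path used above; the sample text here is illustrative):

```python
import re

def cudnn_version(header_text):
    """Pull CUDNN_MAJOR/MINOR/PATCHLEVEL out of cudnn.h contents."""
    parts = {}
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(r"#define\s+%s\s+(\d+)" % name, header_text)
        if m:
            parts[name] = int(m.group(1))
    return parts

# Sample of the #define block found in cudnn.h (values illustrative for 5.1.x)
sample = """
#define CUDNN_MAJOR      5
#define CUDNN_MINOR      1
#define CUDNN_PATCHLEVEL 10
"""
print(cudnn_version(sample))
```

If CUDNN_MAJOR does not match what TensorFlow was built against (libcudnn.so.5 in the logs above), the import-time DSO load will fail again.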

REBOOT the Machine. 

Install Tensorflow 0.12.1 (Prebuilt Binaries)

export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.1-cp27-none-linux_x86_64.whl
sudo pip install --upgrade $TF_BINARY_URL

Test now:

>>> import tensorflow as tf

I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally

>>> a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')

>>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
>>> c = tf.matmul(a, b)
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:81:00.0
Total memory: 11.90GiB
Free memory: 11.69GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:81:00.0)
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: TITAN X (Pascal), pci bus id: 0000:81:00.0
I tensorflow/core/common_runtime/direct_session.cc:255] Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: TITAN X (Pascal), pci bus id: 0000:81:00.0

>>> print sess.run(c)

MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
b: (Const): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] b: (Const)/job:localhost/replica:0/task:0/gpu:0
a: (Const): /job:localhost/replica:0/task:0/gpu:0
I tensorflow/core/common_runtime/simple_placer.cc:827] a: (Const)/job:localhost/replica:0/task:0/gpu:0
[[ 22. 28.]
[ 49. 64.]]
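As a sanity check, the [[22. 28.] [49. 64.]] result matches a plain-Python matrix multiply of the same constants (no TensorFlow needed):

```python
def matmul(a, b):
    """Naive matrix multiply: (m x k) times (k x n) -> (m x n)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

a = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]    # shape [2, 3]
b = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # shape [3, 2]
print(matmul(a, b))  # [[22.0, 28.0], [49.0, 64.0]]
```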

TensorFlow and TensorBoard – working together

This article shows how to write your TensorFlow program and enable TensorBoard alongside it to analyze the tensor graph and other components, i.e. scalars, images, audio, etc.

If you are new to TensorFlow, you can learn about it here… And if you want to know what TensorBoard is, look here.

How to set it up:

  1. Define your TensorFlow placeholders, variables, etc.
  2. Define a session and run it first
    1. Note: See the following example for how to do it.
  3. Define the TensorBoard summary FileWriter and graph in a specific directory as below:
    1. writer = tf.summary.FileWriter("/tmp/test-tf", sess.graph)
    2. tf.train.write_graph(sess.graph_def, '/tmp/test-tf', 'graph.pbtxt')
  4. Start TensorBoard as below:

    $ tensorboard --logdir /tmp/test-tf/

  5. Check your folder for events and graph logs:
    • $ ll /tmp/test-tf/
    • -rw-rw-r-- 1 ubuntu ubuntu 23565 Dec 11 19:45 events.out.tfevents.1481485540.ip-10-236-190-133
    • -rw-rw-r-- 1 ubuntu ubuntu 24354 Dec 11 19:45 graph.pbtxt
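The events file name encodes the write time and the host that produced it (events.out.tfevents.&lt;unix-seconds&gt;.&lt;hostname&gt;). A small sketch to decode the file name from the listing above (the helper is my own, not a TensorFlow API):

```python
from datetime import datetime

def decode_events_name(name):
    """Split a tfevents file name into (UTC timestamp, hostname)."""
    parts = name.split(".")
    # ['events', 'out', 'tfevents', '1481485540', 'ip-10-236-190-133']
    stamp = datetime.utcfromtimestamp(int(parts[3]))
    host = ".".join(parts[4:])
    return stamp, host

stamp, host = decode_events_name(
    "events.out.tfevents.1481485540.ip-10-236-190-133")
print(stamp, host)  # 2016-12-11 19:45:40 ip-10-236-190-133
```

This is handy when a log directory accumulates event files from several runs or machines and you need to tell them apart.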

Following is an example Jupyter notebook:

https://github.com/Avkash/mldl/blob/master/notebook/TF%2Bwith%2Btensorboard%2Bexample.ipynb

Here is a screenshot of TensorBoard and the graph:

[Screenshot: TensorBoard graph view, 2017-01-04]

 

Shoot like an artist – Using imagination, artificial intelligence, Tensorflow (& GPU)

[Image: natural-art example]

After I got OpenCV, MXNet and TensorFlow working with CUDA, I was looking for a TensorFlow implementation of the "A Neural Algorithm of Artistic Style" research paper, and I found this.

I found a TensorFlow implementation by Anish of the above research paper and started from there.

Why Tensorflow:

  • TensorFlow supports automatic differentiation and has a clean API
  • Research paper steps are translated into code here
  • It has support for GPU (CUDA) so I can get work done faster (time is $$)

 

Pre-requisite:

  • Ubuntu 16.04
  • Python 2.7
  • Tensorflow with GPU support

Commands:

  • Command for help:
    • $ python neural_style.py --help
  • Basic command:
    • $ python neural_style.py --content your_content.jpg --style your_style.jpg --output output_file_name.png --iterations 500
  • If you decide to use the output as the input style, you can do so to get improved results
    • $ python neural_style.py --content your_content.jpg --style your_previous_output.png --output new_output_file_name.png --iterations 500

Few things I found:

  • If both your content and style images are PNG, you may get the following error
    • ValueError: Dimensions must be equal, but are 4 and 3
  • To solve it, use both content and style images as JPG.
  • If your machine has less memory, keep both content and style images smaller, under 480×480.
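The ValueError comes from PNGs often carrying a fourth alpha channel while JPGs carry three, so the tensor shapes disagree. Converting the files to JPG works; alternatively you could drop the alpha channel before feeding the network. A toy pure-Python sketch of the idea (a real pipeline would use PIL's convert('RGB') or slice a NumPy array like img[:, :, :3]):

```python
def strip_alpha(image):
    """Drop the 4th (alpha) value from every pixel of an H x W x 4 grid."""
    return [[pixel[:3] for pixel in row] for row in image]

rgba = [[(255, 0, 0, 255), (0, 255, 0, 128)]]  # a 1 x 2 RGBA "image"
print(strip_alpha(rgba))  # [[(255, 0, 0), (0, 255, 0)]]
```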

Example:

Top left after 500 iterations, top right after 2000 iterations and bottom image after 3500 iterations:

 

Tensorflow – A working Iris classification example notebook for starters

Key Features:

  • Download training and testing data
  • Pass the data frame to tensorflow
  • Build 3 layer DNN with 10, 20, 10 units
  • Fit model
  • Evaluate accuracy
  • Classify two new flower samples

Notebook:

In [1]:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import numpy as np

tf.logging.set_verbosity(tf.logging.INFO)
In [2]:
import csv
import tempfile
import urllib
import urllib2

#Location for training and test data set
#http://download.tensorflow.org/data/iris_training.csv
#http://download.tensorflow.org/data/iris_test.csv

#url = 'http://download.tensorflow.org/data/iris_training.csv'
#response = urllib2.urlopen(url)
#IRIS_TRAIN = csv.reader(response)

#url = 'http://download.tensorflow.org/data/iris_test.csv'
#response = urllib2.urlopen(url)
#IRIS_TEST = csv.reader(response)
In [3]:
# TRAINING 
train_file = tempfile.NamedTemporaryFile(delete=False)
urllib.urlretrieve("http://download.tensorflow.org/data/iris_training.csv", train_file.name)
train_file_name = train_file.name
train_file.close()
print("Training data is downloaded to %s" % train_file_name)
    
Training data is downloaded to /tmp/tmpMORAXc
In [4]:
# TEST DATA
test_file = tempfile.NamedTemporaryFile(delete=False)
urllib.urlretrieve("http://download.tensorflow.org/data/iris_test.csv", test_file.name)
test_file_name = test_file.name
test_file.close()
print("Test data is downloaded to %s" % test_file_name)
Test data is downloaded to /tmp/tmpJEW5rv
In [5]:
# Data sets
#IRIS_TRAINING = "iris_training.csv"
#IRIS_TEST = "iris_test.csv"
IRIS_TRAINING = train_file_name
IRIS_TEST = test_file_name
In [7]:
# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)
test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TEST,
    target_dtype=np.int,
    features_dtype=np.float32)
In [10]:
# Specify that all features have real-value data
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]
In [11]:
feature_columns
Out[11]:
[_RealValuedColumn(column_name='', dimension=4, default_value=None, dtype=tf.float32, normalizer=None)]
In [12]:
# Build 3 layer DNN with 10, 20, 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3,
                                            model_dir="/tmp/iris_model")
WARNING:tensorflow:Change warning: default value of `enable_centered_bias` will change after 2016-10-09. It will be disabled by default.Instructions for keeping existing behaviour:
Explicitly set `enable_centered_bias` to 'True' if you want to keep existing behaviour.
WARNING:tensorflow:Using default config.
INFO:tensorflow:Using config: {'task': 0, 'save_summary_steps': 100, 'keep_checkpoint_max': 5, '_is_chief': True, 'save_checkpoints_secs': 600, 'evaluation_master': '', 'tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1
}
, 'master': '', 'keep_checkpoint_every_n_hours': 10000, '_job_name': None, 'cluster_spec': None, 'tf_random_seed': None, 'num_ps_replicas': 0}
In [13]:
# Fit model.
classifier.fit(x=training_set.data,
               y=training_set.target,
               steps=2000)
INFO:tensorflow:Setting feature info to TensorSignature(dtype=tf.float32, shape=TensorShape([Dimension(None), Dimension(4)]), is_sparse=False)
INFO:tensorflow:Setting targets info to TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(None)]), is_sparse=False)
INFO:tensorflow:Transforming feature_column _RealValuedColumn(column_name='', dimension=4, default_value=None, dtype=tf.float32, normalizer=None)
INFO:tensorflow:Create CheckpointSaverHook
INFO:tensorflow:loss = 1.30827, step = 1
INFO:tensorflow:Saving checkpoints for 1 into /tmp/iris_model/model.ckpt.
INFO:tensorflow:loss = 0.23881, step = 101
INFO:tensorflow:loss = 0.0918469, step = 201
INFO:tensorflow:loss = 0.0948484, step = 301
INFO:tensorflow:loss = 0.0614423, step = 401
INFO:tensorflow:loss = 0.0585238, step = 501
INFO:tensorflow:loss = 0.0556661, step = 601
INFO:tensorflow:loss = 0.0546077, step = 701
INFO:tensorflow:loss = 0.0524153, step = 801
INFO:tensorflow:loss = 0.0511613, step = 901
INFO:tensorflow:loss = 0.0499391, step = 1001
INFO:tensorflow:loss = 0.0488964, step = 1101
INFO:tensorflow:loss = 0.0479813, step = 1201
INFO:tensorflow:loss = 0.0471655, step = 1301
INFO:tensorflow:loss = 0.046428, step = 1401
INFO:tensorflow:loss = 0.0456749, step = 1501
INFO:tensorflow:loss = 0.0450335, step = 1601
INFO:tensorflow:loss = 0.0444158, step = 1701
INFO:tensorflow:loss = 0.0437809, step = 1801
INFO:tensorflow:loss = 0.0432727, step = 1901
INFO:tensorflow:Saving checkpoints for 2000 into /tmp/iris_model/model.ckpt.
INFO:tensorflow:Loss for final step: 0.0428384.
Out[13]:
Estimator(params={'enable_centered_bias': True, 'activation_fn': , 'weight_column_name': None, 'hidden_units': [10, 20, 10], 'feature_columns': [_RealValuedColumn(column_name='', dimension=4, default_value=None, dtype=tf.float32, normalizer=None)], 'n_classes': 3, 'optimizer': 'Adagrad', 'dropout': None, 'gradient_clip_norm': None, 'num_ps_replicas': 0})
In [14]:
# Evaluate accuracy.
accuracy_score = classifier.evaluate(x=test_set.data,
                                     y=test_set.target)["accuracy"]
print('Accuracy: {0:f}'.format(accuracy_score))
WARNING:tensorflow:Given features: Tensor("input:0", shape=(?, 4), dtype=float32), required signatures: TensorSignature(dtype=tf.float32, shape=TensorShape([Dimension(None), Dimension(4)]), is_sparse=False).
WARNING:tensorflow:Given targets: Tensor("output:0", shape=(?,), dtype=int64), required signatures: TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(None)]), is_sparse=False).
INFO:tensorflow:Transforming feature_column _RealValuedColumn(column_name='', dimension=4, default_value=None, dtype=tf.float32, normalizer=None)
INFO:tensorflow:Restored model from /tmp/iris_model
INFO:tensorflow:Eval steps [0,inf) for training step 2000.
INFO:tensorflow:Input iterator is exhausted.
INFO:tensorflow:Saving evaluation summary for 2000 step: loss = 0.0670678, accuracy = 0.966667
Accuracy: 0.966667
In [15]:
# Classify two new flower samples.
new_samples = np.array(
    [[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=float)
y = list(classifier.predict(new_samples, as_iterable=True))
print('Predictions: {}'.format(str(y)))
WARNING:tensorflow:float64 is not supported by many models, consider casting to float32.
INFO:tensorflow:Transforming feature_column _RealValuedColumn(column_name='', dimension=4, default_value=None, dtype=tf.float32, normalizer=None)
INFO:tensorflow:Loading model from checkpoint: /tmp/iris_model/model.ckpt-2000-?????-of-00001.
Predictions: [array([1]), array([2])]

Handling ImportError: cannot import name pywrap_tensorflow

Problem: importing tensorflow in CLI Python works fine; however, importing tensorflow in Jupyter gives the following error:

ImportError: cannot import name pywrap_tensorflow

Importing tensorflow in jupyter notebook (Not working Error):

 import tensorflow as tf
ImportErrorTraceback (most recent call last)
 in ()
      2 import cv2 as cv2
      3 from PIL import Image
----> 4 import tensorflow as tf
      5 #'/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow'

/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/__init__.py in ()
     21 from __future__ import print_function
     22 
---> 23 from tensorflow.python import *
     24 
     25 

/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/python/__init__.py in ()
     47 _default_dlopen_flags = sys.getdlopenflags()
     48 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL)
---> 49 from tensorflow.python import pywrap_tensorflow
     50 sys.setdlopenflags(_default_dlopen_flags)
     51 

ImportError: cannot import name pywrap_tensorflow

Importing tensorflow in CLI python (Working):

$ python

Python 2.7.12 (default, Jul 1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import tensorflow
>>> import tensorflow as tf
>>> print(tf)
<module 'tensorflow' from '/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/__init__.pyc'>
>>> help(tf)

>>> print tf.__version__
0.11.0rc2
>>> print tf.__path__
['/home/ubuntu/.local/lib/python2.7/site-packages/tensorflow']

Troubleshooting:

Checking where the tensorflow package is located:

$ ls /home/ubuntu/.local/lib/python2.7/site-packages/tensorflow/
contrib core examples include __init__.py __init__.pyc models python tensorboard tools

Where is jupyter?

$ ps -ef | grep jupyter
ubuntu 1347 1 0 Nov11 ? 00:00:46 /usr/bin/python /usr/local/bin/jupyter-notebook --no-browser --ip=* --port=8888

Where most of the python packages are located: 

>>> import sklearn
>>> print(sklearn.__path__)
['/home/ubuntu/.local/lib/python2.7/site-packages/sklearn']
>>> import mxnet
>>> print(mxnet.__path__)
['/usr/local/lib/python2.7/dist-packages/mxnet-0.7.0-py2.7.egg/mxnet']

Solution:

  1. If you installed tensorflow while Jupyter was running, importing tensorflow will not work in Jupyter (check whether it works in the Python CLI). You just need to restart the Jupyter notebook and it should work.
  2. If you have the problem in both Jupyter and the Python CLI, then you need to start Jupyter from a different location.
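A quick way to see why Jupyter and the CLI disagree is to print which interpreter and module search path each one uses; run this snippet in both environments and compare the output (pure stdlib, works in any Python):

```python
import sys

def interpreter_info():
    """Report the running interpreter and the head of its module search path."""
    return {"executable": sys.executable,       # which python binary is running
            "version": sys.version.split()[0],  # e.g. '2.7.12'
            "path_head": sys.path[:3]}          # first few import locations

info = interpreter_info()
print(info["executable"])
print(info["version"])
print(info["path_head"])
```

If the two executables or paths differ (as with the ./.local vs /usr/local split in the listings above), the Jupyter kernel is simply importing from a different site-packages than your shell.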