June 6, 2018

TensorFlow Serving Inception Model Using Docker

The official docs for TensorFlow Serving are pretty convoluted and hard to follow. Here are the steps to serve the Inception model in Docker:

Install Docker (e.g. on Ubuntu):

sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
# Add Docker's official GPG key and apt repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
# Install the Docker engine itself
sudo apt-get update
sudo apt-get install docker-ce

Download the Dockerfile, build the container image, open a shell inside the container, then clone the TensorFlow Serving and TensorFlow repos and build them (the build can take about an hour).

wget https://raw.githubusercontent.com/tensorflow/serving/master/tensorflow_serving/tools/docker/Dockerfile.devel
docker build --pull -t $USER/tensorflow-serving-devel -f Dockerfile.devel .
docker run -it $USER/tensorflow-serving-devel
# Everything below runs inside the container's shell
cd ~  # go to home
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving && git clone --recursive https://github.com/tensorflow/tensorflow.git
cd tensorflow && ./configure
cd ..
bazel test tensorflow_serving/...  # builds everything and runs the test suite
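
The hour-long build lives only inside the running container, so it pays to snapshot it once it finishes. A minimal sketch using docker commit from a second terminal on the host (the container ID comes from docker ps; the :built tag is just my own label):

docker ps  # note the ID of the running tensorflow-serving-devel container
docker commit <container_id> $USER/tensorflow-serving-devel:built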



Once the build completes, we can test it by running the model server binary:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server

If the build was successful, the output should look like this:

Usage: model_server [--port=8500] [--enable_batching] [--model_name=my_name] --model_base_path=/path/to/export

Download the pre-trained Inception v3 checkpoint, export it as a servable model, and run the server:


cd ~  # back to home, so the ./serving/... paths below resolve
curl -O http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz
tar xzf inception-v3-2016-03-01.tar.gz

# Export the checkpoint in the SavedModel format the model server loads
./serving/bazel-bin/tensorflow_serving/example/inception_saved_model --checkpoint_dir=inception-v3 --output_dir=/tmp/inception-export

ls /tmp/inception-export  # should contain a numbered version directory, e.g. 1
ls inception-v3

# Start the model server in the background on port 9002, logging to inception_log
./serving/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9002 --model_name=inception --model_base_path=/tmp/inception-export &> inception_log &
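
Since the server runs in the background, do a quick sanity check before sending requests (the exact log wording may vary between versions):

jobs  # the model server should show up as a running background job
tail inception_log  # look for a message that the server is up on port 9002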

# Grab a test image
wget https://upload.wikimedia.org/wikipedia/en/a/ac/Xiang_Xiang_panda.jpg

# Send the image to the server over gRPC
./serving/bazel-bin/tensorflow_serving/example/inception_client --server=localhost:9002 --image=./Xiang_Xiang_panda.jpg



By now you should see the result of running the Inception model on the panda image printed to your console. If you get a timeout, increase the timeout from 10 to 30 seconds in inception_client.py.
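
If I recall, the timeout is passed as the second argument to the Predict call in the example client, so a one-liner like this should bump it (a sketch that assumes the 10.0-second literal is still in the file; check first, and rebuild the client if the edit doesn't take effect):

sed -i 's/request, 10.0/request, 30.0/' ~/serving/tensorflow_serving/example/inception_client.py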


Some sources:
https://github.com/llSourcell/How-to-Deploy-a-Tensorflow-Model-in-Production/blob/master/demo.ipynb
https://www.youtube.com/watch?v=T_afaArR0E8
https://www.youtube.com/watch?v=CSbfk9jXItc
https://www.tensorflow.org/serving/docker
https://www.tensorflow.org/serving/serving_inception
