
TensorFlow .pb file inference takes more than 3 seconds for an image


I am generating a .pb file from the piece of code given below:



import tensorflow as tf

with tf.Session() as sess:
    # Restore the trained model from its checkpoint files.
    gom = tf.train.import_meta_graph(r'C:\chhaya\CLITP\Tvs_graphs\job.ckpt-20.meta')
    gom.restore(sess, tf.train.latest_checkpoint(r'C:\chhaya\CLITP\Tvs_graphs'))
    graph = tf.get_default_graph()
    input_graph = graph.as_graph_def()
    output_node_name = "predictions"
    # Freeze the graph: convert all variables to constants.
    output_graph = tf.graph_util.convert_variables_to_constants(
        sess, input_graph, output_node_name.split(','))
    res_file = r'C:\chhaya\CLITP\Tvs_graphs\Savedmodel.pb'
    with tf.gfile.GFile(res_file, 'wb') as f:
        f.write(output_graph.SerializeToString())
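
As a quick sanity check, the resulting .pb file can be reloaded and its node names listed to confirm that the predictions output node was actually exported. A minimal sketch:

import tensorflow as tf

# Sketch: verify the frozen graph really contains the expected output node.
graph_def = tf.GraphDef()
with tf.gfile.GFile(r'C:\chhaya\CLITP\Tvs_graphs\Savedmodel.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print([n.name for n in graph_def.node if 'predictions' in n.name])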


But when running inference from the .pb file, it takes 5 seconds for the first image and about 3 seconds for each image after that. The code for inference is given below.



import os
import time

import cv2
import numpy as np
import scipy
from scipy import misc
import tensorflow as tf

# Load the frozen graph from disk.
frozen_graph = r'C:\chhaya\CLITP\Tvs_graphs\Savedmodel1.pb'
with tf.gfile.GFile(frozen_graph, 'rb') as f:
    reco = tf.GraphDef()
    reco.ParseFromString(f.read())

with tf.Graph().as_default() as gre:
    tf.import_graph_def(reco, input_map=None, return_elements=None, name='')

l_input = gre.get_tensor_by_name('input_image:0')   # input tensor
l_output = gre.get_tensor_by_name('predictions:0')  # output tensor

# Read and preprocess every image into one array.
img_dir = r'C:\chhayaCLITP\Tvs Mysore\NQCOVERFRONTLR'
files = os.listdir(img_dir)
imageslst = np.zeros((len(files), 224, 224, 3))
for i, file in enumerate(files):
    image = scipy.misc.imread(os.path.join(img_dir, file)).astype(np.uint8)
    resized_img = cv2.resize(image, (224, 224))
    if image.shape[2] == 4:  # drop the alpha channel if present
        resized_img = cv2.cvtColor(resized_img, cv2.COLOR_RGBA2RGB)
    imageslst[i, :, :, :] = np.expand_dims(resized_img, axis=0)

init = tf.global_variables_initializer()
with tf.Session(graph=gre) as sess:
    sess.run(init)
    for i in range(4):
        t1 = time.time()
        # Note: tf.nn.sigmoid() is called inside the timing loop here.
        Session_out = sess.run(tf.nn.sigmoid(l_output),
                               feed_dict={l_input: imageslst[:1]})
        print(Session_out.shape)
        t2 = time.time()
        print('time:' + str(t2 - t1))


I am using tensorflow-gpu 1.12 on Windows 7 with Anaconda Python 3.5.
I also tried explicitly assigning the predictions to the GPU or the CPU, something like this:



with tf.device('/gpu:0'):
    for i in range(4):
        t1 = time.time()
        Session_out = sess.run(tf.nn.sigmoid(l_output),
                               feed_dict={l_input: imageslst[:1]})
        print(Session_out.shape)
        t2 = time.time()
        print('time:' + str(t2 - t1))


What I noticed here is that no matter whether I assign the CPU or the GPU, the time taken is always the same. The model is a transfer-learning VGG16 model. Am I doing something wrong in the code? My GPU is a Quadro with 8 GB.
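
Note that tf.device() only affects ops created inside its scope, so wrapping the sess.run loop in it does not move an already-imported graph between devices. To see where ops actually run, the session can log device placement; a minimal sketch, assuming the same gre graph as above:

# Sketch: ask TF 1.x to log the device each op is placed on.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(graph=gre, config=config) as sess:
    out = sess.run(l_output, feed_dict={l_input: imageslst[:1]})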










python tensorflow computer-vision conv-neural-network

asked Mar 7 at 12:55, edited Mar 7 at 15:53 by chhaya kumar das


1 Answer

After a lot of trial and error, what I noticed was that if I remove the tf.nn.sigmoid call from the inference step, the time taken is less than 10 milliseconds, which is exactly how the model should perform.
Before, we were using the line below to get predictions, which resulted in the high inference time:



Session_out = sess.run(tf.nn.sigmoid(l_output), feed_dict={l_input: imageslst[:1]})


So instead of using the line above, we now use this:



Session_out = sess.run(l_output, feed_dict={l_input: imageslst[:1]})


This gives us accurate results, although I am not quite sure why that happens.
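
One likely reason: tf.nn.sigmoid(l_output) builds a new sigmoid op in the graph on every loop iteration, so the graph keeps growing and each sess.run call pays graph-modification overhead. If the sigmoid is actually needed, it can be built once, outside the timing loop; a minimal sketch:

# Sketch: build the sigmoid op a single time, then reuse it in the loop.
with gre.as_default():
    sigmoid_out = tf.nn.sigmoid(l_output)

with tf.Session(graph=gre) as sess:
    for i in range(4):
        t1 = time.time()
        Session_out = sess.run(sigmoid_out, feed_dict={l_input: imageslst[:1]})
        print('time:' + str(time.time() - t1))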






answered Mar 8 at 12:27 by chhaya kumar das