

Batch normalization in pretrained VGG in TensorFlow













I have a naive question about how to use batch normalization in TensorFlow; I would appreciate an explanation, sample code, and links.



For dropout, we can set the keep probability as an argument when calling the model, like this:



with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
    model_outputs, _ = vgg.vgg_16(x_inputs, num_classes=TOT_CLASSES,
                                  is_training=True,
                                  dropout_keep_prob=args.DROPOUT_PROB)


1. Is there something similar for turning on batch normalization?



2. If I want to follow the instructions at https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization, do I need to change the network code itself, i.e. https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py?



OR



Can I simply apply my_inputs_norm = tf.layers.batch_normalization(x, training=training) to the inputs before calling the model, like this:



my_inputs_norm = tf.layers.batch_normalization(x, training=training)
with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
    model_outputs, _ = vgg.vgg_16(my_inputs_norm, num_classes=TOT_CLASSES,
                                  is_training=True,
                                  dropout_keep_prob=args.DROPOUT_PROB)
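For reference, my understanding of the second option is that tf.layers.batch_normalization only updates its moving mean and variance through the ops it adds to tf.GraphKeys.UPDATE_OPS, so the training step would have to be wired up roughly as in the sketch below (the loss and optimizer here are just placeholders standing in for whatever the real training script defines):

import tensorflow as tf

# Placeholders standing in for the real input pipeline.
x = tf.placeholder(tf.float32, [None, 224, 224, 3])
training = tf.placeholder(tf.bool, [])

my_inputs_norm = tf.layers.batch_normalization(x, training=training)

# Dummy loss and optimizer, only to make the sketch self-contained.
loss = tf.reduce_mean(tf.square(my_inputs_norm))
optimizer = tf.train.GradientDescentOptimizer(0.01)

# batch_normalization registers its moving-average updates in
# tf.GraphKeys.UPDATE_OPS; they have to run together with the train op.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)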




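And on question 1, the closest mechanism I have found in slim is normalizer_fn: wrapping the call in an extra arg_scope that attaches slim.batch_norm to every slim.conv2d built inside it. A rough sketch of what I mean is below; the batch-norm parameter values are only example assumptions, and I am aware the pretrained VGG checkpoint would not contain the new batch-norm variables:

# Example batch-norm settings; the exact values are an assumption.
batch_norm_params = {
    'is_training': True,  # would normally come from a training flag
    'decay': 0.997,
    'epsilon': 1e-5,
}

with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
    # Attach batch norm to every conv layer created inside this scope.
    with slim.arg_scope([slim.conv2d],
                        normalizer_fn=slim.batch_norm,
                        normalizer_params=batch_norm_params):
        model_outputs, _ = vgg.vgg_16(x_inputs, num_classes=TOT_CLASSES,
                                      is_training=True,
                                      dropout_keep_prob=args.DROPOUT_PROB)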





tensorflow batch-normalization






asked Mar 6 at 17:39









Jacob

207