Batch normalization in pretrained vgg in tensorflow



I have a naive question about how to implement batch normalization in TensorFlow. I would appreciate an explanation, sample code, or links.



For dropout, we can pass the keep probability as an argument when calling the model, like this:



with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
    model_outputs, _ = vgg.vgg_16(x_inputs, num_classes=TOT_CLASSES, is_training=True,
                                  dropout_keep_prob=args.DROPOUT_PROB)


1- Is there a similar argument for enabling batch normalization?



2- If I want to follow the instructions at https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization, do I need to change the network code itself, i.e. https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py?



OR



Can I simply apply my_inputs_norm = tf.layers.batch_normalization(x, training=training) to the inputs before calling the model, like this:



my_inputs_norm = tf.layers.batch_normalization(x, training=training)
with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=args.weight_decay)):
    model_outputs, _ = vgg.vgg_16(my_inputs_norm, num_classes=TOT_CLASSES, is_training=True,
                                  dropout_keep_prob=args.DROPOUT_PROB)
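
For context, here is a minimal sketch of how I understand batch norm is often wired into TF-Slim networks, assuming slim.batch_norm and the UPDATE_OPS behaviour described in the linked docs; the placeholders and hyper-parameters below are made up for illustration and are not part of my actual code:

    import tensorflow as tf
    import tensorflow.contrib.slim as slim
    from nets import vgg  # from tensorflow/models/research/slim

    # Hypothetical inputs, for illustration only.
    x_inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
    labels = tf.placeholder(tf.float32, [None, 1000])
    is_training = tf.placeholder(tf.bool, [])
    TOT_CLASSES = 1000

    # One way to add batch norm to every slim conv layer without editing vgg.py:
    # wrap the call in an extra arg_scope that sets normalizer_fn.
    with slim.arg_scope(vgg.vgg_arg_scope(weight_decay=5e-4)):
        with slim.arg_scope([slim.conv2d],
                            normalizer_fn=slim.batch_norm,
                            normalizer_params={'is_training': is_training}):
            model_outputs, _ = vgg.vgg_16(x_inputs,
                                          num_classes=TOT_CLASSES,
                                          is_training=True)

    # Both slim.batch_norm and tf.layers.batch_normalization register their
    # moving-average updates in tf.GraphKeys.UPDATE_OPS, so the train op has
    # to depend on them (the caveat mentioned in the linked documentation).
    loss = tf.losses.softmax_cross_entropy(labels, model_outputs)
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)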









      tensorflow batch-normalization






asked Mar 6 at 17:39 by Jacob (207)





















