How to get the values of convolutional layers in TensorFlow?



I have the code below from a GitHub tutorial, and I want to access the values of each "x" layer and save them into a NumPy array after training is completed.



def decoder(sampled_z, keep_prob):
    with tf.variable_scope("decoder", reuse=None):
        x = tf.layers.dense(sampled_z, units=inputs_decoder, activation=lrelu)
        x = tf.layers.dense(x, units=inputs_decoder * 2 + 1, activation=lrelu)
        x = tf.reshape(x, reshaped_dim)
        x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=2,
                                       padding='same', activation=tf.nn.relu)
        x = tf.nn.dropout(x, keep_prob)
        x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1,
                                       padding='same', activation=tf.nn.relu)
        x = tf.nn.dropout(x, keep_prob)
        x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1,
                                       padding='same', activation=tf.nn.relu)
        x = tf.contrib.layers.flatten(x)
        x = tf.layers.dense(x, units=28 * 28, activation=tf.nn.sigmoid)
        img = tf.reshape(x, shape=[-1, 28, 28])
        return img

python tensorflow python-3.6






asked Mar 9 at 4:42 by Alla Abdella
edited Mar 9 at 20:30 by Vlad

1 Answer






Regardless of whether you have a convolutional or a dense layer, and whether or not you have finished training, you can access the values of your variables through the session interface (once they have been initialized).

Consider the following example:



import tensorflow as tf

def two_layer_perceptron(x):
    with x.graph.as_default():
        with tf.name_scope('fc'):
            fc = tf.layers.dense(
                inputs=x, units=2,
                kernel_initializer=tf.initializers.truncated_normal)
        with tf.name_scope('logits'):
            logits = tf.layers.dense(
                inputs=fc, units=2,
                kernel_initializer=tf.initializers.truncated_normal)
    return logits

x = tf.placeholder(tf.float32, shape=(None, 2))
logits = two_layer_perceptron(x)

# define loss, train operation and start training

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # train here
    # ...
    # sess.run(train_op, feed_dict=...)
    # ...
    # when training is finished, do:
    trainable_vars = tf.trainable_variables()
    vars_vals = sess.run(trainable_vars)
    vars_and_names = [(val, var.name) for val, var in zip(vars_vals, trainable_vars)]

    for val, name in vars_and_names:
        print(name, type(val), '\n', val)

# dense/kernel:0 <class 'numpy.ndarray'>
# [[ 0.23275916  0.7079906 ]
#  [-1.0366516   1.9141678 ]]
# dense/bias:0 <class 'numpy.ndarray'>
# [0. 0.]
# dense_1/kernel:0 <class 'numpy.ndarray'>
# [[-0.55649596 -1.4910121 ]
#  [ 0.54917735  0.39449152]]
# dense_1/bias:0 <class 'numpy.ndarray'>
# [0. 0.]


If you want access to specific variables in your network, you may add them to a collection via tf.add_to_collection() and later retrieve them via tf.get_collection(), or you can simply filter by variable name from the list of all variables (e.g. [v for v in tf.trainable_variables() if 'conv' in v.name]).
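A minimal sketch of that name-filtering approach, combined with saving the fetched values to disk (an illustrative addition assuming TF 1.x graph mode and an already-open session sess, as in the example above; the 'conv2d_transpose' substring and the .npy file naming are assumptions, not part of the original answer):

import numpy as np
import tensorflow as tf

# Assumes the graph has been built and trained inside an open tf.Session `sess`
# (TF 1.x graph mode, as in the example above).
conv_vars = [v for v in tf.trainable_variables()
             if 'conv2d_transpose' in v.name]   # keep only the deconv kernels/biases

conv_vals = sess.run(conv_vars)                 # returns a list of numpy arrays

for var, val in zip(conv_vars, conv_vals):
    # e.g. 'decoder/conv2d_transpose/kernel:0' -> 'decoder_conv2d_transpose_kernel_0.npy'
    fname = var.name.replace('/', '_').replace(':', '_') + '.npy'
    np.save(fname, val)                         # one .npy file per variable

With the decoder from the question, the variables created by tf.layers.conv2d_transpose inside the "decoder" scope typically get names like decoder/conv2d_transpose/kernel:0, decoder/conv2d_transpose_1/kernel:0, and so on, which is what the substring filter relies on.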






answered Mar 9 at 9:57 by Vlad

• Should I apply this code (trainable_vars = tf.trainable_variables(); vars_vals = sess.run(trainable_vars); vars_and_names = [(val, var.name) for val, var in zip(vars_vals, trainable_vars)]) after the training is completed, or inside the training loop while I am training? Also, all the layers in my example are assigned to the same name "x"; how do I access each one individually?
  – Alla Abdella, Mar 9 at 20:35







• 1. You can apply this code whenever you want to see the values, during training or after training is completed. 2. Each "x" stores a reference to a variable, and when you assign a new variable to that name you lose access to the previous one. Without storing the reference you can access them only if you add them to collections, or by assigning names and 'filtering' the variables you need from all trainable variables, as I mentioned in my answer (see the sketch after these comments).
  – Vlad, Mar 9 at 20:56











• Thank you for your time and explanation.
  – Alla Abdella, Mar 9 at 21:37











• Glad to help.
  – Vlad, Mar 9 at 21:48











• If it answered your question, please consider pressing the "Accept this answer" button.
  – Vlad, Mar 11 at 9:06
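Following up on the comment thread above: since every assignment to x in the question's decoder overwrites the previous reference, one way to get the per-layer activation values (rather than the layer weights) is to keep a handle on each intermediate tensor and fetch them all in a single sess.run call. Below is a minimal sketch of that idea; the endpoints dict, its keys, and the saving step are illustrative additions, and the snippet reuses the names inputs_decoder, lrelu and reshaped_dim that the question's code leaves undefined:

import numpy as np
import tensorflow as tf

def decoder(sampled_z, keep_prob):
    endpoints = {}  # keeps a reference to every intermediate "x" tensor
    with tf.variable_scope("decoder", reuse=None):
        x = tf.layers.dense(sampled_z, units=inputs_decoder, activation=lrelu)
        endpoints['dense_1'] = x
        x = tf.layers.dense(x, units=inputs_decoder * 2 + 1, activation=lrelu)
        endpoints['dense_2'] = x
        x = tf.reshape(x, reshaped_dim)
        x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=2,
                                       padding='same', activation=tf.nn.relu)
        endpoints['deconv_1'] = x
        x = tf.nn.dropout(x, keep_prob)
        x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1,
                                       padding='same', activation=tf.nn.relu)
        endpoints['deconv_2'] = x
        x = tf.nn.dropout(x, keep_prob)
        x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1,
                                       padding='same', activation=tf.nn.relu)
        endpoints['deconv_3'] = x
        x = tf.contrib.layers.flatten(x)
        x = tf.layers.dense(x, units=28 * 28, activation=tf.nn.sigmoid)
        img = tf.reshape(x, shape=[-1, 28, 28])
    return img, endpoints

# After training, evaluate the endpoints on a batch to get their values as numpy arrays:
# img_op, endpoints = decoder(sampled_z, keep_prob)
# activations = sess.run(endpoints, feed_dict=...)   # dict mapping each key to an ndarray
# np.save('deconv_1_activations.npy', activations['deconv_1'])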










