Reputation: 93
I'm trying to implement a U-Net architecture using TensorFlow 1.15; this is the first convolutional layer:
import tensorflow as tf
print("############################### VERSION TENSORFLOW ###############################################")
print(tf.__version__)
print("############################### VERSION TENSORFLOW ###############################################")
def u_net_model(feature):
    w_init = tf.truncated_normal_initializer(stddev=0.01)
    print("--------------------------------------------------------------------------------- w_init")
    print(w_init)
    b_init = tf.constant_initializer(value=0.40)
    gamma_init = tf.random_normal_initializer(1., 0.02)
    with tf.variable_scope("u_network", reuse=True):
        x = tf.keras.Input(batch_size=5, tensor=feature)
        #y = tf.keras.layers.Dense(16, activation='softmax')(x)
        conv1 = tf.keras.layers.Conv2D(64, 4, (2, 2), activation='relu', padding='same',
                                       kernel_initializer=w_init, bias_initializer=b_init,
                                       name="convolution1")(x)
        print("conv1")
        print(conv1)
        conv2 = tf.keras.layers.Conv2D(128, 4, (2, 2), activation='relu', padding='same',
                                       kernel_initializer=w_init, bias_initializer=b_init,
                                       name="convolution2")(conv1)
        print("conv2")
        print(conv2)
        conv2 = tf.keras.layers.BatchNormalization()(conv2)
        print("conv2")
        print(conv2)
In main.py I have:
nw, nh, nz = X_train.shape[1:]
t_image_good = tf.placeholder('float32', [25, nw, nh, nz], name='good_image')
print(t_image_good)
t_image_good_samples = tf.placeholder('float32', [50, nw, nh, nz], name='good_image_samples')
print(t_image_good_samples)
t_PROVA = t_image_good
t_PROVA_samples = t_image_good_samples
g_nmse_a = tf.sqrt(tf.reduce_sum(tf.squared_difference(t_PROVA, t_PROVA), axis=[1, 2, 3]))
g_nmse_b = tf.sqrt(tf.reduce_sum(tf.square(t_PROVA), axis=[1, 2, 3]))
g_nmse = tf.reduce_mean(g_nmse_a / g_nmse_b)
generator_loss = g_alpha *g_nmse
print("generator_loss")
# generator_loss is a tensor
print(generator_loss)
learning_rate = 0.0001
beta = 0.5
print("\n")
generator_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'u_network')
print("--------------------------------------- generator_variables")
print(generator_variables)
generator_gradient_optimum = tf.train.AdamOptimizer(learning_rate, beta1=beta).minimize(generator_loss, var_list = generator_variables )
When I run it I get:
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'u_network/convolution1/kernel:0' shape=(4, 4, 1, 64) dtype=float32>", "<tf.Variable 'u_network/convolution1/bias:0' shape=(64,) dtype=float32>", "<tf.Variable 'u_network/convolution2/kernel:0' shape=(4, 4, 64, 128) dtype=float32>", "<tf.Variable 'u_network/convolution2/bias:0' shape=(128,) dtype=float32>", "<tf.Variable 'u_network/batch_normalization/gamma:0' shape=(128,) dtype=float32>", "<tf.Variable 'u_network/batch_normalization/beta:0' shape=(128,) dtype=float32>", "<tf.Variable 'u_network/convolution3/kernel:0' shape=(4, 4, 128, 256) dtype=float32>", "<tf.Variable 'u_network/convolution3/bias:0' shape=(256,) dtype=float32>", "<tf.Variable 'u_network/batch_normalization_1/gamma:0' shape=(256,) dtype=float32>"
... many more lines of this type, which finally end with:
and loss Tensor("mul_10:0", shape=(), dtype=float32).
What I want to do is pass the parameters (weights and biases) to the AdamOptimizer so that training can start.
What am I doing wrong?
Upvotes: 1
Views: 366
Reputation: 6034
In the code you provided, you never call the u_net_model function. Your code only has a couple of placeholders in the graph with some operations performed on them. The operations you use are tf.square and tf.squared_difference, which have no learnable parameters, so there is nothing for the optimizer to minimize (or converge) on.
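A minimal sketch of the missing step: call u_net_model on the placeholder and compute the loss from the network's output, so the loss graph actually contains the u_network variables. This assumes u_net_model returns the tensor produced by its last layer (the snippet above shows no return statement) and that the complete U-Net outputs a tensor with the same shape as its input; learning_rate, beta, g_alpha, nw, nh and nz are taken from the surrounding code.

t_image_good = tf.placeholder('float32', [25, nw, nh, nz], name='good_image')

# Build the network once; u_net_model must return its output tensor.
t_generated = u_net_model(t_image_good)

# NMSE between the network output and the target, not the placeholder against itself.
g_nmse_a = tf.sqrt(tf.reduce_sum(tf.squared_difference(t_generated, t_image_good), axis=[1, 2, 3]))
g_nmse_b = tf.sqrt(tf.reduce_sum(tf.square(t_image_good), axis=[1, 2, 3]))
g_nmse = tf.reduce_mean(g_nmse_a / g_nmse_b)
generator_loss = g_alpha * g_nmse

# The loss now depends on the variables created under the "u_network" scope,
# so AdamOptimizer can compute gradients for them.
generator_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'u_network')
generator_gradient_optimum = tf.train.AdamOptimizer(learning_rate, beta1=beta).minimize(
    generator_loss, var_list=generator_variables)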
Upvotes: 1