Reputation: 412
I am new to Python and TensorFlow. After getting a better (maybe) understanding of DNNs and their math, I started learning TensorFlow through exercises.
One of my exercises is to predict x^2: after sufficient training, when I give the network 5.0, it should predict 25.0.
Cost function = E((y-y')^2)
Two hidden layers, both fully connected.
import random

import numpy as np
import tensorflow as tf

learning_rate = 0.001
n_hidden_1 = 3
n_hidden_2 = 2
n_input = 1
n_output = 1
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with ReLU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with ReLU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer
def generate_input():
    # one random sample per training step: input x and target x^2
    val = random.uniform(-10000, 10000)
    return np.array([val]).reshape(1, -1), np.array([val * val]).reshape(1, -1)
# tf Graph input
# given one value and output one value
x = tf.placeholder("float", [None, 1])
y = tf.placeholder("float", [None, 1])

# weights and biases (their definition was omitted above; a typical
# random initialization is assumed here)
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_output]))
}

pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
distance = tf.subtract(pred, y)  # tf.sub was renamed tf.subtract in TF 1.0
cost = tf.reduce_mean(tf.pow(distance, 2))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
init = tf.global_variables_initializer()  # replaces deprecated initialize_all_variables
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    for step in range(10000):
        inp, ans = generate_input()
        _, c = sess.run([optimizer, cost], feed_dict={x: inp, y: ans})
        print('iter: ' + str(step) + ' cost=' + str(c))
However, it turns out that c sometimes gets larger and sometimes smaller, but it always stays large.
Upvotes: 0
Views: 265
Reputation: 1828
It seems that your training data has a very large range because of the statement val = random.uniform(-10000, 10000), so try some data preprocessing before you train. For example, standardize each training batch (the statistics have to be computed over a whole batch; with a single value the standard deviation is 0 and the division produces NaN):
vals = np.random.uniform(-10000, 10000, size=(64, 1))  # a batch of 64 samples
vals -= np.mean(vals, axis=0)
vals /= np.std(vals, axis=0)
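Going a bit further, here is a minimal sketch of a batch generator that scales both the inputs and the targets. The constants 10000 and 10000**2 are just the known ranges of x and x^2 in your generator, and generate_batch is a name made up for this sketch:
def generate_batch(batch_size=64):
    # draw a whole batch per training step instead of a single value
    xs = np.random.uniform(-10000, 10000, size=(batch_size, 1))
    ys = xs ** 2
    # scale with the known data ranges so the network sees small numbers;
    # without scaling the targets, the squared loss can be as large as 1e16
    xs /= 10000.0        # roughly in [-1, 1]
    ys /= 10000.0 ** 2   # roughly in [0, 1]
    return xs.astype(np.float32), ys.astype(np.float32)
To read predictions back in the original units, multiply the network output by 10000**2.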
As for the loss value, it is OK that it sometimes gets larger and sometimes smaller; just make sure the loss is decreasing in general as the training epochs increase. PS: an SGD optimizer is often used for this.
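For reference, swapping Adam for plain SGD in your code is a one-line change (the learning rate 0.01 below is just a starting point and needs tuning):
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)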
Upvotes: 2