Reputation: 2296
I'm doing electricity load forecasting using a simple feedforward neural network. Following is my code:
...
num_periods = 24
f_horizon = 48 #forecast horizon
...
#feedforward network design
tf.reset_default_graph()
inputs = num_periods #input vector size
hidden = 100
output = num_periods #output vector size
learning_rate = 0.01
seed = 128
x = tf.placeholder(tf.float32, [None, inputs])
y = tf.placeholder(tf.float32, [None, output])
weights = {
'hidden': tf.Variable(tf.random_normal([inputs, hidden], seed=seed)),
'output': tf.Variable(tf.random_normal([hidden, output], seed=seed))
}
biases = {
'hidden': tf.Variable(tf.random_normal([1,hidden], seed=seed)),
'output': tf.Variable(tf.random_normal([1,output], seed=seed))
}
hidden_layer = tf.add(tf.matmul(x, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.relu(hidden_layer)
output_layer = tf.matmul(hidden_layer, weights['output']) + biases['output']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = output_layer, labels = y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
init = tf.global_variables_initializer() #initialize all the variables
epochs = 1000 #number of iterations or training cycles, includes both the feedforward and backpropagation passes
mape = []
...
for st in state.values():
    print("State: ", st, end='\n')
    with tf.Session() as sess:
        init.run()
        for ep in range(epochs):
            #capture the cost so its trend over the epochs is visible
            _, c = sess.run([optimizer, cost], feed_dict={x: x_batches[st], y: y_batches[st]})
            if ep % 100 == 0:
                print("Epoch:", ep, "Cost:", c)
    print("\n")
Here is the output I'm getting for the NSW state:

As you can see, the cost increases continuously with epochs. Why is this happening?
Upvotes: 0
Views: 127
Reputation: 56357
You are using the wrong loss: forecasting electricity load is a regression problem, while cross entropy is only for classification. In particular, tf.nn.softmax_cross_entropy_with_logits expects each row of labels to be a probability distribution (non-negative values summing to 1); your continuous load values are not, so the cost it computes is meaningless and nothing forces it to decrease during training. Something like mean squared error should work instead.
Upvotes: 1