Reputation: 552
I'm new to Python and neural networks. I have a simple network written in Keras that can predict the next number in a linear sequence:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
data = [[i for i in range(6)]]
data = np.array(data, dtype=int)
target = [[i for i in range(10, 16)]]
target = np.array(target, dtype=int)
model = Sequential()
model.add(Dense(1, input_dim=1))
model.add(Dense(1))
model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
model.summary()
for i in range(10000):
    dataIterator = 0
    for targetValue in target:
        model.train_on_batch(data[dataIterator], targetValue)
        dataIterator = dataIterator + 1
predict = model.predict([28])
print(predict)
Gives me output:
[[38.0199]]
And that is to be expected. I'm not sure if my code has some stupid errors in it and would appreciate any feedback and explanations. I used Dense because I'm not sure what LSTM exactly does. Another thing is that my model requires the input to have 2 dimensions when it's specified as:
input_dim=1
I don't understand why. Next, I would like to create a network that can predict the next numbers in a sequence like [1, 4, 9, 16, 25]. This one doesn't.
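(For reference on the 2-dimension requirement: Keras Dense layers expect input shaped as (samples, features), so a flat list of numbers has to be reshaped into a column before being fed to the model. A minimal numpy sketch, with illustrative variable names:)

```python
import numpy as np

x = np.array([1, 4, 9, 16, 25])  # shape (5,): a flat sequence
x2d = x.reshape(-1, 1)           # shape (5, 1): 5 samples, 1 feature each
print(x2d.shape)
```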
Note that this is my first program written in Python and first use of neural networks :). Thanks in advance!
UPDATE 1
Following the tip about scaling, I came up with something like this:
import numpy as np
from keras.models import Sequential
from pandas import Series
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import StandardScaler
data = [[i for i in range(1, 30)]]
data = np.array(data, dtype=int)
target = np.power(data, 2)
target = np.array(target, dtype=int)
target = target.reshape((len(target[0]), 1))
data = data.reshape((len(data[0]), 1))
scale = StandardScaler()
dataTest = [[i for i in range(2, 4)]]
dataTest = np.array(dataTest, dtype=int)
dataTest = dataTest.reshape((len(dataTest[0]), 1))
model = Sequential()
model.add(Dense(1, input_dim=1))
model.add(Dense(1))
model.compile(loss='mean_absolute_error', optimizer='adam')
model.fit(scale.fit_transform(data), target, batch_size=1, epochs=200, verbose=1)
print(model.predict(scale.transform(dataTest)))
Despite this, the predictions are way off. For the given test data, the output is:
[[27.616932]
[28.265278]]
I'm out of ideas at the moment :(. Not feeling it at all.
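(One likely culprit, as an observation rather than a guaranteed fix: both Dense layers use Keras's default linear activation, and a stack of linear layers collapses to a single linear map, so this model can at best fit a straight line to the parabola y = x². A numpy sketch of the best any purely linear model can do on this data:)

```python
import numpy as np

x = np.arange(1, 30, dtype=float)
y = x ** 2  # the target parabola

# Least-squares straight-line fit: the best any purely linear model can achieve
A = np.vstack([x, np.ones_like(x)]).T
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

print(np.max(np.abs(pred - y)))  # residuals stay large: a line cannot match a parabola
```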
Upvotes: 4
Views: 11991
Reputation: 48367
And that is to be expected. I'm not sure if my code has some stupid errors in it, would appreciate any feedback and explanations. I used Dense because I'm not sure what LSTM exactly does.
LSTM, which stands for Long Short-Term Memory, is a special kind of RNN capable of learning long-term dependencies. It is mainly used for sequential processing over time. For instance, you can use an LSTM when you want to predict Google stock prices.
Since you have to predict the next numbers in a sequence like [1, 4, 9, 16, 25], this is a regression model, which belongs to supervised learning. When you're using regression models, there is no accuracy. The analogue of accuracy for regression models is called the COD, or Coefficient of determination, or R squared score.
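As a concrete illustration (plain Python, independent of Keras), R² compares the model's squared error against that of simply predicting the mean; this is a simplified version of what sklearn's r2_score computes:

```python
def r2_score(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# A perfect fit scores 1.0; predicting the mean everywhere scores 0.0
print(r2_score([1, 4, 9, 16, 25], [1, 4, 9, 16, 25]))  # 1.0
```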
The metric that you use, metrics=['accuracy'], corresponds to a classification problem. If you want to do regression, remove metrics=['accuracy']. That is, just use:
model.compile(optimizer='adam', loss='mean_absolute_error')
Upvotes: 8