Reputation: 69
Using a Keras model, I get zero accuracy for a perfectly linear relation between output and input. I'm not sure whether I'm interpreting the accuracy incorrectly or doing something wrong in my code; any help would be appreciated.
I've tried adding more layers, more epochs, and so on, but nothing changed.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler

tf.reset_default_graph()

# Generate a perfectly linear data set: uop = 1.5 * inp
siz = 100000
inp = np.random.randint(100, 1000000, size=[siz, 1])
a1 = 1.5
uop = np.dot(inp, a1)

# Scale inputs and outputs to [0, 1]
normzer_inp = MinMaxScaler()
inp_norm = normzer_inp.fit_transform(inp)
normzer_uop = MinMaxScaler()
uop_norm = normzer_uop.fit_transform(uop)

X = inp_norm
Y = uop_norm

kfold = KFold(n_splits=2, random_state=None, shuffle=False)
cvscores = []
opti_SGD = SGD(lr=0.01, momentum=0.9)

for train, test in kfold.split(X, Y):
    # Small fully connected network with a linear output
    model = Sequential()
    model.add(Dense(16, input_dim=X.shape[1], activation='relu'))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mean_squared_error', optimizer=opti_SGD,
                  metrics=['accuracy'])

    history = model.fit(X[train], Y[train],
                        validation_data=(X[test], Y[test]),
                        epochs=10, batch_size=2048, verbose=2)

    _, train_acc = model.evaluate(X[train], Y[train], verbose=0)
    _, test_acc = model.evaluate(X[test], Y[test], verbose=0)
    print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))

    plt.plot(history.history['acc'], label='train')
    plt.plot(history.history['val_acc'], label='test')
    plt.legend()
    plt.show()

    cvscores.append(test_acc * 100)

print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))
I expected an accuracy of about 100%, but I get roughly 0%:
Train on 50000 samples, validate on 50000 samples
Epoch 1/10
 - 0s - loss: 0.1351 - acc: 2.0000e-05 - val_loss: 0.0476 - val_acc: 2.0000e-05
Epoch 2/10
 - 0s - loss: 0.0386 - acc: 2.0000e-05 - val_loss: 0.0243 - val_acc: 2.0000e-05
Epoch 3/10
 - 0s - loss: 0.0146 - acc: 2.0000e-05 - val_loss: 0.0063 - val_acc: 2.0000e-05
Epoch 4/10
 - 0s - loss: 0.0029 - acc: 2.0000e-05 - val_loss: 6.9764e-04 - val_acc: 2.0000e-05
Epoch 5/10
 - 0s - loss: 2.8476e-04 - acc: 2.0000e-05 - val_loss: 9.0012e-05 - val_acc: 2.0000e-05
Epoch 6/10
 - 0s - loss: 8.0603e-05 - acc: 2.0000e-05 - val_loss: 6.6961e-05 - val_acc: 2.0000e-05
Epoch 7/10
 - 0s - loss: 6.3046e-05 - acc: 2.0000e-05 - val_loss: 5.2784e-05 - val_acc: 2.0000e-05
Epoch 8/10
 - 0s - loss: 5.0725e-05 - acc: 2.0000e-05 - val_loss: 4.3357e-05 - val_acc: 2.0000e-05
Epoch 9/10
 - 0s - loss: 4.2132e-05 - acc: 2.0000e-05 - val_loss: 3.6418e-05 - val_acc: 2.0000e-05
Epoch 10/10
 - 0s - loss: 3.5651e-05 - acc: 2.0000e-05 - val_loss: 3.1116e-05 - val_acc: 2.0000e-05
Train: 0.000, Test: 0.000
0.00% (+/- 0.00%)
Upvotes: 2
Views: 79
Reputation: 6034
You are performing a regression task. Accuracy is a classification metric: it measures how many samples, out of the total, were assigned the correct class. With continuous targets, the predictions essentially never match the target values exactly, which is why the reported accuracy stays near zero even though the loss is tiny.
For regression tasks, a model's performance is usually judged by its validation loss, e.g. mean squared error (which you are already using) or mean absolute error.
Just change your model compilation line to:
model.compile(loss='mean_squared_error', optimizer=opti_SGD)
Now, no accuracy details will be printed.
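If you still want a per-epoch number to watch, you can track mean absolute error instead of accuracy. Here is a minimal sketch, reusing the variables from your question (X, Y, the train/test indices, opti_SGD, normzer_uop) and assuming the model has been built as in your loop:
# Sketch: report MAE instead of accuracy (assumes the model, X, Y,
# train/test indices, opti_SGD and normzer_uop from the question)
model.compile(loss='mean_squared_error', optimizer=opti_SGD,
              metrics=['mae'])

history = model.fit(X[train], Y[train],
                    validation_data=(X[test], Y[test]),
                    epochs=10, batch_size=2048, verbose=2)

# evaluate() now returns [loss, mae] because of the metric above
test_mse, test_mae = model.evaluate(X[test], Y[test], verbose=0)
print('Test MSE: %.6f, Test MAE: %.6f' % (test_mse, test_mae))

# Undo the scaling and compare a few predictions with the true values
pred = normzer_uop.inverse_transform(model.predict(X[test]))
true = normzer_uop.inverse_transform(Y[test])
print(np.hstack([true[:5], pred[:5]]))
A very small MAE here (on the same order as the losses in your log) indicates the network has in fact learned the linear mapping, even though the accuracy metric reports zero.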
Upvotes: 1