John

Reputation: 141

Why is the accuracy always 0.00% with high loss on Keras for deep learning

I have built a model in Keras to train a neural network to mimic the behavior of a system that I developed in MATLAB. I exported the input and output data from MATLAB for use with Keras. Whenever I train, the accuracy is 0.00% and the loss is always 382.9722....

I tried everything (increasing hidden layers, changing activation functions, batch sizes, epochs, etc.), but nothing fixes the issue. I would appreciate it if someone could tell me whether something is wrong with the code or with my data.

import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

data = pd.read_csv('testkeras.txt')
print(data.head())

Y = data.output
X = data.drop('output', axis=1)

xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.5)


model = Sequential()
model.add(Dense(units=64, input_dim=6, kernel_initializer='uniform',
                activation='relu'))
model.add(Dense(units=32, activation='relu'))
model.add(Dense(units=16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # output layer


model.compile(optimizer='rmsprop', loss='mean_absolute_error',
              metrics=['acc'])


history = model.fit(xtrain, ytrain, batch_size=2048, epochs=20,
                    validation_split=0.2, verbose=1)


score = model.evaluate(xtest, ytest, batch_size=2048)
print(score)

A sample of the original data from MATLAB (the first six columns are inputs, the last column is the output):

2,2,2,2,2,2,2.5404e+05
2,2,2,2,2,2,2.5404e+05
2,2,1.9998,1.9998,1.9998,1.9998,2.5404e+05
2,2,1.9988,1.9988,1.9988,1.9988,2.5404e+05
2,2,1.9938,1.9938,1.9938,1.9938,2.5404e+05
2,2,1.9687,1.9687,1.9687,1.9687,2.5403e+05
2,2,1.8431,1.8431,1.8431,1.8431,2.5401e+05
2,2,1.2153,1.2153,1.2153,1.2153,2.5388e+05
2,2,-1.9186,-1.9186,-1.9186,-1.9186,2.5324e+05
2,2,-17.469,-17.469,-17.469,-17.469,2.5007e+05
2,1.9997,-92.331,-92.331,-92.331,-92.331,2.3481e+05
2,1.9936,-402.94,-402.94,-402.94,-402.94,1.7135e+05
2,1.9724,-723.02,-723.02,-723.02,-723.02,1.0558e+05
2,1.9373,-938.65,-938.65,-938.65,-938.65,60759
1.9999,1.8683,-1105.7,-1105.7,-1105.7,-1105.7,24988
1.9999,1.8212,-1152.8,-1152.8,-1152.8,-1152.8,14210
1.9997,1.7097,-1190.6,-1190.6,-1190.6,-1190.6,3712
1.9996,1.6936,-1192.1,-1192.1,-1192.1,-1192.1,3012.4
1.9994,1.6126,-1192.5,-1192.5,-1192.5,-1192.5,898.37
1.9992,1.5645,-1189.5,-1189.5,-1189.5,-1189.5,291.6
1.9987,1.4363,-1176.9,-1176.9,-1176.9,-1176.9,-362.02
1.9981,1.3097,-1161.9,-1161.9,-1161.9,-1161.9,-523.72
1.9974,1.1848,-1146.5,-1146.5,-1146.5,-1146.5,-564.79
1.9965,1.0615,-1131.1,-1131.1,-1131.1,-1131.1,-576.24
1.9955,0.93983,-1115.8,-1115.8,-1115.8,-1115.8,-580.39
1.9944,0.81985,-1100.6,-1100.6,-1100.6,-1100.6,-582.7
1.9931,0.70149,-1085.6,-1085.6,-1085.6,-1085.6,-584.53
1.9918,0.58475,-1070.7,-1070.7,-1070.7,-1070.7,-586.19
1.9903,0.46962,-1055.9,-1055.9,-1055.9,-1055.9,-587.78
1.9887,0.35607,-1041.3,-1041.3,-1041.3,-1041.3,-589.31
1.987,0.2441,-1026.8,-1026.8,-1026.8,-1026.8,-590.78
1.9852,0.13368,-1012.4,-1012.4,-1012.4,-1012.4,-592.21
1.9833,0.024813,-998.22,-998.22,-998.22,-998.22,-593.58
1.9813,-0.082527,-984.13,-984.13,-984.13,-984.13,-594.9
1.9791,-0.18835,-970.17,-970.17,-970.17,-970.17,-596.17
1.9769,-0.29267,-956.34,-956.34,-956.34,-956.34,-597.4
1.9745,-0.39551,-942.64,-942.64,-942.64,-942.64,-598.57
1.9721,-0.49687,-929.07,-929.07,-929.07,-929.07,-599.7
1.9695,-0.59677,-915.62,-915.62,-915.62,-915.62,-600.78

The X-train data is

3492 -0.49055   2.0     2.0     2.0     2.0     2.0
9730 -0.49055   2.0     2.0     2.0     2.0     2.0
3027 -0.49055   2.0     2.0     2.0     2.0     2.0
4307 -0.49055   2.0     2.0     2.0     2.0     2.0
3364 -0.49055   2.0     2.0     2.0     2.0     2.0
(5008, 6)

and the Y-train data is:

3492   -1.333700e-06
9730    5.215400e-08
3027    4.209600e-06
4307    5.215400e-08
3364    5.215400e-08
Name: output, dtype: float64
(5008,) 

Upvotes: 1

Views: 814

Answers (1)

Florian H

Reputation: 3082

As mentioned in the comments, this is a regression problem, so accuracy doesn't make sense as a metric.
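For regression you would track an error metric such as mean absolute error instead. A minimal illustration (the predicted values below are made up, just to show the computation):

```python
import numpy as np

# A few target values from the question's data, with hypothetical predictions
y_true = np.array([254040.0, 60759.0, -593.58])
y_pred = np.array([254000.0, 60900.0, -600.0])

# Mean absolute error: average distance between prediction and target
mae = np.mean(np.abs(y_true - y_pred))
print(mae)
```

Unlike accuracy, this stays meaningful when the target is a continuous value that predictions will never match exactly.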

But there is another problem in your code. Your last layer's activation function is sigmoid:

model.add(Dense(1, activation='sigmoid')) #output layer

The sigmoid function outputs values between 0 and 1, which means the output of your network can never be smaller than 0 or greater than 1. Thus you will never come even close to your negative outputs. I see two options to solve this problem:

  • Scale your output data to between 0 and 1, e.g. with scikit-learn's MinMaxScaler.
  • Use an activation function which can output any number, like linear, or, if you are sure that your data is between -1 and 1, something like tanh.

It also might improve your performance (lower error, faster learning) if you scale your input data. It's common to scale it so that the mean is 0 and the variance is 1; that's called standardization. You can do that e.g. with scikit-learn's StandardScaler.
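A sketch of standardizing the inputs with StandardScaler (the rows below are the first three columns of a few rows from the sample data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Three input rows taken from the sample data (first three columns only)
X = np.array([[2.0,    2.0,     1.9998],
              [2.0,    1.9936, -402.94],
              [1.9999, 1.8683, -1105.7]])

scaler = StandardScaler()
X_std = scaler.fit_transform(X)  # each column: mean ~0, std ~1
```

As with the target scaler, fit on the training set only and reuse the fitted scaler to transform the test set.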

Your training data also looks wrong somehow:

3492 -0.49055   2.0     2.0     2.0     2.0     2.0
9730 -0.49055   2.0     2.0     2.0     2.0     2.0
3027 -0.49055   2.0     2.0     2.0     2.0     2.0
4307 -0.49055   2.0     2.0     2.0     2.0     2.0
3364 -0.49055   2.0     2.0     2.0     2.0     2.0

Every row here is identical while your labels (y) differ. You can't build a network that maps identical inputs to different outputs.
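One way to check for this is to group the frame by the input columns and count the distinct labels per group; a sketch with hypothetical column names mimicking the situation above:

```python
import pandas as pd

# Toy frame mimicking the question: identical inputs, different outputs.
# The column names x1, x2 are hypothetical.
df = pd.DataFrame({
    'x1': [2.0, 2.0, 2.0],
    'x2': [-0.49055, -0.49055, -0.49055],
    'output': [-1.3337e-06, 5.2154e-08, 4.2096e-06],
})

# Number of distinct outputs for each unique input combination
conflicts = df.groupby(['x1', 'x2'])['output'].nunique()

# Any count > 1 means the same input maps to multiple outputs
has_conflict = bool((conflicts > 1).any())
```

If `has_conflict` is true, no model can fit that data exactly; the best it can do is predict something like the mean output per input.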

Upvotes: 3
