AliY

Reputation: 557

What does model.evaluate output?

I was running my gated recurrent unit (GRU) model. After it finished, I had

score = model20.evaluate(X20_test, y20_test)
print('Score: {}'.format(score))

and the output was:

[0.030501108373429363, 0.00272163194425038]

Here is my code for my model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import Adam

model20 = Sequential()
model20.add(GRU(units=70, return_sequences=True, input_shape=(1, 12), activity_regularizer=regularizers.l2(0.0001)))
model20.add(GRU(units=50, return_sequences=True, dropout=0.1))
model20.add(GRU(units=30, dropout=0.1))
model20.add(Dense(units=5))
model20.add(Dense(units=3))
model20.add(Dense(units=1, activation='relu'))
model20.compile(loss='mae', optimizer=Adam(lr=0.0001), metrics=['mse'])
model20.summary()


history20 = model20.fit(X20_train, y20_train, batch_size=1000, epochs=25, validation_split=0.1, verbose=1, callbacks=[TensorBoardColabCallback(tbc), Early_Stop])

Is the first number the MAE loss for the test data, and the second the MSE metric for the test data? If so, does this mean lower is better?


Upvotes: 0

Views: 90

Answers (1)

Nicolas Gervais

Reputation: 36584

The first number is the loss (MAE) on the test data. The second is the metric you compiled with (MSE). A smaller MAE is always better.
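If you don't want to guess which value is which, Keras exposes the names alongside the values. A minimal sketch, assuming the model and test arrays are named model20, X20_test, and y20_test as in the question:

# evaluate() returns values in the same order as model20.metrics_names,
# which here is ['loss', 'mse']. Zip them together to label each number.
score = model20.evaluate(X20_test, y20_test, verbose=0)
results = dict(zip(model20.metrics_names, score))
print(results)  # e.g. {'loss': 0.0305..., 'mse': 0.0027...}

In recent TensorFlow versions you can also pass return_dict=True to evaluate() to get the same labeled dictionary directly.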

Upvotes: 1
