Reputation: 15
I'm trying to train a simple Keras neural network, but the model doesn't learn; the loss and accuracy never change across epochs:
Train on 562 samples, validate on 188 samples
Epoch 1/20
562/562 [==============================] - 1s 1ms/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 2/20
562/562 [==============================] - 0s 298us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 3/20
562/562 [==============================] - 0s 295us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 4/20
562/562 [==============================] - 0s 282us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 5/20
562/562 [==============================] - 0s 289us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
Epoch 6/20
562/562 [==============================] - 0s 265us/step - loss: 8.1130 - acc: 0.4911 - val_loss: 7.6320 - val_acc: 0.5213
The dataset is stored in a CSV file structured like this:
doc  venda   img1    img2   v1                   v2                   gt
RG   venda1  img123  img12  [3399, 162675, ...]  [3399, 162675, ...]  1
My intent is to use the difference between the v1 and v2 vectors to tell whether img1 and img2 belong to the same class.
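For reference, the train and train_labels used below would be built roughly like this (a sketch; the file name and the assumption that v1/v2 are stored as string-encoded list literals are not part of the original post):
import ast
import numpy as np
import pandas as pd

# Hypothetical file name; the column layout follows the sample above
df = pd.read_csv("pairs.csv")

# Parse the string-encoded vectors (e.g. "[3399, 162675, ...]") into arrays
v1 = np.array([ast.literal_eval(s) for s in df["v1"]])
v2 = np.array([ast.literal_eval(s) for s in df["v2"]])

# The difference vector is the model input; gt is the binary label
train = v1 - v2
train_labels = df["gt"].values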
The code:
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

# Split the data into train and test sets
(X_train, X_test, Y_train, Y_test) = train_test_split(
    train, train_labels, test_size=0.25, random_state=42)

# Create the model
model = Sequential()
model.add(Dense(10, activation="relu", input_dim=10, kernel_initializer="uniform"))
model.add(Dense(6, activation="relu", kernel_initializer="uniform"))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit(
    np.array(X_train),
    np.array(Y_train),
    shuffle=True,
    epochs=20,
    verbose=1,
    batch_size=5,
    validation_data=(np.array(X_test), np.array(Y_test)),
)
What am I doing wrong?
Upvotes: 0
Views: 64
Reputation: 552
I have had success normalizing features with the function below. The same mu and sigma from the training set are reused on the validation and test sets so that every split is scaled identically and no statistics from the held-out data leak into preprocessing (I picked this up in the deeplearning.ai course on Coursera).
def normalize_features(dataset):
    # Per-column mean and standard deviation of this dataset
    mu = np.mean(dataset, axis=0)
    sigma = np.std(dataset, axis=0)
    norm_parameters = {'mu': mu,
                       'sigma': sigma}
    # The small epsilon guards against division by zero for constant columns
    return (dataset - mu) / (sigma + 1e-10), norm_parameters

# Normalize X data; reuse mu and sigma from the train set on val and test
x_train, norm_parameters = normalize_features(x_train)
x_val = (x_val - norm_parameters['mu']) / (norm_parameters['sigma'] + 1e-10)
x_test = (x_test - norm_parameters['mu']) / (norm_parameters['sigma'] + 1e-10)
Upvotes: 0
Reputation: 1469
Divide the difference vector by a constant so that the feature values fall roughly in the range 0 to 1 or -1 to 1. Right now the values are far too large: they push the output sigmoid into saturation, the gradients vanish, and the loss stays stuck. The network learns much faster when the data is properly normalized, for example as sketched below.
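A minimal sketch of this kind of constant scaling, reusing the X_train/X_test arrays from the question; taking the maximum absolute training value as the divisor is one reasonable choice of constant, not the only one:
import numpy as np

# Choose the constant from the training features only, so the
# same transform applies cleanly to validation/test data
scale = np.abs(np.array(X_train)).max()

# Bring the difference features into roughly [-1, 1]
X_train = np.array(X_train) / scale
X_test = np.array(X_test) / scale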
Upvotes: 2