akhetos

Reputation: 706

Autoencoder output and feature vectors are incorrect

I'm doing feature extraction on images with an autoencoder. My images are bitmaps => pixel values are 0 or 1.

I use the following code:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Flatten the 96x96 images into vectors for the fully connected autoencoder
X_train_autoencodeur = X_train.reshape(-1, 96*96)
X_valid_autoencodeur = X_valid.reshape(-1, 96*96)

# Encoder: 96*96 -> 1024 -> 512 -> 256 -> 128
input_img = Input(shape=(96*96,))
encoded = Dense(1024, activation='relu')(input_img)
encoded = Dense(512, activation='relu')(encoded)
encoded = Dense(256, activation='relu')(encoded)
encoded = Dense(128, activation='relu')(encoded)

# Decoder: 128 -> 256 -> 512 -> 1024 -> 96*96
decoded = Dense(256, activation='relu')(encoded)
decoded = Dense(512, activation='relu')(decoded)
decoded = Dense(1024, activation='relu')(decoded)
decoded = Dense(96*96, activation='sigmoid')(decoded)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

autoencoder.fit(X_train_autoencodeur, X_train_autoencodeur,
                epochs=100,
                batch_size=256,
                shuffle=True,
                validation_data=(X_valid_autoencodeur, X_valid_autoencodeur))


Then I plot the reconstructed images with:

import matplotlib.pyplot as plt

decoded_imgs = autoencoder.predict(X_valid_autoencodeur)
plt.imshow(decoded_imgs[7].reshape(96, 96))

After 3 epochs, the training and validation losses drop to a very low value and then stop changing.

The reconstructed images are completely black, and the feature vectors are all identical.
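The feature vectors come from the 128-unit bottleneck layer; a minimal sketch of the extraction (assuming the usual Keras pattern of reusing the trained layers in a second Model; my exact extraction code is not shown above):

# Encoder that shares the trained layers up to the 128-unit bottleneck
encoder = Model(input_img, encoded)
features = encoder.predict(X_valid_autoencodeur)  # shape: (n_samples, 128)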

I have trained the autoencoder for 100 epochs; should I train for longer? Did I make a mistake in my code that could explain the bad reconstruction?

Upvotes: 1

Views: 172

Answers (3)

Sebastian Dengler

Reputation: 1308

Just from experience, I know that autoencoders normally need a long time to train (more like 1000 epochs), even if you are using convolutional neural networks.

You, however, are trying to use a fully connected network (and a fairly large one), which will take even longer to learn something.

My suggestions: try using a CNN and more training epochs.
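For example, a convolutional autoencoder for your 96x96 bitmaps could look roughly like this (a minimal sketch; the filter counts and depths are assumptions, not tuned values):

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

# Convolutional autoencoder sketch for 96x96 single-channel images
input_img = Input(shape=(96, 96, 1))

# Encoder: 96x96 -> 48x48 -> 24x24
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2))(x)

# Decoder: 24x24 -> 48x48 -> 96x96
x = Conv2D(16, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

conv_autoencoder = Model(input_img, decoded)
conv_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

Note that the inputs are then fed as (n, 96, 96, 1) arrays rather than flattened (n, 96*96) vectors.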

Upvotes: 2

Somayyeh

Reputation: 326

First of all, your network is (unnecessarily) very big and has many parameters, so it needs a lot of data to train. I suggest trying it with just 2 layers in each of the encoder and the decoder. A very important point: convolutional autoencoders definitely get better results for encoding images, so try one. And finally, why did you use binary cross-entropy as the loss function? Try MSE.
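For instance, a 2-layers-per-side version of your network trained with MSE could look roughly like this (a sketch; the layer widths here are just an assumption):

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Smaller fully connected autoencoder: 2 layers in the encoder, 2 in the decoder
input_img = Input(shape=(96*96,))
encoded = Dense(256, activation='relu')(input_img)
encoded = Dense(128, activation='relu')(encoded)

decoded = Dense(256, activation='relu')(encoded)
decoded = Dense(96*96, activation='sigmoid')(decoded)

small_autoencoder = Model(input_img, decoded)
small_autoencoder.compile(optimizer='adam', loss='mse')  # MSE instead of binary cross-entropy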

Good luck!

Upvotes: 1

Simon Delecourt

Reputation: 1599

Your code seems correct. I believe the problem comes from the data itself. Did you preprocess your images so that they are normalized between 0 and 1, like this:

# Scale raw 0-255 pixel values into the [0, 1] range
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
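You can quickly check whether scaling is the issue by printing the value range of the array you pass to fit() (a quick sketch using the variable names from your question):

import numpy as np

# If this prints anything outside [0, 1] (e.g. 0 and 255), the inputs need rescaling
# before training with a sigmoid output and binary cross-entropy.
print(X_train_autoencodeur.min(), X_train_autoencodeur.max())
print(np.unique(X_train_autoencodeur)[:10])  # first few distinct pixel values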

Upvotes: 2
