Reputation: 156
I get the following error when trying to use an HDF5 dataset with Keras. It seems that Sequential.fit(), while slicing off the validation data, encounters a slice key that does not have its 'stop' attribute set. I don't know whether this is a formatting issue with my HDF5 dataset or something else. Any help would be appreciated.
Traceback (most recent call last):
  File "autoencoder.py", line 73, in <module>
    validation_split=0.2)
  File "/home/ben/.local/lib/python2.7/site-packages/keras/models.py", line 672, in fit
    initial_epoch=initial_epoch)
  File "/home/ben/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1143, in fit
    x, val_x = (slice_X(x, 0, split_at), slice_X(x, split_at))
  File "/home/ben/.local/lib/python2.7/site-packages/keras/engine/training.py", line 301, in slice_X
    return [x[start:stop] for x in X]
  File "/home/ben/.local/lib/python2.7/site-packages/keras/utils/io_utils.py", line 71, in __getitem__
    if key.stop + self.start <= self.end:
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
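The last two frames show the mechanism: with validation_split, fit() calls slice_X(x, split_at), which evaluates x[split_at:]; an open-ended slice carries stop=None, and HDF5Matrix.__getitem__ then tries to add None to an int. This can be reproduced without Keras or HDF5 (the KeyCapture class below is just an illustrative stand-in that records the slice it receives):

```python
# Minimal repro of the failure mode, no Keras needed.
class KeyCapture(object):
    def __getitem__(self, key):
        return key

key = KeyCapture()[800:]      # slice_X(x, 800) effectively does x[800:]
print(key.start, key.stop)    # 800 None

# HDF5Matrix.__getitem__ then evaluates key.stop + self.start:
try:
    key.stop + 0
except TypeError as e:
    print(e)                  # unsupported operand type(s) for +: 'NoneType' and 'int'
```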
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Deconvolution2D, UpSampling2D
from keras.callbacks import CSVLogger, ProgbarLogger, ModelCheckpoint
from keras.utils.io_utils import HDF5Matrix

training_input = HDF5Matrix("../../media/patches/data_rotated.h5", 'training_input_rotated')
training_target = HDF5Matrix("../../media/patches/data_rotated.h5", 'training_target_rotated')
# Model definition
autoencoder = Sequential()
autoencoder.add(Convolution2D(32, 3, 3, activation='relu', border_mode='same', input_shape=(64, 64, 3)))
autoencoder.add(MaxPooling2D((2, 2), border_mode='same'))
autoencoder.add(Convolution2D(64, 3, 3, activation='relu', border_mode='same'))
autoencoder.add(MaxPooling2D((2, 2), border_mode='same'))
autoencoder.add(Convolution2D(128, 3, 3, activation='relu', border_mode='same'))
autoencoder.add(Deconvolution2D(64, 3, 3, activation='relu', border_mode='same', output_shape=(None, 16, 16, 64), subsample=(2, 2)))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Deconvolution2D(32, 3, 3, activation='relu', border_mode='same', output_shape=(None, 32, 32, 32), subsample=(2, 2)))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Deconvolution2D(3, 3, 3, activation='sigmoid', border_mode='same', output_shape=(None, 64, 64, 3), subsample=(2, 2)))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.summary()
# Callback configure
csv_logger = CSVLogger('../../runs/training_' + start_time + '.log')
prog_logger = ProgbarLogger()
checkpointer = ModelCheckpoint(filepath='../../runs/model_' + start_time + '.hdf5', verbose=1, save_best_only=False)
# Training call
history = autoencoder.fit(
    x=training_input,
    y=training_target,
    batch_size=256,
    nb_epoch=1000,
    verbose=2,
    callbacks=[csv_logger, prog_logger, checkpointer],
    validation_split=0.2)
Upvotes: 3
Views: 707
Reputation: 156
I didn't fix the underlying error, but I worked around it by passing validation_data instead of validation_split in my fit call.
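A sketch of the split I ended up with (n_samples and the 0.2 ratio here are illustrative, not my actual dataset size): HDF5Matrix accepts start/end arguments, so both halves can be built up front with closed index ranges, and fit() never has to slice the matrix itself.

```python
# Illustrative split computation, mirroring what validation_split=0.2
# would do, but with explicit closed index ranges.
n_samples = 10000                               # placeholder dataset size
validation_split = 0.2
split_at = int(n_samples * (1.0 - validation_split))
print(split_at)                                 # 8000

# HDF5Matrix takes start/end, so (assuming keras + h5py are available):
# train_x = HDF5Matrix(path, 'training_input_rotated', start=0, end=split_at)
# val_x   = HDF5Matrix(path, 'training_input_rotated', start=split_at, end=n_samples)
# ...and likewise for the targets, then:
# autoencoder.fit(train_x, train_y, validation_data=(val_x, val_y), ...)
```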
Upvotes: 0