Expected lstm_1 to have shape (20, 256) but got array with shape (1, 76)

I am building a neural net for speaker recognition and I am having problems with dimensions. I must be doing something wrong in the batch generator, but I have no idea what. My steps are as follows. First, I prepare the labels:

import csv

# Collect the unique speaker labels from the first CSV column
labels = []
with open('filtered_files.csv', 'r') as csvfile:
    reader = csv.reader(csvfile)
    for file in reader:
        label = file[0]
        if label not in labels:
            labels.append(label)
print(labels)

Then I define batch_generator:

import random

import librosa
import numpy as np

n_features = 20
max_length = 1000
n_classes = len(labels)

def batch_generator(data, batch_size=16):
    while True:
        random.shuffle(data)
        X, y = [], []
        for i in range(batch_size):
            wav = data[i]
            waves, sr = librosa.load(wav, mono=True)
            # Recover the original mp3 filename from the wav path
            filename = wav.split('\\')[1]
            filename = filename.split('.')[0] + ".mp3"
            filename = filename.split('_', 1)[1]
            # Look up this file's speaker label in the CSV
            with open('filtered_files.csv', 'r') as csvfile:
                reader = csv.reader(csvfile)
                for file in reader:
                    if filename == file[1]:
                        label = file[0]
                        break

            y.append(one_hot_encode(["'" + label + "'"]))
            # Compute MFCCs and zero-pad the time axis to max_length
            mfcc = librosa.feature.mfcc(waves, sr)
            mfcc = np.pad(mfcc, ((0, 0), (0, max_length - len(mfcc[0]))),
                          mode='constant', constant_values=0)
            X.append(np.array(mfcc))
        yield np.array(X), np.array(y)
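
For reference, one_hot_encode maps a list of label strings to one-hot rows over labels; only the name appears in my code above, so the body below is a rough sketch:

# Rough sketch of one_hot_encode: one-hot rows over the labels list
def one_hot_encode(batch_labels):
    encoded = np.zeros((len(batch_labels), n_classes))
    for row, lab in enumerate(batch_labels):
        encoded[row, labels.index(lab)] = 1
    return encoded

Each call with a single label therefore returns a (1, n_classes) array.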

Finally, I define the neural net and start the training process:

learning_rate = 0.001
batch_size = 64
n_epochs = 50
dropout = 0.5

input_shape = (n_features, max_length)
steps_per_epoch = 50
model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=input_shape,
               dropout=dropout))
# model.add(Flatten())
# model.add(Dense(128, activation='relu'))
# model.add(Dropout(dropout))
# model.add(Dense(n_classes, activation='softmax'))

opt = Adam(lr=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=opt,
              metrics=['accuracy'])
model.summary()

history = model.fit_generator(
    generator=batch_generator(X_train, batch_size),
    steps_per_epoch=steps_per_epoch,
    epochs=n_epochs,
    verbose=1,
    validation_data=batch_generator(X_val, 32),
    validation_steps=5,
    callbacks=callbacks
)

I put in a lot of code because I am not sure which part is actually causing the wrong dimension. With only the first layer active, the model fails with: "Error when checking target: expected lstm_1 to have shape (20, 256) but got array with shape (1, 76)"

If I uncomment the second layer, then I receive: "Error when checking target: expected flatten_1 to have 2 dimensions, but got array with shape (64, 1, 76)"

Upvotes: 1

Views: 60

Answers (1)

edkeveked

Reputation: 18401

There is a shape mismatch between the shapes the model expects and those of the dataset. As indicated by the error, the data has shape (1, 76) per sample, whereas the model expects (20, 256): 20 timesteps from the input and the 256 units of the LSTM, since return_sequences=True is set.
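
One way to confirm this is to inspect a single batch from the question's generator (X_train and batch_generator as defined in the question; the shapes in the comments follow from the reported errors):

# Pull one batch and print the shapes the model actually receives
Xb, yb = next(batch_generator(X_train, batch_size=4))
print(Xb.shape)  # e.g. (4, 20, 1000) given max_length = 1000
print(yb.shape)  # e.g. (4, 1, 76) - the per-sample (1, 76) from the error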

To fix the issue, either change the model so the shapes it expects match the dataset, or process the dataset so it matches the model. The snippet below shows a model and dummy data with consistent shapes; a sketch of the second option follows it.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Flatten, Dense
from tensorflow.keras.optimizers import Adam

learning_rate = 0.001
dropout = 0.5

input_shape = (20, 256)
model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=input_shape,
               dropout=dropout))
model.add(Flatten())
# model.add(Dense(128, activation='relu'))
# model.add(Dropout(dropout))
model.add(Dense(2, activation='softmax'))

opt = Adam(lr=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=opt,
              metrics=['accuracy'])
model.summary()

# An example of training: two dummy samples of shape (20, 256)
# with matching one-hot labels
model.fit(tf.ones([2, 20, 256]), tf.one_hot([0, 1], 2))
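
For the second option, here is a minimal sketch that pads or truncates each MFCC matrix along the time axis to the (20, 256) shape the model expects (the to_model_shape helper is illustrative, not from the question's code):

import numpy as np

# Illustrative helper: fit an MFCC matrix to a fixed number of frames
def to_model_shape(mfcc, target_frames=256):
    if mfcc.shape[1] >= target_frames:
        return mfcc[:, :target_frames]  # truncate extra frames
    pad = target_frames - mfcc.shape[1]
    return np.pad(mfcc, ((0, 0), (0, pad)), mode='constant')  # zero-pad

Applied inside the generator in place of the fixed max_length padding, each sample in X would then have shape (20, 256) and match the model.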

Upvotes: 0
