Reputation: 403
I have converted a voice recording to a spectrogram using librosa. The shape of the spectrogram is (257, 356), which I have reshaped to (257, 356, 1).
I have created a model
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
model = Sequential()
model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=A.shape))
model.add(Flatten())
model.add(Dense(1, activation='softmax'))
While fitting the model with
model.fit(A,validation_data=(A2), epochs=3)
where A2 is another spectrogram, the following error is produced:
ValueError: Error when checking input: expected conv2d_3_input to have 4 dimensions, but got array with shape (257, 356, 1)
Model Summary
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_24 (Conv2D)           (None, 255, 354, 64)      640
_________________________________________________________________
conv2d_25 (Conv2D)           (None, 253, 352, 32)      18464
_________________________________________________________________
flatten_11 (Flatten)         (None, 2849792)           0
_________________________________________________________________
dense_11 (Dense)             (None, 10)                28497930
=================================================================
Total params: 28,517,034
Trainable params: 28,517,034
Non-trainable params: 0
And the shape of A[0] is:
A[0].shape = (356, 1)
Upvotes: 4
Views: 1608
Reputation: 1427
EDIT: Here's my working code:
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
import numpy as np
A = np.zeros((1, 257, 356, 1))  # dummy input: a batch of one spectrogram (for illustration only)
A2 = np.zeros((1, 1))  # dummy label for that single sample (for illustration only)
model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', input_shape=A.shape[1:]))  # input_shape ==> (257, 356, 1)
model.add(Flatten())
model.add(Dense(1, activation='softmax'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(A, A2, validation_data=(A, A2), epochs=3)
And here's the output for 3 epochs:
Train on 1 samples, validate on 1 samples
Epoch 1/3
1/1 [==============================] - 0s 250ms/step - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 2/3
1/1 [==============================] - 0s 141ms/step - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
Epoch 3/3
1/1 [==============================] - 0s 156ms/step - loss: 0.0000e+00 - accuracy: 1.0000 - val_loss: 0.0000e+00 - val_accuracy: 1.0000
<keras.callbacks.callbacks.History at 0x1d508dbb708>
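If your A is still the raw (257, 356) array from librosa, one way to get it into this 4D shape is to add the batch and channel axes yourself. A minimal sketch (the random array below just stands in for your spectrogram):
import numpy as np

A = np.random.random((257, 356))     # stand-in for the librosa spectrogram
A = A[np.newaxis, :, :, np.newaxis]  # add batch and channel axes
print(A.shape)                       # (1, 257, 356, 1)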
Upvotes: 2
Reputation: 21739
Check out the VGG-like convnet example on the official Keras documentation page.
As @Daniel mentioned, the input data must be a 4D array of shape (number_of_examples, height, width, channels), while input_shape itself excludes the first of these dimensions.
For reference, if you have 500 samples, your input data should look like this:
import numpy as np
x_train = np.random.random((500, 257, 356, 1))
print(x_train.shape)  # prints (500, 257, 356, 1)
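The labels then need a matching first dimension; for example, assuming binary targets (this array is my own illustration, not from the question):
import numpy as np
y_train = np.random.randint(0, 2, (500, 1))  # one binary label per sample
print(y_train.shape)  # (500, 1)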
Upvotes: 0
Reputation: 86620
You are missing the batch dimension in your data.
The input shape for 2D convolutions is (batch, spatial1, spatial2, channels).
I don't know the structure of your data, but it seems you don't have a batch dimension. (If so, you need to create a batch dimension of size 1, but this will not train well at all; you need large amounts of data, not a single example.)
So, A.shape must be (1, 257, 356, 1), where the first 1 is the batch size and the last 1 is the number of channels. The other two numbers are the spatial dimensions of the "image".
And your input_shape must not include the batch size: input_shape=(257, 356, 1).
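Putting both points together, here is a minimal sketch assuming A is the (257, 356, 1) array from the question. The labels array is a placeholder I am inventing, and I also swap the single-unit softmax for a sigmoid, since softmax over one unit always outputs 1:
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
import numpy as np

A = np.zeros((257, 356, 1))    # stand-in for the spectrogram from the question
A = np.expand_dims(A, axis=0)  # create the batch dimension -> (1, 257, 356, 1)
labels = np.zeros((1, 1))      # placeholder target; the question shows none

model = Sequential()
model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=(257, 356, 1)))  # no batch size here
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(A, labels, epochs=3)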
Upvotes: 0