Reputation: 1402
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense
from tensorflow.keras import optimizers

momentum_rate = 0.5
learning_rate = 0.1
neurons = 30

def convolutional_neural_network(x, y):
    print("Hyper-parameter values:\n")
    print('Momentum Rate =', momentum_rate, '\n')
    print('Learning rate =', learning_rate, '\n')
    print('Number of neurons =', neurons, '\n')
    model = Sequential()
    model.add(Conv1D(input_shape=(X.shape[1], X.shape[0]), activation='relu', kernel_size=1, filters=64))
    model.add(Flatten())
    model.add(Dense(neurons, activation='relu'))  # first hidden layer
    model.add(Dense(neurons, activation='relu'))  # second hidden layer
    model.add(Dense(neurons, activation='relu'))
    model.add(Dense(neurons, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.summary()
    sgd = optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=momentum_rate, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd,
                  metrics=['accuracy', tensorflow.keras.metrics.Precision()])
    history = model.fit(X, y, validation_split=0.2, epochs=10)
    print("\nTraining Data Statistics:\n")

print("CNN Model with Relu Hidden Units and Cross-Entropy Error Function:")
print(convolutional_neural_network(X, y))
The shape of X is (150, 1320) and the shape of y is (150,).
Here is the output I am getting:
Model: "sequential_36"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_30 (Conv1D) (None, 1320, 64) 9664
_________________________________________________________________
flatten_21 (Flatten) (None, 84480) 0
_________________________________________________________________
dense_106 (Dense) (None, 30) 2534430
_________________________________________________________________
dense_107 (Dense) (None, 30) 930
_________________________________________________________________
dense_108 (Dense) (None, 30) 930
_________________________________________________________________
dense_109 (Dense) (None, 30) 930
_________________________________________________________________
dense_110 (Dense) (None, 10) 310
=================================================================
Total params: 2,547,194
Trainable params: 2,547,194
Non-trainable params: 0
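As a sanity check on these numbers (a side calculation, not part of the original post): with input_shape=(1320, 150), the Conv1D layer treats the 150 entries of the last axis as input channels, so its parameter count is filters * (kernel_size * channels + 1), and the first Dense layer sees the 1320 * 64 = 84480 flattened features:

```python
# Conv1D parameters: one weight per (kernel position, input channel) per filter, plus a bias per filter.
filters, kernel_size, channels = 64, 1, 150
conv_params = filters * (kernel_size * channels + 1)
print(conv_params)  # 9664, matching conv1d_30 above

# First Dense layer: 84480 flattened inputs feeding 30 units, plus 30 biases.
dense1_params = (1320 * 64 + 1) * 30
print(dense1_params)  # 2534430, matching dense_106 above
```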
ValueError: Error when checking input: expected conv1d_30_input to have 3 dimensions, but got array with shape (150, 1320)
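For context, this error is purely about dimensionality: Conv1D expects a 3-D array of shape (batch, steps, channels), while the X here is 2-D. A minimal NumPy sketch with placeholder data (not the asker's actual X) shows one way to append the missing channel axis; note that the answers below differ on whether 150 or 1320 is the sample axis, and this sketch simply keeps the axes as given:

```python
import numpy as np

# Placeholder array with the same shape as the X in the question.
X = np.zeros((150, 1320))
print(X.ndim)  # 2 -- but Conv1D expects 3 dims: (batch, steps, channels)

# Appending a trailing channel axis produces a Conv1D-compatible shape.
X3 = X[..., np.newaxis]
print(X3.shape)  # (150, 1320, 1)
```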
Upvotes: 2
Views: 166
Reputation: 4960
As your error reflects, your input shape is (150, 1320). In the comments you have said that you have 1320 samples (rows) and 150 features (columns). Let's make some temporary data with the mentioned shapes as X and y:
X = tf.random.uniform((150, 1320))
y = tf.random.uniform((1320, 10))  # 10 labels for each sample, which may be a little strange; take care of it
Now we have X with shape (150, 1320) and y with shape (1320, 10).
Since we have 1320 samples and they should be on the first axis, we have to transpose:
X = tf.transpose(X)
Now the X shape will be (1320, 150) instead of (150, 1320).
Since a Conv1D layer expects input of shape batch_shape + (steps, input_dim), we need to add a new dimension. So:
X = tf.expand_dims(X, axis=2)
print(X.shape, y.shape)  # X.shape=(1320, 150, 1) y.shape=(1320, 10)
Then we have X with shape (1320, 150, 1). Now, let's specify the input shape in the Conv1D layer:
model.add(Conv1D(input_shape=X.shape[1:], activation='relu', kernel_size=1, filters=64))
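The axis manipulations above can be mirrored in plain NumPy to double-check the shapes (an illustrative sketch, independent of TensorFlow; np.transpose and np.expand_dims behave like their tf counterparts here):

```python
import numpy as np

X = np.random.uniform(size=(150, 1320))
X = X.T                        # transpose: samples first -> (1320, 150)
X = np.expand_dims(X, axis=2)  # add channel axis -> (1320, 150, 1)
print(X.shape)                 # (1320, 150, 1)
print(X.shape[1:])             # (150, 1) -- what Conv1D's input_shape receives
```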
Upvotes: 0
Reputation: 361
Conv1D is expecting an input_shape of the form (steps, input_dim) (see the docs). Now, if I understand correctly, your input_dim=1, because 1320 is the number of samples and 150 is the length of each sample. In this case, change the input shape to input_shape=(X.shape[1], X.shape[2]).
Edit: It's unclear what you are trying to do. The code below works and shows the expected shapes for your network. But beware that I changed the y dimension in order to match the number of rows and the output layer. I'm not sure what the y shape (150,) represents.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense
from tensorflow.keras import optimizers

X = tf.random.normal((1320, 150, 1))
y = tf.random.uniform((1320, 10))

momentum_rate = 0.5
learning_rate = 0.1
neurons = 30

def convolutional_neural_network(x, y):
    print("Hyper-parameter values:\n")
    print('Momentum Rate =', momentum_rate, '\n')
    print('Learning rate =', learning_rate, '\n')
    print('Number of neurons =', neurons, '\n')
    model = Sequential()
    model.add(Conv1D(input_shape=(X.shape[1], X.shape[2]), activation='relu', kernel_size=1, filters=64))
    model.add(Flatten())
    model.add(Dense(neurons, activation='relu'))  # first hidden layer
    model.add(Dense(neurons, activation='relu'))
    model.add(Dense(neurons, activation='relu'))
    model.add(Dense(neurons, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    sgd = optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=momentum_rate, nesterov=True)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
    history = model.fit(X, y, validation_split=0.2, epochs=10)
    model.summary()
    print("\nTraining Data Statistics:\n")

print("CNN Model with Relu Hidden Units and Cross-Entropy Error Function:")
print(convolutional_neural_network(X, y))
Upvotes: 1