Suvab


Input 0 is incompatible with layer lstm_93: expected ndim=3, found ndim=2

My X_train shape is (171, 10, 1) and my y_train shape is (171,), containing values from 1 to 19. The output should be the probability of each of the 19 classes. I am trying to use an RNN for classification into 19 classes.

import numpy as np
from sklearn.preprocessing import LabelEncoder

label_encoder_y = LabelEncoder()
y_train = label_encoder_y.fit_transform(y_train)  # maps labels 1..19 to 0..18

X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))


from keras.models import Sequential
from keras.layers import Dense,Flatten
from keras.layers import LSTM
from keras.layers import Dropout


regressor = Sequential()

regressor.add(LSTM(units = 100, return_sequences = True,
                   input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(rate=0.15))

regressor.add(LSTM(units = 100, return_sequences = False))  # False caused the ndim exception
regressor.add(Dropout(rate=0.15))


regressor.add(Flatten())
regressor.add(Dense(units= 19,activation='sigmoid'))
regressor.compile(optimizer = 'rmsprop', loss = 'mean_squared_error')

regressor.fit(X_train, y_train, epochs = 250, batch_size = 16)

Upvotes: 0


Answers (1)

giser_yugang


When you set return_sequences=False in the second LSTM layer, its output shape is (None, 100), so Flatten() is no longer needed. Depending on what you want, either set return_sequences=True in the second LSTM layer or delete regressor.add(Flatten()).
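A minimal sketch of the fixed model following the second option (Flatten() removed, return_sequences=False kept). Note I have also swapped in softmax plus categorical_crossentropy, the usual pairing for mutually exclusive classes; your original code used sigmoid and mean_squared_error:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(100, return_sequences=True, input_shape=(10, 1)))  # (None, 10, 100)
model.add(Dropout(0.15))
model.add(LSTM(100, return_sequences=False))  # (None, 100) - no Flatten() needed
model.add(Dropout(0.15))
model.add(Dense(19, activation='softmax'))    # probability for each of the 19 classes
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
```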

In addition, if you want the probability of each of the 19 classes, your label data should be one-hot encoded. Use keras.utils.to_categorical:

one_hot_labels = keras.utils.to_categorical(y_train, num_classes=19)  # shape (None, 19)
regressor.fit(X_train, one_hot_labels, epochs=250, batch_size=16)
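For reference, to_categorical simply turns each integer label into a one-hot row vector. A minimal NumPy equivalent (assuming the labels were already encoded to 0..18 by the LabelEncoder above):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Return a (len(labels), num_classes) array with a 1 in each label's column."""
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

y = np.array([0, 3, 18])           # example labels after LabelEncoder
one_hot_labels = one_hot(y, 19)    # shape (3, 19), one 1.0 per row
```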

Upvotes: 1
