Md. Imrul Kayes

Reputation: 963

ANN model for multiclass classification

I don't know what the problem is and why I'm getting this error:

ValueError: in user code:
ValueError: Shapes (None, 1) and (None, 6) are incompatible

Can anyone please help me with this code?

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from sklearn.preprocessing import MinMaxScaler
%matplotlib inline

df = pd.read_csv('train.csv')
dft = pd.read_csv('test.csv')

X_train = df.drop('label',axis=1).values
y_train = df['label'].values

X_test = dft.drop('label',axis=1).values
y_test = dft['label'].values

scaler = MinMaxScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

model = Sequential()
model.add(Dense(units=30, activation='relu'))
model.add(Dense(units=15, activation='relu'))
model.add(Dense(6, activation='softmax'))

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=X_train, y=y_train, epochs=15, batch_size=10, validation_data=(X_test, y_test))

Upvotes: 1

Views: 344

Answers (1)

user11989081

Reputation: 8654

The issue is that the second dimension of your target arrays (y_train and y_test) has length 1, while the model expects length 6, because the output layer has 6 units. To resolve this, one-hot encode the targets (for example with scikit-learn's OneHotEncoder). As long as your target really does contain 6 classes, the model will then train without this error.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.datasets import make_classification
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split
tf.random.set_seed(0)

# generate the data
X, y = make_classification(n_classes=6, n_samples=1000, n_features=10, n_informative=10, n_redundant=0, random_state=42)
print(y.shape)
# (1000, )

# split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# one-hot encode the target
enc = OneHotEncoder(sparse=False, handle_unknown='ignore')  # in scikit-learn >= 1.2, use sparse_output=False instead of sparse=False
enc.fit(y_train.reshape(-1, 1))
y_train = enc.transform(y_train.reshape(-1, 1))
y_test = enc.transform(y_test.reshape(-1, 1))
print(y_train.shape, y_test.shape)
# (750, 6) (250, 6)

# scale the features
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# define the model
model = Sequential()
model.add(Dense(units=30, activation='relu'))
model.add(Dense(units=15, activation='relu'))
model.add(Dense(6, activation='softmax'))

# fit the model
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=X_train, y=y_train, epochs=3, batch_size=10, validation_data=(X_test, y_test))
# Epoch 1/3
# 75/75 [==============================] - 1s 2ms/step - loss: 1.7872 - accuracy: 0.2427 - val_loss: 1.7719 - val_accuracy: 0.2600
# Epoch 2/3
# 75/75 [==============================] - 0s 781us/step - loss: 1.7660 - accuracy: 0.2547 - val_loss: 1.7549 - val_accuracy: 0.2720
# Epoch 3/3
# 75/75 [==============================] - 0s 768us/step - loss: 1.7528 - accuracy: 0.2587 - val_loss: 1.7408 - val_accuracy: 0.3280
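
Alternatively, the one-hot encoding step can be skipped entirely by keeping the integer labels and switching the loss to sparse_categorical_crossentropy, which expects class indices of shape (None,) rather than one-hot vectors. Below is a minimal sketch of that variant, reusing the same synthetic data and model as above:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
tf.random.set_seed(0)

# same synthetic data as above; the integer labels are used as-is (no one-hot encoding)
X, y = make_classification(n_classes=6, n_samples=1000, n_features=10, n_informative=10, n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# scale the features
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# same model as above
model = Sequential()
model.add(Dense(units=30, activation='relu'))
model.add(Dense(units=15, activation='relu'))
model.add(Dense(6, activation='softmax'))

# sparse_categorical_crossentropy accepts integer class indices, so the (None, 1) vs (None, 6) mismatch does not arise
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x=X_train, y=y_train, epochs=3, batch_size=10, validation_data=(X_test, y_test))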

Upvotes: 1
