neversaint

Reputation: 64054

How to use Keras' multi-layer perceptron for multi-class classification

I tried to follow the instructions here, which state that the example uses the Reuters dataset.

from keras.datasets import reuters

(X_train, y_train), (X_test, y_test) = reuters.load_data(path="reuters.pkl",
                                                         nb_words=None,
                                                         skip_top=0,
                                                         maxlen=None,
                                                         test_split=0.1)
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(64, input_dim=20, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(10, init='uniform'))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

#breaks here
model.fit(X_train, y_train,
          nb_epoch=20,
          batch_size=16)

score = model.evaluate(X_test, y_test, batch_size=16)

But the code breaks on model fitting. How can I resolve the issue?

Update: This is the error I got.

In [21]: model.fit(X_train, y_train,
   ....:           nb_epoch=20,
   ....:           batch_size=16)
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-21-4b227e56e5a9> in <module>()
      1 model.fit(X_train, y_train,
      2           nb_epoch=20,
----> 3           batch_size=16)

//anaconda/lib/python2.7/site-packages/keras/models.pyc in fit(self, x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, **kwargs)
    400                               shuffle=shuffle,
    401                               class_weight=class_weight,
--> 402                               sample_weight=sample_weight)
    403
    404     def evaluate(self, x, y, batch_size=32, verbose=1,

//anaconda/lib/python2.7/site-packages/keras/engine/training.pyc in fit(self, x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight)
    969                                                            class_weight=class_weight,
    970                                                            check_batch_dim=False,
--> 971                                                            batch_size=batch_size)
    972         # prepare validation data
    973         if validation_data:

//anaconda/lib/python2.7/site-packages/keras/engine/training.pyc in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_dim, batch_size)
    909                           in zip(y, sample_weights, class_weights, self.sample_weight_modes)]
    910         check_array_lengths(x, y, sample_weights)
--> 911         check_loss_and_target_compatibility(y, self.loss_functions, self.internal_output_shapes)
    912         if self.stateful and batch_size:
    913             if x[0].shape[0] % batch_size != 0:

//anaconda/lib/python2.7/site-packages/keras/engine/training.pyc in check_loss_and_target_compatibility(targets, losses, output_shapes)
    182             if y.shape[1] == 1:
    183                 raise Exception('You are passing a target array of shape ' + str(y.shape) +
--> 184                                 ' while using as loss `categorical_crossentropy`. '
    185                                 '`categorical_crossentropy` expects '
    186                                 'targets to be binary matrices (1s and 0s) '

Exception: You are passing a target array of shape (10105, 1) while using as loss `categorical_crossentropy`. `categorical_crossentropy` expects targets to be binary matrices (1s and 0s) of shape (samples, classes). If your targets are integer classes, you can convert them to the expected format via:
```
from keras.utils.np_utils import to_categorical
y_binary = to_categorical(y_int)
```

Alternatively, you can use the loss function `sparse_categorical_crossentropy` instead, which does expect integer targets.

Upvotes: 7

Views: 12336

Answers (2)

Marco Cerliani

Reputation: 22031

The answer depends on the format of your target. You have two possibilities:

1st possibility: if you have a 1D integer-encoded target, you can use sparse_categorical_crossentropy as the loss function

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

n_class = 3
n_features = 100
n_sample = 1000

# random integer features and 1D integer targets in [0, n_class)
X = np.random.randint(0, 10, (n_sample, n_features))
y = np.random.randint(0, n_class, n_sample)

inp = Input((n_features,))
x = Dense(128, activation='relu')(inp)
out = Dense(n_class, activation='softmax')(x)

model = Model(inp, out)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, epochs=3)

2nd possibility: if you have one-hot encoded your target so it has the 2D shape (n_samples, n_class), you can use categorical_crossentropy as the loss

import numpy as np
import pandas as pd
from keras.layers import Input, Dense
from keras.models import Model

n_class = 3
n_features = 100
n_sample = 1000

# random integer features and one-hot targets of shape (n_sample, n_class)
X = np.random.randint(0, 10, (n_sample, n_features))
y = pd.get_dummies(np.random.randint(0, n_class, n_sample)).values

inp = Input((n_features,))
x = Dense(128, activation='relu')(inp)
out = Dense(n_class, activation='softmax')(x)

model = Model(inp, out)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, epochs=3)

Upvotes: 0

Dr. Snoopy

Reputation: 56377

This is a pretty common beginner's mistake with Keras. Unlike other deep learning frameworks, Keras does not use integer labels for the usual crossentropy loss; instead it expects a binary vector (called "one-hot"), which is all 0s with a single 1 at the index of the correct class.
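
As a quick toy illustration of that format (not the Reuters labels): integer labels 0, 2 and 1 over three classes become rows that are all 0s with a single 1:

import numpy as np
from keras.utils.np_utils import to_categorical

# Toy example: three samples, three classes
y_int = np.array([0, 2, 1])
y_onehot = to_categorical(y_int, 3)
# y_onehot is now (as a float matrix):
# [[1, 0, 0],
#  [0, 0, 1],
#  [0, 1, 0]]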

You can easily convert your labels to this format with the following code:

from keras.utils.np_utils import to_categorical
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

Do this before calling model.fit. An alternative is to change the loss to "sparse_categorical_crossentropy", which does expect integer labels.
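
For completeness, a minimal sketch of that alternative applied to the code from the question, reusing model, sgd, X_train and y_train as defined there (only the compile and fit calls change). Note that to actually train on the Reuters data the final Dense layer would also need one unit per class (the Keras Reuters dataset has 46 topics, not 10):

# Keep the integer labels returned by reuters.load_data; the sparse loss
# handles them directly, so no to_categorical call is needed.
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

model.fit(X_train, y_train,
          nb_epoch=20,
          batch_size=16)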

Upvotes: 22
