Shawn Atlas

Reputation: 25

Keras Deep NN does not include all the classes

I have made a model which has been trained to predict an integer from 34 to 63 (no decimal numbers). In total that is 30 potential outputs.

When I run the model it complains and wants me to set the last layer to 15 units, which, as I understand it, should be the number of output classes.

I also get the following output in the terminal after it has been trained:

ValueError: y_true and y_pred contain different number of classes 7, 16. Please provide the true labels explicitly through the labels argument. Classes found in y_true: [51 52 53 54 56 59 63]

When I then run the model with:

prediction = model.predict(test)
print(model.predict(test))
print(np.argmax(model.predict(test), axis=-1))

I get:

WARNING:tensorflow:6 out of the last 19 calls to <function Model.make_predict_function.<locals>.predict_function at 0x000001B8C4EB1AF0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.

[[0.00006836 0.33038142 0.22732003 0.03764497 0.22742009 0.01213347
  0.16344884 0.00000338 0.0012028  0.00014862 0.00000717 0.00017032
  0.00000437 0.00001909 0.00002712]]
[1]

I am guessing that the output vector is supposed to cover all the sizes, but there are only 15 values. I have looked in my dataset and every class has at least 3 instances, so they should all be included in the training.
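
A quick way to check which classes actually end up in the one-hot encoding (a sketch that uses the variables defined in the script below):

print(Full_Data['Output_Label'].value_counts().sort_index())   # instances per class after filtering
print(dummies.columns)     # one-hot columns, i.e. the order of the softmax outputs
print(dummies.shape[1])    # this is the size the last Dense layer has to match
print(dummies.columns[np.argmax(model.predict(test), axis=-1)])   # map argmax indices back to class names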

#UPDATE: I have included the model below.

# Imports assumed by the snippet below (the original import block was not shown)
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from tensorflow.compat.v1 import ConfigProto, InteractiveSession
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, GaussianNoise, LeakyReLU
from tensorflow.keras.layers.experimental import RandomFourierFeatures
from tensorflow.keras.optimizers import SGD

np.set_printoptions(suppress=True)
pd.set_option("display.max_rows", None, "display.max_columns", None)
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

Labeldata = ['Output_Label']

RelevantFeatures = ['column A','column B','column X']
RelevantFeaturesandlabel = ['column A','column B','column X','Output_Label']

PATH = 'Training_Data.xlsx'
PATHVa = 'Validation_Data.xlsx'

Full_Data = pd.read_excel(PATH)
ValidationFull = pd.read_excel(PATHVa)

# Which range of outputs should be included
Full_Data = Full_Data[(Full_Data['Output_Label'] >= 34) & (Full_Data['Output_Label'] <= 70)]
ValidationFull = ValidationFull[(ValidationFull['Output_Label'] >= 34) & (ValidationFull['Output_Label'] <= 70)]


FeatureDatadf = Full_Data.filter(items = RelevantFeatures, axis = 1)
Validation = ValidationFull.filter(items = RelevantFeatures, axis = 1)
ValidationLabel = ValidationFull.filter(items = Labeldata, axis = 1)
FeatureData = pd.DataFrame(StandardScaler().fit_transform(FeatureDatadf))
Validation = pd.DataFrame(StandardScaler().fit_transform(Validation))

FeatureData = FeatureData.apply(pd.to_numeric, errors='coerce')
FeatureData = FeatureData.to_numpy()
Validation = Validation.to_numpy()

#Standardisation
LabelData = Full_Data.filter(items = Labeldata, axis=1)
LabelData = LabelData.apply(pd.to_numeric, errors='coerce')
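# NOTE: the hard-coded dummies on the next line is immediately overwritten by the
# get_dummies call on the actual labels, so the one-hot columns only cover the
# classes that are present in the filtered data.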
dummies = pd.get_dummies(['34','35','36','37','38','39','40','41','42','43','44','45','46','47','48','49','50','51','52','53','54','55','56','57','58','59','60','61','62','63'], prefix = 'Size')
dummies = pd.get_dummies(LabelData['Output_Label'],prefix = 'Size')
LabelData = dummies.to_numpy()



# Split the sets up
Feature_train, Feature_test, Label_train, Label_test = train_test_split(FeatureData, LabelData, test_size=0.2)

# Model
model = Sequential()
model.add(Dense(26, activation = LeakyReLU(alpha=1), input_dim = 26,activity_regularizer=regularizers.l1(1e-4),use_bias=False))#209))
model.add(Dense(26,RandomFourierFeatures(output_dim=1024, scale=10.0, kernel_initializer="gaussian"),use_bias=False))
model.add(Dense(26,RandomFourierFeatures(output_dim=1024, scale=10.0, kernel_initializer="gaussian"),use_bias=False))
model.add(Dense(26,RandomFourierFeatures(output_dim=1024, scale=10.0, kernel_initializer="gaussian"),use_bias=False))
model.add(Dense(50,RandomFourierFeatures(output_dim=1024, scale=10.0, kernel_initializer="gaussian"),use_bias=False))


model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26,Dropout(0.4),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26,Dropout(0.4),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26,Dropout(0.4),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26,Dropout(0.4),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1),use_bias=False))



model.add(Dense(26,GaussianNoise(stddev = 0.5),use_bias=False))
model.add(Dense(26,GaussianNoise(stddev = 0.5),use_bias=False))
model.add(Dense(26,GaussianNoise(stddev = 0.5),use_bias=False))
model.add(Dense(26,GaussianNoise(stddev = 0.5),use_bias=False))
model.add(Dense(26,GaussianNoise(stddev = 0.5),use_bias=False))
#model.add(Dense(25,Normalization(),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1), activity_regularizer=regularizers.l1(1e-4),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1), activity_regularizer=regularizers.l1(1e-4),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1), activity_regularizer=regularizers.l1(1e-4),use_bias=False))
model.add(Dense(26, activation = LeakyReLU(alpha=1), activity_regularizer=regularizers.l1(1e-4),use_bias=False))


model.add(Dense(15, activation = 'softmax',use_bias=False))#Output is the number of classes
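# NOTE: the 15 here has to equal dummies.shape[1], i.e. the number of classes
# actually present in the training labels after filtering.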

#optimisation
opt = SGD(lr=0.001, momentum=0.9)

# Compile
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['categorical_accuracy'])

history = model.fit(Feature_train, Label_train, validation_data=(Feature_test, Label_test), epochs=500, verbose=1)

# evaluate the model
_, train_acc = model.evaluate(Feature_train, Label_train, verbose=1)
_, test_acc = model.evaluate(Feature_test, Label_test, verbose=1)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot loss during training
pyplot.subplot(211)
pyplot.title('Loss')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
# plot accuracy during training
pyplot.subplot(212)
pyplot.title('categorical_accuracy')
pyplot.plot(history.history['categorical_accuracy'], label='train')
pyplot.plot(history.history['val_categorical_accuracy'], label='test')
pyplot.legend()
pyplot.show()


prediction = model.predict(Validation)

Predictionsdf = pd.DataFrame(prediction, columns = dummies.columns)

Predictionsdf.to_excel('Preditions.xlsx', index = False)

#Save model
model.summary()

model.save(os.path.join('.', 'Output_Label.h5'))


score = metrics.log_loss(ValidationLabel, prediction)
print("Log loss score: {}".format(score))

The data structure as it is now: [screenshot of the data]

  1. How can I fix the model error and make my network include all of the classes?

UPDATED 2) How can I print the top 3 predicted classes, with their probabilities and their names?

UPDATED: So I have the following code for printing the predictions and their probabilities:

prediction = model_Chest.predict(test)
print(model_Chest.predict(test))
y_pred = model_Chest.predict(test)
# top_k has shape (N, k)
K=18
dummies = pd.get_dummies(['44','45', '46','47', '48','49', '50','51', '52','53', '54','55', '56','57', '58','59', '60', '61'], prefix = 'Size')
top_K = np.argsort(y_pred, -1)[:, :K]
names = dummies.columns.to_numpy()[top_K]
probs = np.take_along_axis(y_pred, top_K, -1)
print(names)
print(probs)

It is supposed to print the class names together with their probabilities, but I get: [screenshot of the output]

Upvotes: 0

Views: 94

Answers (1)

Aaron Keesing

Reputation: 1287

As mentioned in the comments, the number of classes in the dataset was only 15, which is why an output of 15 values is appropriate.
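
If a column for every possible size 34-63 is wanted regardless of which sizes occur in the data, one option (a sketch, not part of the original answer) is to give the labels an explicit category list before one-hot encoding, and then set the last Dense layer to 30:

LabelData['Output_Label'] = pd.Categorical(LabelData['Output_Label'], categories=list(range(34, 64)))
dummies = pd.get_dummies(LabelData['Output_Label'], prefix='Size')   # now always 30 columns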

To get the top k class probabilities you can use numpy.argsort and then use the DataFrame's columns to get the class names:

y_pred = model.predict(x)
k = 3  # e.g. the top 3 classes asked about in the question
# argsort sorts ascending, so reverse before slicing; top_k has shape (N, k)
top_k = np.argsort(y_pred, -1)[:, ::-1][:, :k]
names = dummies.columns.to_numpy()[top_k]
probs = np.take_along_axis(y_pred, top_k, -1)

names then contains the names of the top k classes for each instance in x and probs contains the corresponding probabilities.
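
For example, with k = 3 the top classes and their probabilities can be printed per instance (a short usage sketch reusing names and probs from above):

for i in range(len(names)):
    print('Instance {}: '.format(i) + ', '.join(
        '{} ({:.3f})'.format(n, p) for n, p in zip(names[i], probs[i])))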

Upvotes: 1
