Test12

Reputation: 7

Predicting a label of 5 different classes with tensorflow keras

I have the following problem: I have a dataset with 3D printer data and want to predict a label representing an error using a TensorFlow neural network. However, that label runs from 0 to 5 - how can I achieve this? Do I need multiple outputs? Because as I understand classification, it only assigns a label or not.

I can't find anything about this exactly, maybe because I don't know how to search for it - I'm quite new to this whole subject.

The data is either one-hot encoded or floats, and I am trying to use Keras Tuner to find hyperparameters for the network. I currently have it as this:

    def build_model_hp(self, hp, model_type):
        if model_type == 'standard':
            shape = (59,)
        elif model_type == 'expert':
            shape = (73,)
        else:
            shape = (60,)

        inputs = tf.keras.Input(shape=shape)
        x = inputs
        for i in range(hp.Int('hidden_blocks', 3, 10, default=3)):
            x = tf.keras.layers.Dense(hp.Int('hidden_size_'+str(i), 16, 256, step=16, default=16), activation='relu')(x)

        x = tf.keras.layers.Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5))(x)
        outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)

        model = tf.keras.Model(inputs, outputs)
        if (hp.Choice('optimizer', ['adam', 'sgd'])) == 'adam':
            opt = tf.keras.optimizers.Adam(
                hp.Float('learning_rate', 1e-4, 1e-2, sampling='log'))
        else:
            opt = tf.keras.optimizers.SGD(
                hp.Float('learning_rate', 1e-4, 1e-2), nesterov=True)
        model.compile(
            optimizer=opt,
            loss='binary_crossentropy',
            metrics=['accuracy'])
        return model

Upvotes: 0

Views: 846

Answers (1)

Gerry P

Reputation: 8102

If you have 6 classes with labels 0-5, change the output layer from

outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)

to

outputs = tf.keras.layers.Dense(6, activation='softmax')(x)
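The softmax output gives one probability per class, and the predicted label is the index of the largest probability. A minimal numpy sketch of that mapping (the logit values here are made up just to illustrate the shapes):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# pretend these are the raw pre-activation outputs of the final Dense(6) layer
logits = np.array([[1.2, 0.3, -0.5, 2.1, 0.0, -1.0]])
probs = softmax(logits)

print(probs.sum())       # probabilities over the 6 classes sum to 1
print(np.argmax(probs))  # predicted label in 0..5 -> 3
```

So a single output layer with 6 units covers all labels 0-5; you do not need separate outputs per class.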

Then change your model compile code from

model.compile(
            optimizer=opt,
            loss='binary_crossentropy',
            metrics=['accuracy'])

to what is shown below if your labels are one-hot encoded

model.compile(
            optimizer=opt,
            loss='categorical_crossentropy',
            metrics=['accuracy'])

If your labels are integers, then use

model.compile(
            optimizer=opt,
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])
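Which of the two losses you need depends only on how the labels are encoded. A quick numpy sketch of the two encodings for labels 0-5 (the sample labels are made up for illustration):

```python
import numpy as np

num_classes = 6
y_int = np.array([0, 3, 5, 1])     # integer labels -> sparse_categorical_crossentropy

# one-hot encode them -> categorical_crossentropy
y_onehot = np.eye(num_classes)[y_int]

print(y_onehot.shape)                # (4, 6): one row per sample, one column per class
print(np.argmax(y_onehot, axis=1))   # recovers the integer labels [0 3 5 1]
```

Keras also provides `tf.keras.utils.to_categorical` for the same one-hot conversion.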

After you train your model (assuming you one-hot encoded the labels and used loss='categorical_crossentropy'), do predictions on your test set:

from sklearn.metrics import confusion_matrix, classification_report
import numpy as np

classes = list(test_gen.class_indices.keys())
labels = test_gen.labels  # assumes you have a list of labels for each test sample
preds = model.predict(test_gen)
y_pred = np.argmax(preds, axis=1)  # class with the highest probability for each sample
y_true = np.array(labels)
cm = confusion_matrix(y_true, y_pred)
print("Confusion Matrix:\n", cm)
clr = classification_report(y_true, y_pred, target_names=classes)  # assumes classes is a list of your class names
print("Classification Report:\n----------------------\n", clr)

This assumes you have a test generator (test_gen) that yields batches of test data.

Upvotes: 1
