ram

Reputation: 31

How to penalize the loss of one class more than the other in tensorflow for a multi class problem?

Let's say my model has two classes, Class 1 and Class 2, and both classes have an equal amount of training and testing data. I want to penalize the loss for Class 1 more than for Class 2, so that Class 1 ends up with fewer false positives than Class 2 (i.e., I want the model to perform better on one class than on the other).

How do I achieve this in TensorFlow?

Upvotes: 1

Views: 2183

Answers (2)

dennlinger

Reputation: 11488

What you are looking for is probably weighted_cross_entropy_with_logits.
It follows the same idea as @Sazzad's answer, but is specific to TensorFlow. To quote the documentation:

This is like sigmoid_cross_entropy_with_logits() except that pos_weight allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.

It accepts an additional argument, pos_weight. Also note that this only works for binary classification, which is the case in the example you described; if there might be other classes besides those two, it would not apply.
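
For illustration, a minimal sketch of how this could be used (the tensor values are made up; the keyword names labels/logits/pos_weight follow the TF 2.x API, where the older targets argument was renamed to labels):

    import tensorflow as tf

    # Made-up example data: binary labels and raw model outputs (logits).
    labels = tf.constant([[1.0], [0.0], [1.0], [0.0]])
    logits = tf.constant([[2.0], [-1.5], [-0.5], [1.0]])

    # pos_weight > 1 penalizes errors on positive examples more heavily
    # (fewer false negatives); pos_weight < 1 penalizes them less
    # (fewer false positives).
    loss = tf.nn.weighted_cross_entropy_with_logits(
        labels=labels, logits=logits, pos_weight=5.0)

    mean_loss = tf.reduce_mean(loss)
    print(mean_loss.numpy())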

Upvotes: 1

Sazzad

Reputation: 853

If I understand your question correctly, this is not a TensorFlow-specific concept; you can write your own loss. For binary classification, the cross-entropy loss looks something like this (y is the true label and y_hat is the predicted probability):

loss = -[y * log(y_hat) + (1 - y) * log(1 - y_hat)]

Here class 0 and class 1 have the same weight in the loss, so you can give more weight to one of the two terms. For example:

loss = -[5 * y * log(y_hat) + (1 - y) * log(1 - y_hat)]
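
A minimal sketch of what such a hand-written loss could look like in TensorFlow (assuming TF 2.x; the function name weighted_bce and the weights w_pos/w_neg are illustrative, and y_pred is assumed to already be a probability, e.g. the output of a sigmoid):

    import tensorflow as tf

    def weighted_bce(y_true, y_pred, w_pos=5.0, w_neg=1.0, eps=1e-7):
        # Clip predictions away from 0 and 1 to avoid log(0).
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        # Weighted binary cross-entropy: w_pos scales the positive-class
        # term, w_neg the negative-class term.
        return -tf.reduce_mean(
            w_pos * y_true * tf.math.log(y_pred)
            + w_neg * (1.0 - y_true) * tf.math.log(1.0 - y_pred))

    # Made-up usage example.
    y_true = tf.constant([1.0, 0.0, 1.0, 0.0])
    y_pred = tf.constant([0.9, 0.2, 0.4, 0.6])
    print(weighted_bce(y_true, y_pred).numpy())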

Hope this answers your question.

Upvotes: 0
