Ayesha Khan

Reputation: 116

Tensorflow Output for custom object detection

I'm a newbie at using TensorFlow. Why am I getting so many metrics while training a custom TensorFlow 2.x object detection model?

Use fn_output_signature instead
INFO:tensorflow:Step 100 per-step time 8.507s
I0607 17:52:10.328038 16488 model_lib_v2.py:699] Step 100 per-step time 8.507s
INFO:tensorflow:{'Loss/classification_loss': 21.313017,
 'Loss/localization_loss': 2.1917934,
 'Loss/regularization_loss': 221.82864,
 'Loss/total_loss': 245.33345,
 'learning_rate': 0.014666351}
I0607 17:52:10.349220 16488 model_lib_v2.py:700] {'Loss/classification_loss': 21.313017,
 'Loss/localization_loss': 2.1917934,
 'Loss/regularization_loss': 221.82864,
 'Loss/total_loss': 245.33345,
 'learning_rate': 0.014666351}

Upvotes: 2

Views: 556

Answers (1)

Doruk Karınca

Reputation: 215

Intuitively, a loss "guides" the model along the correct learning trajectory. If there is no loss for a task, your model receives no training signal for getting better at that task, so training probably won't make it good at that task. By the same logic, if there are multiple things your model needs to get right in order to succeed, you can sum up an individual loss for each of those tasks.

That's what happens in your logs: at each step, the total loss is the sum of the classification loss, the localization loss, and the regularization loss. This means your model cares about classifying objects, localizing them, and keeping its parameters as simple as possible (regularization), all at once. Minimizing the total loss requires minimizing all three of these losses, as the quick check below illustrates.
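As a sanity check, you can plug the numbers from your own log into this sum and see that they add up to the reported total:

# Values copied from the 'Step 100' entry in the training log above
classification_loss = 21.313017
localization_loss = 2.1917934
regularization_loss = 221.82864

total_loss = classification_loss + localization_loss + regularization_loss
print(total_loss)  # ~245.33345, matching 'Loss/total_loss' in the log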

In practice, you can even multiply individual losses by constants before summing them to assign them relative importance, so that your model is incentivized to optimize the losses with bigger multipliers a little more than others:

total_loss = alpha*classification_loss + beta*localization_loss + gamma*regularization_loss

Here, the values of alpha, beta, and gamma are importance weights chosen by you.
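As a minimal sketch (this is not how the Object Detection API computes its losses internally; the weight values and function name here are made up for illustration), the weighted sum could look like this:

import tensorflow as tf

# Hypothetical importance weights; you would tune these yourself
alpha, beta, gamma = 1.0, 2.0, 0.5

def weighted_total_loss(classification_loss, localization_loss, regularization_loss):
    # Tasks with larger weights pull harder on the gradients during training
    return (alpha * classification_loss
            + beta * localization_loss
            + gamma * regularization_loss)

print(weighted_total_loss(tf.constant(21.313017),
                          tf.constant(2.1917934),
                          tf.constant(221.82864)))

If you're training with the TF Object Detection API, you usually don't write this yourself: if I remember the config schema correctly, the pipeline config's loss block exposes fields such as classification_weight and localization_weight that play the role of these multipliers.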

Upvotes: 1
