Reputation: 61
I have a model (3D U-Net, regression problem) that predicts PD and T1, the qMRI outputs, from the input. From these predictions I synthesize a T1-weighted image using the formula Weighted_images = PD * (1 - exp(-1 / (T1 + epsilon))), where epsilon is a small value to prevent division by zero and T1 >= 0. During training, my ground truth for loss computation is T1_Weighted_groundtruth. I also have ground-truth values for PD and T1, but they are not used directly in the loss; they only serve to check the correctness of the predicted PD and T1. The loss is computed with a loss function between T1_Weighted_predict and T1_Weighted_groundtruth.
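For reference, a minimal sketch of this setup, assuming PyTorch (the framework, `model`, and tensor shapes are my assumptions, not from the question):

```python
import torch
import torch.nn as nn

def t1_weighted(pd: torch.Tensor, t1: torch.Tensor, epsilon: float = 1e-6) -> torch.Tensor:
    # Synthesize a T1-weighted image from predicted PD and T1 maps.
    return pd * (1.0 - torch.exp(-1.0 / (t1 + epsilon)))

def training_loss(model: nn.Module, x: torch.Tensor, t1w_gt: torch.Tensor) -> torch.Tensor:
    # Hypothetical: the 3D U-Net returns a 2-channel volume (PD, T1).
    pred = model(x)                          # shape: (B, 2, D, H, W)
    pd_pred, t1_pred = pred[:, 0:1], pred[:, 1:2]
    t1_pred = torch.relu(t1_pred)            # enforce T1 >= 0
    t1w_pred = t1_weighted(pd_pred, t1_pred)
    # Loss is computed only on the synthesized T1-weighted image.
    return nn.functional.mse_loss(t1w_pred, t1w_gt)
```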
However, various combinations of PD and T1 can yield similar values for T1_Weighted. For instance, in CSF (an obvious example), instead of predicting a high T1 value (the correct answer), my model might predict a very low PD value. Is there a way to compel my model to predict the correct values, or at least to constrain it to plausible combinations?
Upvotes: 0
Views: 87
Reputation: 3
Try penalizing PD and T1 separately as well, accounting for their scales relative to each other and to the combined metric. Alternatively, apply their loss only when the difference exceeds a threshold or the ratio between them is wrong.
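A hedged sketch of such a composite loss in PyTorch (the scales, weights, and threshold are illustrative assumptions, not values from the question):

```python
import torch
import torch.nn.functional as F

def composite_loss(pd_pred, t1_pred, t1w_pred,
                   pd_gt, t1_gt, t1w_gt,
                   pd_scale=1.0, t1_scale=1.0,   # assumed typical magnitudes of PD and T1
                   w_pd=0.1, w_t1=0.1,           # relative weight of the auxiliary terms
                   rel_threshold=0.2):           # gate: ignore errors below this fraction
    # Main term: match the synthesized T1-weighted image.
    loss = F.mse_loss(t1w_pred, t1w_gt)

    # Auxiliary terms: penalize PD and T1 directly, normalized by their scales
    # so neither dominates, and gated so small deviations contribute nothing.
    pd_err = torch.abs(pd_pred - pd_gt) / pd_scale
    t1_err = torch.abs(t1_pred - t1_gt) / t1_scale
    pd_pen = torch.clamp(pd_err - rel_threshold, min=0.0)
    t1_pen = torch.clamp(t1_err - rel_threshold, min=0.0)

    return loss + w_pd * pd_pen.mean() + w_t1 * t1_pen.mean()
```

The auxiliary terms break the degeneracy between PD and T1: the T1-weighted term alone cannot distinguish a low-PD solution from a high-T1 one, but the gated per-map penalties push the network toward the measured PD and T1 without letting small errors in those maps overwhelm the main objective.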
Upvotes: 0