Reputation: 1
I am working on an image-regression model and I got these evaluation metrics for a RandomForestRegressor. Are they good or not?
Test set evaluation:
Train set evaluation:
Do the metrics show a good result or not?
Upvotes: -1
Views: 532
Reputation: 147
A metric's pertinence depends on the context. For example, if small errors matter less than big errors, MSE is a good choice, because as the error gets small the MSE gets even smaller, and as the error grows the MSE grows even faster (picture the graph of the square function). If you want to penalize errors proportionally to their size, without that extra amplification, MAE is better. See RMSE or MAE on Cross-Validated for more details.
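To make that concrete, here is a minimal sketch with toy numbers (not your data): doubling every error doubles the MAE but quadruples the MSE.

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [0.0, 0.0, 0.0, 0.0]
off_by_one = [1.0, 1.0, 1.0, 1.0]  # every prediction off by 1
off_by_two = [2.0, 2.0, 2.0, 2.0]  # every prediction off by 2

print(mean_absolute_error(y_true, off_by_one),  # 1.0
      mean_squared_error(y_true, off_by_one))   # 1.0
print(mean_absolute_error(y_true, off_by_two),  # 2.0 -> MAE doubles
      mean_squared_error(y_true, off_by_two))   # 4.0 -> MSE quadruples
```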
Moreover, each metric's scale depends on the specific problem: for example, an MSE around 10 for predicting values in the range [0, 20] is very bad, while the same MSE for predicting values in the range [0, 1 000 000] is very good.
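One way to see this (a sketch using the numbers above, nothing from your model) is to express the RMSE as a fraction of the target range:

```python
import numpy as np

mse = 10.0
rmse = np.sqrt(mse)  # ~3.16

for lo, hi in [(0, 20), (0, 1_000_000)]:
    print(f"range [{lo}, {hi:,}]: RMSE is {100 * rmse / (hi - lo):.4f}% of the range")
# range [0, 20]: RMSE is 15.8114% of the range        -> very bad
# range [0, 1,000,000]: RMSE is 0.0003% of the range  -> very good
```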
However, you can see that the errors on the train set are far lower than the errors on the test set. This is a clear sign of overfitting, which is not desired at all.
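This is the kind of check involved, as a minimal sketch on synthetic data (the dataset, split and hyperparameters are placeholders, not yours):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for your image features/targets
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

train_mse = mean_squared_error(y_train, model.predict(X_train))
test_mse = mean_squared_error(y_test, model.predict(X_test))
print(f"train MSE: {train_mse:.1f}, test MSE: {test_mse:.1f}")
# A test MSE several times larger than the train MSE is the
# overfitting pattern described above.
```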
Setting overfitting aside, whether the test errors you get are good or not depends on how precise you want to be and on how large the target values can be. From my experience, an MSE of 100 is good when values range inside [0, 10 000] (that is an RMSE of 10, roughly 0.1% of the range).
However, in your case the showcased results are very likely to be unsatisfactory, because of the overfitting.
Upvotes: 0