Reputation: 327
How do you compute these metrics when the classes have no positive/negative meaning, but simply represent something neutral?
Let's say, for example, that we have a classification problem with two classes that each represent a person (John, Alex), and you want to classify new instances into one of them. The objective is to find out whether the new person looks like John or like Alex. How do you compute recall and precision then?
Upvotes: 2
Views: 454
Reputation: 66805
Usually in situations like this there is no single "precision"; what you can do (and what people usually do) is report two precisions, in your case:

- precision with John treated as the positive class,
- precision with Alex treated as the positive class.
In other words, you simply treat each class as the positive one in turn and report multiple precisions. Some metrics (like accuracy) do not have this problem because they are symmetric. With asymmetric ones (like precision or F1) you have to do one of three things (see the sketch after this list):

- report the value separately for each class (as above),
- average the per-class values (e.g. macro-averaging),
- fix one class as the "positive" one and report only that.
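A minimal sketch of these three options with scikit-learn; the labels and predictions below are made up purely for illustration:

```python
from sklearn.metrics import precision_score, recall_score

# Toy labels, purely for illustration
y_true = ["John", "John", "Alex", "Alex", "John", "Alex"]
y_pred = ["John", "Alex", "Alex", "Alex", "John", "John"]

# Option 1: one precision/recall per class (treat each class as "positive" in turn)
for positive in ["John", "Alex"]:
    p = precision_score(y_true, y_pred, pos_label=positive)
    r = recall_score(y_true, y_pred, pos_label=positive)
    print(f"{positive}: precision={p:.2f}, recall={r:.2f}")

# Option 2: average the per-class values (macro-averaging)
print(precision_score(y_true, y_pred, average="macro"))

# Option 3: pick one class as "positive" and report only that
print(precision_score(y_true, y_pred, pos_label="John"))
```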
As a final remark: there is no such thing as a "general way of doing this", since every approach answers a different question. Once you can precisely define the question your model is trying to answer, you can choose the metric that fits it.
For example, if your question is "I want to maximize the probability of correctly classifying a never-before-seen object x, sampled from the same data source as my training set", then the answer is given by accuracy, not precision or recall.
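As a sketch (reusing the made-up y_true/y_pred arrays from above):

```python
from sklearn.metrics import accuracy_score

# Fraction of correctly classified instances, regardless of which class is "positive"
print(accuracy_score(y_true, y_pred))
```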
Upvotes: 2