Introduction
You have built your first classification model, perhaps a random forest, an XGBoost model, a neural network, or a carefully crafted logistic regression. Now you want to know how to measure how good this classification model actually is. That is exactly the topic of this article. For the sake of discussion, we will use a sample classification problem: among 100 cash transactions, you have to find out which ones are fraudulent. So essentially it is a fraud vs. non-fraud classification, and we will base our examples on this case.
What are we going to learn?
We will learn the following topics here:
- Accuracy
- Precision & recall
- F1-score
Accuracy:
Accuracy is the ratio of the number of test cases correctly predicted to the total number of test cases. Accuracy is a simple yet powerful metric to denote how "accurate" your model is. Accuracy is good enough on its own when the classes are balanced and no class is more important than the others. For example, suppose your model classifies between 4 different species of flower and none of them occurs any more rarely than the others. In such a case, given reliable test data, the accuracy of the model gives a good idea of how good the model is.
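The definition above is a one-liner in code. Here is a minimal sketch with made-up labels (1 = fraud, 0 = non-fraud); the data is illustrative, not from a real dataset:

```python
def accuracy(y_true, y_pred):
    """Ratio of correctly predicted cases to total cases."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Illustrative labels: 6 of the 8 predictions match the truth.
y_true = [0, 0, 1, 1, 0, 1, 0, 0]
y_pred = [0, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))  # -> 0.75
```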
Problems with accuracy occur when we have a high imbalance in the classes, or when one of the classes is more important than the others. For example, out of 100 transactions, at most 1 or 2 will be fraudulent. Therefore, even if you label all the cases as non-fraud, your accuracy will still be 98-99%. But that number hides the real problem with your predictions. This is why we turn to precision and recall, a slightly more nuanced pair of metrics.
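The imbalance pitfall is easy to demonstrate. In this sketch the labels are made up to match the article's scenario (2 frauds among 100 transactions), and the "model" simply never predicts fraud:

```python
# 1 = fraud, 0 = non-fraud. Two real frauds hidden in 100 transactions.
y_true = [1, 1] + [0] * 98
y_pred = [0] * 100  # a trivial "model" that flags nothing as fraud

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # -> 0.98, despite catching zero frauds
```

A 98% accurate model that misses every fraud is useless for this problem, which is exactly why accuracy alone cannot be trusted here.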
Precision and Recall:
Precision and recall, unlike accuracy, make sense only in a binary setup. You consider one class as the target class and the rest as not-target. In such a setup, precision is the ratio of the number of target cases correctly predicted to the total number of cases predicted as the target class. Recall is the ratio of the number of target cases correctly predicted to the total number of actual target cases.
I will elaborate on the above definitions using the fraud detection example. Suppose there are actually 3 frauds in the sample along with 97 non-fraud transactions, and your model predicts 7 frauds in total, of which 2 are real frauds and the other 5 are non-fraud.
Then, considering fraud as the target class, the recall is 2 out of 3, i.e. 0.66.
And the precision is 2 out of 7, i.e. 0.28.
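The worked example above can be recomputed from counts of true positives, false positives, and false negatives. The labels below are constructed to match the article's numbers (3 real frauds, 7 flagged, 2 hits); since F1-score is on our topic list, the sketch also applies the standard formula F1 = 2PR / (P + R), the harmonic mean of precision and recall:

```python
# 1 = fraud (target class), 0 = non-fraud.
y_true = [1] * 3 + [0] * 97
# 2 frauds caught, 1 fraud missed, 5 non-frauds wrongly flagged, 92 correct.
y_pred = [1] * 2 + [0] * 1 + [1] * 5 + [0] * 92

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)  # 2 / 7, about 0.28
recall = tp / (tp + fn)     # 2 / 3, about 0.66
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean -> 0.4
print(precision, recall, f1)
```

Note how the model's 94% accuracy on this data would look fine, while a precision of 0.28 makes the weakness obvious.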