
Accuracy Measures for Classification Problems in Machine Learning



Introduction

You have built your first classification model, whether a random forest, XGBoost, a neural network, or a carefully crafted logistic regression. Now you want to know how to measure the accuracy of this classification model. That is exactly what we will discuss in this article. For the sake of the discussion, we will use a sample classification problem: among 100 cash transactions, you have to find out how many are fraudulent. So essentially it is a fraud vs. non-fraud classification. We will base our examples on this case.

What are we going to learn?

We will learn the following topics here:

  1. Accuracy
  2. Precision & recall
  3. F1-score
So let's dive in.

Accuracy:

Accuracy is the ratio of the number of test cases correctly predicted to the total number of test cases. It is a simple yet powerful metric that denotes how "accurate" your model is. Accuracy is good enough for classification problems that do not suffer from class imbalance and where no class matters significantly more than the others. A genuine example for this case: suppose your model classifies between 4 different classes of flowers, and none of them occurs any more rarely than the others. Then, if you have reliable test data, accuracy alone will give a good idea of how good the model is.
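As a quick illustration, the definition above fits in a few lines of plain Python. The label lists here are made up purely for the example (1 = fraud, 0 = non-fraud), not drawn from any real dataset:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical labels for 10 transactions: 1 = fraud, 0 = non-fraud.
y_true = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
y_pred = [0, 0, 1, 0, 1, 0, 0, 0, 0, 0]

print(accuracy(y_true, y_pred))  # 0.8 -- 8 of the 10 predictions are correct
```

In practice you would usually call a library function such as `accuracy_score` from scikit-learn rather than hand-rolling this, but the computation is exactly the one shown.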

Problems with accuracy occur when the classes are highly imbalanced or one class is more important than the others. For example, out of 100 transactions, at most 1 or 2 may be fraudulent. Therefore, even if you label every case as non-fraud, your accuracy will still be 98-99%. But that does not reflect the real problem with your predictions. This is why we consider precision and recall, a slightly more involved pair of metrics.

Precision and Recall:

Precision and recall, unlike accuracy, make sense only in a binary setup. You consider one class as the target class and the rest as not-target. In such a setup, precision is the ratio of the number of target cases correctly predicted to the total number of cases predicted as the target class. Recall is the ratio of the number of target cases correctly predicted to the total number of actual target cases.

I will elaborate on the above definitions using the fraud detection example. Let's say there are in reality 3 frauds in the sample and 97 non-fraud transactions, and your model predicts 7 frauds in total, out of which 2 are actually fraud and the other 5 are non-fraud.
Then, considering fraud as the target class, the recall is 2 out of 3, i.e. approximately 0.67.
And the precision is 2 out of 7, i.e. approximately 0.29.
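These numbers can be reproduced with a short sketch in Python. The `precision_recall` helper and the way the 100-transaction sample is constructed are illustrative choices for this post, not a particular library's API:

```python
def precision_recall(y_true, y_pred, target=1):
    """Precision and recall for one target class in a binary setup."""
    # True positives: cases that are the target class AND predicted as such.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == target and p == target)
    predicted = sum(1 for p in y_pred if p == target)  # all predicted frauds
    actual = sum(1 for t in y_true if t == target)     # all real frauds
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    return precision, recall

# The 100-transaction example: 3 real frauds among 97 legitimate ones.
# The model flags 7 transactions as fraud, only 2 of them correctly.
y_true = [1] * 3 + [0] * 97
y_pred = [1] * 2 + [0] * 1 + [1] * 5 + [0] * 92

p, r = precision_recall(y_true, y_pred)
print(round(p, 2), round(r, 2))  # 0.29 0.67
```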

F1-score:

Last but not least, the final accuracy-equivalent score built from precision and recall is the F1-score. The F1-score is the harmonic mean of precision and recall, i.e. F1-score = 2 * precision * recall / (precision + recall). It is often used in place of accuracy, as it depicts model quality on imbalanced problems better than the plain accuracy metric does.
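A minimal sketch of the formula, plugging in the precision and recall from the fraud example above (the function name is just illustrative):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Fraud example: precision = 2/7, recall = 2/3.
print(f1_score(2 / 7, 2 / 3))  # approximately 0.4
```

Note how the harmonic mean punishes the low precision: even though recall is a respectable 0.67, the F1-score lands at only 0.4, which is a far more honest summary of this fraud model than its 95% accuracy.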

Conclusion:

These are the basic metrics used to judge how accurate your classification model is. But depending on business criteria or the specific type of classification, you may also need other standard or custom metrics. Thanks for reading! For more such posts, please subscribe and check out the other machine learning posts on this page!
