
Accuracy Measures for Classification Problems in Machine Learning



Introduction

You have built your first classification model, perhaps a random forest, XGBoost model, neural network, or a carefully tuned logistic regression. Now you want to know how to measure how accurate this classification model is. That is exactly what we discuss in this article. For the sake of discussion, we will use a sample classification problem: among 100 cash transactions, you have to find out how many are fraudulent. So essentially it is a fraud vs. non-fraud classification, and we will base our examples on this case.

What are we going to learn?

We will learn the following topics here:

  1. Accuracy
  2. Precision & recall
  3. F1-score
So let's dive in.

Accuracy:

Accuracy is the ratio of the number of test cases correctly predicted to the total number of test cases. It is a simple yet powerful metric for denoting how "accurate" your model is. Accuracy suffices when the classification problem does not suffer from class imbalance and no class is significantly more important than the others. A genuine example: suppose your model classifies between 4 different classes of flower, none of which is any rarer in occurrence than the others. In such a case, given reliable test data, accuracy alone will give a good idea of how good the model is.

Problems with accuracy occur when the classes are highly imbalanced or one class matters more than the others. For example, out of 100 transactions, at most 1 or 2 may be fraudulent. Even if you label every case as non-fraud, your accuracy will still be 98-99%, yet that figure hides the real problem with your predictions. This is why we turn to precision and recall, a slightly more involved pair of metrics.
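The imbalance problem above can be sketched in a few lines of plain Python. Note that the labels here are invented for illustration, mirroring the 100-transaction example rather than any real model output:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# 100 transactions, 2 of which are truly fraud; the "model" predicts
# non-fraud for everything and still scores very high accuracy.
y_true = ["fraud"] * 2 + ["non-fraud"] * 98
y_pred = ["non-fraud"] * 100
print(accuracy(y_true, y_pred))  # 0.98, despite catching zero frauds
```

The 0.98 result is exactly the trap described above: the metric looks excellent while the model misses every single fraud.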

Precision and Recall:

Precision and recall, unlike accuracy, make sense only in a binary setup: you consider one class as the target class and the rest as non-target. In such a setup, precision is the ratio of the number of target cases correctly predicted to the total number of cases predicted as the target class. Recall is the ratio of the number of target cases correctly predicted to the total number of actual target cases.

Let's elaborate on these definitions using the fraud detection example. Say there are really 3 frauds in the sample and 97 non-fraud transactions, and your model predicts 7 frauds in total, of which 2 are actual frauds and the other 5 are non-fraud.
Then, taking fraud as the target class, the recall is 2 out of 3, i.e. ≈ 0.67,
and the precision is 2 out of 7, i.e. ≈ 0.29.
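The worked example can be checked with a small sketch (again, the label lists are constructed by hand to match the 3-frauds / 7-predictions scenario, not taken from a real model):

```python
def precision_recall(y_true, y_pred, target="fraud"):
    """Precision and recall for one target class in a binary setup."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == target and p == target)
    predicted_pos = sum(1 for p in y_pred if p == target)  # all predicted frauds
    actual_pos = sum(1 for t in y_true if t == target)     # all real frauds
    return tp / predicted_pos, tp / actual_pos

# 3 real frauds among 100 transactions; the model flags 7 in total,
# 2 of which are real frauds and 5 are false alarms.
y_true = ["fraud"] * 3 + ["non-fraud"] * 97
y_pred = (["fraud"] * 2 + ["non-fraud"] * 1      # 2 of 3 real frauds caught
          + ["fraud"] * 5 + ["non-fraud"] * 92)  # 5 false alarms
p, r = precision_recall(y_true, y_pred)
print(round(p, 2), round(r, 2))  # 0.29 0.67
```

This reproduces the numbers above: precision 2/7 ≈ 0.29 and recall 2/3 ≈ 0.67.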

F1-score:

Last but not least, the final accuracy-equivalent score built from precision and recall is the F1-score. The F1-score is the harmonic mean of precision and recall:

F1-score = 2 × precision × recall / (precision + recall)

It is commonly used in place of accuracy because, especially under class imbalance, it reflects performance on the target class better than the plain accuracy metric does.

Conclusion:

These are the basic metrics for measuring how accurate your classification model is. For specific business criteria or particular types of classification, you may also need other standard or custom metrics. Thanks for reading! For more posts like this, please subscribe and browse the other machine learning posts on this page!
