Introduction:
This week, I took part in a competition on a whim, expecting to spend a pleasant weekend on the problem. But what started as a simple enough problem grew complex, and in the end it tackled me instead of the other way around. This article summarizes my mistakes and the two specific lessons I learned from them.
Summary of the article:
Recently I took part in a hackathon and tried my ML skills on tabular data. For a few reasons, I wasn't able to crack the problem. The main findings: I used the wrong encoding method, and I didn't do enough model testing.
The good:
In this project, the task was to predict, for a healthcare company, whether a prospect would become a customer, based on various policy-related data. The dataset contained information about the customer as well as the policy. The customer-related features included the maximum and minimum ages mentioned in the policy, how many years the policy had been active, the city code, the region code, and so on. The policy-related features included the suggested policy amount, a health indicator code, the policy code, etc.
In the basic modeling, I left out the city and region codes because they are high-cardinality categorical features. After getting to know the data, I created a bunch of engineered features and then tried four models: XGBoost, random forest, extra-trees, and a LightGBM classifier.
With these, I reached a score of around 60%. An important thing to note: the data was imbalanced, so class weights were needed to get from 50% to 60% performance.
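To make this concrete, here is a minimal sketch of that baseline comparison. The data here is synthetic and the settings are placeholders; in the hackathon the inputs were the engineered customer and policy features.

```python
# Sketch of the four-model baseline with class weights for the imbalance.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# Synthetic stand-in for the competition data: imbalanced binary target.
X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.9, 0.1], random_state=42)

# class_weight="balanced" (scale_pos_weight for XGBoost) compensates for
# the imbalance -- this is what lifted the score from ~50% to ~60%.
neg, pos = (y == 0).sum(), (y == 1).sum()
models = {
    "random_forest": RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=42),
    "extra_trees": ExtraTreesClassifier(n_estimators=300, class_weight="balanced", random_state=42),
    "xgboost": XGBClassifier(n_estimators=300, scale_pos_weight=neg / pos, eval_metric="logloss"),
    "lightgbm": LGBMClassifier(n_estimators=300, class_weight="balanced", random_state=42),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean()
    print(f"{name}: roc_auc = {auc:.3f}")
```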
Also in this exercise, I fine-tuned a LightGBM model. It took me some time to read the documentation and tune it, but that improved the score by around 2-3%.
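The tuning was done by hand, one knob at a time. Below is an illustrative sketch of the parameters I iterated on; the values shown are placeholders, not my final configuration.

```python
# Hand-tuning sketch for LightGBM; values are illustrative placeholders.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=30,
                           weights=[0.9, 0.1], random_state=42)

model = LGBMClassifier(
    n_estimators=500,
    learning_rate=0.05,
    num_leaves=63,           # main capacity knob; keep it below 2**max_depth
    max_depth=8,
    min_child_samples=40,    # raise to fight overfitting on sparse features
    subsample=0.8,           # row sampling per tree...
    subsample_freq=1,        # ...only takes effect with a bagging frequency
    colsample_bytree=0.8,    # feature sampling per tree
    class_weight="balanced",
    random_state=42,
)
print(cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean())
```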
Another important observation was that the extra-trees classifier performed better than random forest. My hunch is that the large number of categorical features is what gave extra-trees the edge.
The bad:
Then, at around a 61% roc_auc_score, things started to go bad. First of all, once I started adding the city and region codes (the categorical features I had left out in the first phase of modeling), random forest and XGBoost never got above 63%. This is when I started fine-tuning the LightGBM model.
The missed opportunity:
Around 5-6 hours into the problem, I realized the models were not getting up the curve because of the sparseness created by one-hot encoding the high-cardinality categorical features. The problem was that although I had read about label encoding as a fix, I chose to try my hand at PCA to compress the signal instead.
Starting from around 4k features, I used PCA to compact 99.99% of the data's variance into a mere 300 features. This, plus the LightGBM modeling, got me to my final score (72.5%).
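For reference, here is a minimal sketch of that compaction step. The column names and cardinalities are made up; on the real data the one-hot step produced roughly 4k columns.

```python
# One-hot encode high-cardinality categoricals, then compress with PCA.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(42)
# Hypothetical stand-ins for the city/region code columns.
df = pd.DataFrame({
    "city_code": rng.integers(0, 600, size=5000).astype(str),
    "region_code": rng.integers(0, 40, size=5000).astype(str),
})

# sparse_output=False needs scikit-learn >= 1.2 (use sparse=False before that).
ohe = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
X_ohe = ohe.fit_transform(df)

# A float n_components keeps just enough components to explain that fraction
# of the variance; on the competition data 99.99% landed at ~300 features.
pca = PCA(n_components=0.9999)
X_compact = pca.fit_transform(X_ohe)
print(X_ohe.shape, "->", X_compact.shape)
```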
What are the learnings?
The trick in this competition was to use a CatBoost model with label encoding, and I thoroughly managed to miss both. I have been religiously using one-hot encoding since the beginning of time, and this hackathon opened my eyes to the fact that, just as one should try multiple models, one should also try all the available encoding methods. In particular, for high-cardinality categorical data, label encoding seems to be the advisable choice, as far as my reading since then suggests. The other miss was CatBoost itself. For some reason I had the very wrong intuition that if XGBoost was failing, the other boosters would fail too. But CatBoost stands for categorical boosting; I should have tried my hand at it.
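Having read up afterwards, here is a minimal sketch of what that combination looks like. The data and column names are placeholders; note that CatBoost can also consume the raw string categories directly via cat_features, without any manual encoding.

```python
# Label-encode high-cardinality categoricals and let CatBoost treat them
# as categorical features -- no 4k-column one-hot blow-up.
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.preprocessing import LabelEncoder

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "city_code": rng.integers(0, 600, size=5000).astype(str),
    "region_code": rng.integers(0, 40, size=5000).astype(str),
})
y = rng.integers(0, 2, size=5000)  # placeholder target

# Label encoding: one integer column per feature.
for col in ["city_code", "region_code"]:
    df[col] = LabelEncoder().fit_transform(df[col])

model = CatBoostClassifier(iterations=300, verbose=False,
                           cat_features=["city_code", "region_code"])
model.fit(df, y)
```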
The silver lining:
This competition was an eye-opener. In my desperation to get to 80%, I really dived deep into the features and tuned the number of PCA-based components. This was also the first time I tuned a LightGBM model to a near-perfect setting, entirely by hand. And the best silver lining was learning two big lessons in one fall.
The best solutions:
Needless to say, after so much ranting, you deserve to hear from the best. So here are links to a few of the top solutions.
(a) Rank 2 solution to the problem
(b) Rank 5 solution
(c) Rank 10 solution
(d) The magic solution by shobhit upadhyay
I hope you liked the story and learned a thing or two!