
Is it fine to start learning deep learning with a basic knowledge of machine learning?


A small story:

A junior data scientist walks up to a senior and asks, "My model accuracy is not good; can you take a look?" The senior looks at the model and sees that it is a self-attention model without pre-training, trained on a 400-row dataset. This is what can happen when you delve into deep learning too fast without some experience in machine learning. But then, how much machine learning knowledge is needed to start working on deep learning?

Why are people rushing to deep learning?


Deep learning, the branch of machine learning built on neural networks, dates back to the 1950s. After two AI winters, now, when processor speeds are at an all-time high and computation costs at an all-time low, deep learning is booming in both academia and industry. From Tesla to OpenAI, from Stanford to MIT, researchers everywhere are working on exciting new things; the new architectures are now too many to even count, and so new students and professionals alike are rushing towards deep learning like moths to a flame.
So yes, the world of deep learning is ever expanding. Every year or two there is a breakthrough in some subfield of deep learning, and along with that breakthrough comes a barrage of research papers, software, state-of-the-art architectures, and much else. The important thing is that if you want to stay up to date in deep learning, you need to read these and practice with them.

The balance and how to find it:

The problem in front of us, therefore, is to balance the huge task of staying up to date with deep learning against learning enough solid machine learning to become a competent data scientist.

The way to resolve this is to solve each problem with the appropriate technology and architecture. For example, if you have a small dataset (fewer than 1,000 rows) with a small number of features, none of which are text or other series- or sequence-type data, then normal machine learning techniques are enough; there is no need to reach for a heavier technology like deep learning or self-attention models.
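To make that concrete, here is a minimal sketch of the classical route on a small tabular dataset, using scikit-learn. The file name and column names are hypothetical placeholders, and the features are assumed to be numeric; the point is that a tree ensemble with cross-validation trains in seconds at this scale, with no GPU and no pre-training.

# A minimal sketch of the classical route, assuming a small tabular
# dataset with numeric features. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("small_dataset.csv")  # e.g. a few hundred rows
X = df.drop(columns=["target"])
y = df["target"]

# A tree ensemble is a strong, low-effort baseline at this data size.
model = RandomForestClassifier(n_estimators=200, random_state=42)
scores = cross_val_score(model, X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

If this baseline already performs well, a deep model on the same 400 rows would most likely only overfit.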
In doing so, you will learn two things:
(1) Applying the proper model in the proper scenario. When you are not on a modeling-only platform and have to apply models to real-life data, you will pick the right model for the right setting, and so save both computation and your reputation. Also, in business, unlike in academic settings, models are expected to be explainable glass-box models rather than leaderboard-driven, accuracy-oriented models whose inner workings are rarely discussed.
(2) The basics of a normal modeling workflow: tuning, pruning, cleaning, imputing, reporting, visualizing, feature engineering by hand and through business understanding, and a thousand other things that are never taught when you simply fit a black-box model to your data for an impressive end result. A small sketch of such a workflow follows this list.
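Here is a minimal sketch of that kind of workflow: imputing missing values, scaling, and fitting an explainable glass-box model whose coefficients can be reported directly to a stakeholder. The file and column names are hypothetical, not from any real dataset.

# A minimal sketch of the basics named above: imputing, scaling,
# and an explainable model. File and column names are hypothetical.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("customers.csv")
X = df[["age", "income", "tenure_months"]]  # hand-picked numeric features
y = df["churned"]

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill missing values
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipe.fit(X, y)

# Unlike a black box, the fitted coefficients can be explained to a
# business stakeholder feature by feature.
for name, coef in zip(X.columns, pipe.named_steps["model"].coef_[0]):
    print("%s: %+.3f" % (name, coef))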

In conclusion:

So the final advice is: it is okay to start deep learning with a basic knowledge of machine learning, but you should learn the basic nuances of model training before getting too deep into deep learning. And always treat a model like food, i.e. "you should know what you put into your mouth", because just like your body, your data and your model's results are sensitive to what goes in.
Thanks for reading! I generally write about technical topics, so visit some of my other posts if you are interested in machine learning. Have a great day!
