
Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville: A Review


History:

I have been reading about deep learning topics from a number of resources, such as Machine Learning Mastery by Jason Brownlee, Analytics Vidhya, and other blogs. But one problem has persisted: inconsistency in the knowledge. So I have decided to finally sit down and work through a deep learning book thoroughly. And what better name in deep learning than Ian Goodfellow! So I have picked up the book Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.

Introduction:

The plan for this post is to review and rewrite the topics from the book in simpler language, sharing those pieces of knowledge with my readers. I will keep updating this post continuously as I proceed with the reading. So ideally, this post is a broad discussion of deep learning material, from basic to advanced.

 

Different parts of the book and their purposes:

This book has three parts, which cover:
(1) applied mathematics and machine learning basics
(2) deep learning theory and practice
(3) deep learning research

Now, to build a rock-solid foundation in deep learning concepts and fundamentals, I am going through this book almost line by line; and so should you, if you want to get the real essence of it.

Applied mathematics and machine learning basics:

This part consists of the first five chapters. If you are even in the second year of an engineering degree or whatever tech course you are in, this part should be a good education for you. People who finished their bachelor's or master's years ago and want to revise the concepts to fine-tune their learning can skim through this first part. Having said that, this part contains a lot of good discussion of different algorithms, how they come into play, and the theory behind them; something missing from many machine learning courses and books, and even from core subject textbooks.

Therefore, my suggestion is to read the first part of the book quickly but thoroughly, regardless of where your math/ML basics stand. It is also an easy primer for getting familiar with the authors' writing style.

The first chapter of this part, on linear algebra, starts with mere definitions of vector spaces and matrices, but quickly picks up the pace and covers the linear algebra we repeatedly meet in deep learning: matrix factorization and the theory behind data compression.

The book discusses eigenvalues, PCA, SVD, and other such topics well. But in my opinion, for understanding all types of deep learning algorithms, these are sufficient but not exhaustive. You should definitely take a course on linear algebra, or watch the lecture videos of the MIT linear algebra course by Gilbert Strang. That will give you the conceptual and visual intuition about matrices that is often necessary to imagine different ML, and especially deep learning, theories.
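To make the matrix-factorization connection concrete, here is a minimal NumPy sketch of PCA via the SVD; the data values are made up for illustration, and this is just one standard way to relate singular values to the eigenvalues of the covariance matrix:

```python
import numpy as np

# Toy data: 6 samples, 3 features (hypothetical values).
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 1.9],
              [2.2, 2.9, 0.3],
              [1.9, 2.2, 0.8],
              [3.1, 3.0, 0.1],
              [2.3, 2.7, 0.6]])

# Center the data, as PCA requires.
Xc = X - X.mean(axis=0)

# SVD: Xc = U @ diag(S) @ Vt; the rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first principal component (a rank-1 compression).
X_compressed = Xc @ Vt[0]

# Eigenvalues of the covariance matrix follow from the singular values:
# lambda_i = S_i**2 / (n - 1)
eigvals = S**2 / (len(X) - 1)
```

Keeping only the leading components of `Vt` is exactly the data-compression idea the chapter builds toward.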

After this first chapter, the authors cover the probability and information theory concepts needed. Starting from the definition of a random variable, the basics (independence and dependence of variables, conditional and marginal distributions and the related calculations) and the properties of variables (expectation, variance, covariance, and so on) are very finely approached, with a level of mathematical rigor you will otherwise find only in textbooks dedicated to those subjects.
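As a toy illustration of two of those properties, expectation and variance, for a discrete random variable (the outcomes and probabilities below are made up):

```python
import numpy as np

# A discrete random variable X with outcomes x and probabilities p.
x = np.array([0.0, 1.0, 2.0])
p = np.array([0.2, 0.5, 0.3])

expectation = np.sum(p * x)                    # E[X] = sum_i p_i * x_i
variance = np.sum(p * (x - expectation) ** 2)  # Var(X) = E[(X - E[X])^2]

print(round(expectation, 2), round(variance, 2))   # 1.1 0.49
```

The same definitions extend to covariance between two variables, Cov(X, Y) = E[(X - E[X])(Y - E[Y])], which the chapter treats with the same care.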

The authors even venture into measure theory and its impact on probability theory. For a basic level of deep learning and machine learning, such rigorous treatment may not be needed, and beginners in college may not be able to follow it. So if you have trouble understanding parts here, you can skip them and come back later if some issue arises in the middle portion of the book because of a gap in these concepts. While this material is tough, as a math major I thoroughly enjoyed the rigor, and the simple explanations that survive that rigor, and would like to thank the authors for it.

The third chapter of this part is actually very relevant, and much more important than the previous two, as it focuses on the numerical computation we use in deep learning and large-scale machine learning problems. It motivates, discusses, and improves one's understanding of different optimization problems, and properly covers the star algorithms: stochastic gradient descent and constrained optimization methods.

Still, being a basics chapter, it does not go far into recent optimization techniques, such as Bayesian optimization and other alternatives to the gradient descent family. Some advanced algorithms will need such optimizers, and you will have to read about them from other resources.
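To show the core idea behind the chapter's star algorithm, here is a minimal sketch of stochastic gradient descent fitting a one-variable linear model; the data, learning rate, and epoch count are all made-up illustration choices, not anything prescribed by the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 1 plus a little noise (hypothetical example).
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + 1 + 0.1 * rng.normal(size=200)

w, b = 0.0, 0.0   # parameters, initialized at zero
lr = 0.1          # learning rate (a hyperparameter)

for epoch in range(50):
    for i in rng.permutation(len(X)):   # one randomly chosen sample per update
        pred = w * X[i, 0] + b
        err = pred - y[i]               # d/dpred of the squared-error loss 0.5*err**2
        w -= lr * err * X[i, 0]         # gradient step on w
        b -= lr * err                   # gradient step on b

print(w, b)   # w, b end up close to the true values 3 and 1
```

The key contrast with batch gradient descent, which the chapter draws out, is that each update here uses a single sample's gradient rather than the full dataset's.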

The final chapter of this part (chapter 5 of the book) covers machine learning basics. I have been working in machine learning for about 1.5 years now, and I found some of the descriptions really informative and beautifully explained. What the reader will find most interesting in this chapter is its view of machine learning algorithms: the authors treat the hyperparameters, the cost function, and the optimization algorithm as separate parts of the whole model, making a model sound like a pipeline.

This view not only facilitates understanding of complex models, it also facilitates the creation of new and hybrid machine learning models. This chapter is especially recommended for people with no machine learning experience prior to this book, and is also a good read for people who have been doing machine learning for only a few years.
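That decomposed view can be sketched in a few lines; the function names below are mine, not the book's, but the structure (cost function, its gradient, and an optimizer as swappable parts) is the point the chapter makes:

```python
import numpy as np

def mse_cost(w, X, y):
    """Cost component: mean squared error."""
    return np.mean((X @ w - y) ** 2)

def mse_grad(w, X, y):
    """Gradient of the MSE cost with respect to w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def gradient_descent(grad, w0, lr=0.1, steps=500):
    """Optimizer component: plain batch gradient descent."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Assemble the pieces into a "pipeline" and fit a toy linear model.
rng = np.random.default_rng(1)
X = np.c_[np.ones(100), rng.normal(size=100)]          # bias column + one feature
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.normal(100)  # true weights: [1, 2]
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.normal(size=100)
w = gradient_descent(lambda w: mse_grad(w, X, y), np.zeros(2))
```

Swapping any one component (say, a different cost or a different optimizer) yields a different learning algorithm, which is exactly why this view makes hybrid models easy to imagine.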

The last section properly motivates why, and for which problems, we need deep learning algorithms, and provides a mathematical motivation for the same.

Here ends the journey through the basics and the ordinary; from here on, my friend, we enter the realm of deep learning.

Part 2: Deep learning fundamentals:

I am reading up on it. Stay tuned for further updates.
