
Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville: Review


History:

I have been reading about deep learning from a number of resources like Machine Learning Mastery by Jason Brownlee, Analytics Vidhya, and other blogs. But one problem has persisted: the knowledge I pick up is inconsistent. So I have decided to sit down and go through a deep learning book thoroughly. And what better name for deep learning than Ian Goodfellow! That is how I landed on the book Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville.

Introduction:

The plan for this post is to review and rewrite the topics from the book in simpler language, and to share those pieces of knowledge with my readers. I will keep updating this post as I proceed with the reading. So ideally, this post is broadly a discussion of basic to advanced deep learning material.

 

Different parts of the book and their purpose:

This book has three parts, which cover:
(1) applied mathematics and machine learning basics
(2) deep learning theory and practice
(3) deep learning research

Now, to build a rock-solid foundation in deep learning concepts and fundamentals, I am going through this book almost line by line; and so should you, if you want to get the real essence of this book.

Applied mathematics and machine learning basics:

This part consists of the first five chapters. If you are only in your second year of engineering, or whatever tech course you are in, this part should be a good education for you. People who finished their bachelor's or master's years ago, and want to revise the concepts to fine-tune their learning, can skim through this first part. Having said that, this part contains a lot of good discussion of different algorithms, how they come into play, and the theory behind them; something missing from many machine learning courses, books, and even core subject textbooks.

Therefore, my suggestion is to read the first part of the book quickly but thoroughly, regardless of where your maths/ML basics stand. It is also an easy primer for getting familiar with the authors' writing style.

The first chapter of this part, on linear algebra, starts with nothing more than the definitions of vector spaces and matrices, but quickly picks up the pace and covers the linear algebra we repeatedly run into in deep learning: matrix factorization and the theory behind data compression.

The book covers eigenvalues, PCA, SVD and related topics well. But in my opinion, while this is sufficient for understanding most deep learning algorithms, it is not exhaustive. You should definitely take a course on linear algebra, or watch the lecture videos from Gilbert Strang's MIT linear algebra course. That will give you the conceptual and visual intuition about matrices which is often necessary to picture different ML and especially deep learning ideas.
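To make the "matrix factorization as compression" idea concrete, here is a minimal NumPy sketch of my own (not code from the book) relating SVD to low-rank reconstruction and PCA; the matrix X and the chosen rank are purely illustrative.

import numpy as np

# A small data matrix: 6 samples, 4 features (illustrative values only)
X = np.array([[2.0, 0.5, 1.0, 0.1],
              [1.8, 0.4, 1.1, 0.2],
              [0.2, 3.0, 0.1, 2.9],
              [0.1, 2.8, 0.2, 3.1],
              [1.0, 1.0, 1.0, 1.0],
              [2.1, 0.6, 0.9, 0.0]])

# Center the data so that the SVD of X_c corresponds to PCA
X_c = X - X.mean(axis=0)

# Singular value decomposition: X_c = U @ diag(S) @ Vt
U, S, Vt = np.linalg.svd(X_c, full_matrices=False)

# Rank-2 reconstruction: the "data compression" view of SVD
k = 2
X_approx = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
print("reconstruction error:", np.linalg.norm(X_c - X_approx))

# PCA: project the centered data onto the top-k right singular vectors
Z = X_c @ Vt[:k].T                       # principal component scores
explained = (S[:k] ** 2) / (S ** 2).sum()
print("variance explained by 2 components:", explained.sum())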

After that first chapter, the authors cover the probability and information theory concepts we need. Starting from the definition of a random variable, they explain the basics: independence and dependence of variables, conditional and marginal distributions and the related calculations; and properties of variables such as expectation, variance and covariance. All of this is approached with a level of mathematical rigour you would otherwise only find in dedicated textbooks on these subjects.
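Just to tie that vocabulary to something runnable, here is a small sketch of mine (not from the book) estimating expectation, variance and covariance from samples of two dependent random variables; the particular distribution is only an example.

import numpy as np

rng = np.random.default_rng(0)

# Two dependent random variables: Y = 2X + noise
x = rng.normal(loc=1.0, scale=2.0, size=10_000)
y = 2.0 * x + rng.normal(scale=0.5, size=10_000)

# Sample estimates of the quantities the chapter defines
print("E[X]       ~", x.mean())                  # expectation
print("Var(X)     ~", x.var())                   # variance
print("Cov(X, Y)  ~", np.cov(x, y)[0, 1])        # covariance
print("Corr(X, Y) ~", np.corrcoef(x, y)[0, 1])   # correlation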

The authors even venture into measure theory and its role in probability theory. For basic deep learning and machine learning, such a rigorous treatment may not be needed, and beginners in college may not be able to follow it. So if you are having trouble with parts of this chapter, you can skip them and come back later, when something in the middle of the book trips you up because of these missing concepts. It is tough material, but as a math major I thoroughly enjoyed the rigor, and the explanations that stay simple despite it, and I thank the authors for that.

The next chapter is actually very relevant, and much more important than the previous two, as it focuses on the numerical computation we rely on in deep learning and big-data machine learning problems. This chapter motivates, discusses and improves one's understanding of different optimization problems, and properly covers the star algorithms: stochastic gradient descent and constrained optimization methods.
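To give a feel for what that star algorithm actually does, here is a bare-bones stochastic gradient descent sketch for linear regression. This is my own toy example, not code from the book; the data, learning rate and batch size are arbitrary.

import numpy as np

rng = np.random.default_rng(42)

# Toy regression data: y = X @ w_true + noise
n, d = 1_000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=n)

w = np.zeros(d)           # parameters we want to learn
lr, batch_size = 0.1, 32  # hyperparameters (chosen arbitrarily)

for epoch in range(20):
    idx = rng.permutation(n)
    for start in range(0, n, batch_size):
        b = idx[start:start + batch_size]
        # gradient of the mean squared error on the mini-batch
        grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad    # the stochastic gradient step

print("learned w:", w)    # should end up close to w_true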

Still, being a basics chapter, it does not go very far into recent optimization techniques, such as Bayesian optimization and other alternatives to the gradient descent family. Some advanced algorithms require those methods, and you will have to read about them from other resources.

The final chapter (chapter 5) of the basics part is machine learning basics. I have been working on machine learning for about 1.5 years now, and I still found some of the descriptions really informative and beautifully explained. What readers will find most interesting in this chapter is its view of machine learning algorithms. The authors treat the hyperparameters, the cost function and the optimization algorithm as separate parts of the whole model, which makes the model sound like a pipeline.

This view not only makes it easier to understand complex models, it also makes it easier to create hybrid and new machine learning models. This chapter is especially recommended for people with no machine learning experience before this book, and it is also a good read for people who have been doing machine learning for only a few years.
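That "pipeline of interchangeable parts" view can be sketched in a few lines of Python. The decomposition below is my own rough illustration of the idea, not the book's code, and the function names are made up.

import numpy as np

# A machine learning "recipe" split into interchangeable parts:
# a dataset, a model family, a cost function and an optimization procedure.

def model(X, w):                    # model family: linear predictor
    return X @ w

def cost(w, X, y):                  # cost function: mean squared error
    return np.mean((model(X, w) - y) ** 2)

def optimize(cost_fn, w0, X, y, lr=0.05, steps=500):  # optimizer: plain gradient descent
    w = w0.copy()
    for _ in range(steps):
        # central-difference numerical gradient, good enough for a tiny illustration
        grad = np.array([
            (cost_fn(w + eps, X, y) - cost_fn(w - eps, X, y)) / (2 * 1e-5)
            for eps in np.eye(len(w)) * 1e-5
        ])
        w -= lr * grad
    return w

# Swapping any single component (another cost, another optimizer,
# other hyperparameters) gives a different algorithm from the same recipe.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -1.0]) + rng.normal(scale=0.1, size=200)
w_hat = optimize(cost, np.zeros(2), X, y)
print("fitted weights:", w_hat)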

The last section properly motivates why, and for which problems, we need deep learning algorithms; a mathematical motivation is provided as well.

Here ends the journey through the basics and the ordinary; from here on, my friend, we enter the realm of deep learning.

Part 2: the deep learning fundamentals:

I am reading up on it. Stay tuned for further updates.
