
Tuesday, August 22, 2017

Course review: Neural Networks and Deep Learning

If you have been in the Machine Learning space, you know of the visionary and my favorite machine learning scientist Andrew Ng. Not a lot of people can claim the titles of Stanford professor, Coursera cofounder, founder of the Google Brain project and chief scientist at Baidu, so I have every reason to look up to him. His latest venture is DeepLearning.ai, a startup for Deep Learning training, which has launched a Deep Learning Specialization on Coursera. I'm not exactly a beginner in the field; in addition to doing the Machine Learning course on Coursera, I did other ML courses at school and did my thesis on an ML-related project. I thought it would be a good idea to do the specialization as a refresher, learn a different approach and advance on to deeper networks. I had my motivations, but none of them was as motivating as this tweet.



I just finished the first course, Neural Networks and Deep Learning, and I'd like to share a few nuggets.
The specialization is intended for beginners comfortable with Python and linear algebra who want to build neural network classifiers in 4 weeks. It builds up to neural networks from the simple logistic regression classifier. Andrew slowly goes through matrix operations, gradient descent and backpropagation, from a single neuron up to a deep neural network. This pace lets you develop intuition for the concepts without feeling overwhelmed. There are also programming assignments for the final 3 weeks, leading up to the final task of building a cat image classifier.
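If you haven't seen that build-up before, here is roughly what the "single neuron" starting point looks like: a minimal numpy sketch of logistic regression trained with batch gradient descent. This is my own illustration rather than the course's assignment code, and the toy dataset, learning rate and iteration count are arbitrary choices for the example.

# A minimal sketch (not the course's assignment code) of logistic regression
# as a single neuron, trained with batch gradient descent using only numpy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, iterations=1000):
    """X has shape (n_features, m_examples); y has shape (1, m_examples)."""
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for _ in range(iterations):
        a = sigmoid(w.T @ X + b)   # forward pass: predicted probabilities
        dz = a - y                 # gradient of the loss with respect to z
        dw = (X @ dz.T) / m        # backprop to the weights
        db = np.sum(dz) / m        # backprop to the bias
        w -= lr * dw               # gradient descent update
        b -= lr * db
    return w, b

# Toy data: 2 features, 4 examples, linearly separable labels.
X = np.array([[0.0, 1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0, 0.0]])
y = np.array([[0, 0, 1, 1]])
w, b = train_logistic_regression(X, y)
print(sigmoid(w.T @ X + b))  # predictions move toward 0, 0, 1, 1

The course does essentially this, then stacks more neurons and layers on top of the same forward/backward pattern.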


I completed it with full marks, thanks to the following features that made it a whole lot more enjoyable:
  • It guides you to develop intuitions about neural networks. Although it may sound annoying, Andrew keeps repeating the same equations and concepts across a lot of the videos, and it really does help them stick. Some videos are even dedicated to debugging numpy code as well as matrix equations (a sketch of the kind of shape gotcha those videos tackle follows this list).

  • The programming assignments require no setup, letting you focus on the concepts. They are done in Jupyter Notebooks hosted on Coursera's hub. This is a feature I'm sure complete beginners will appreciate, since the last thing you want is installation errors.
  • My very favourite feature was Heroes of Deep Learning, where Andrew interviews figures like Ian Goodfellow, the inventor of Generative Adversarial Networks; Geoff Hinton, one of the inventors of backpropagation, who also introduced backprop for word embeddings, Boltzmann machines and deep belief nets, among others; and finally Pieter Abbeel, who has developed algorithms that enable helicopters to do advanced aerobatics. Reading about these heroes is very different from actually hearing them talk; I mean, I never would have guessed that Geoff Hinton struggled to get a job and even tried carpentry while figuring out his research interests. They all gave very good advice on how to get into Machine Learning, and I was both surprised and delighted that they all emphasized project-based learning instead of swallowing all the papers written to date (which is obviously impossible).
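As promised above, here is the sort of numpy shape gotcha those debugging videos deal with. The snippet is my own illustration, not course material: it contrasts a rank-1 array with an explicit column vector, which is the distinction the matrix equations quietly rely on.

# My own illustration of a common numpy shape pitfall, not code from the course.
import numpy as np

a = np.random.randn(5)      # rank-1 array: shape (5,), neither a row nor a column vector
print(a.shape)              # (5,)
print(np.dot(a, a.T))       # a.T does nothing here, so this is just a scalar dot product

b = np.random.randn(5, 1)   # explicit column vector
print(np.dot(b, b.T).shape) # (5, 5) outer product, which is what the matrix equations expect

# Committing to explicit shapes (and asserting them) avoids silent broadcasting surprises.
a = a.reshape(5, 1)
assert a.shape == (5, 1)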
There were times when it felt very repetitive, but it's a tradeoff I'm willing to make, and it's also good in the long run. I genuinely enjoyed the course, so here's to finishing the remaining 4 courses in the specialization and doing rad deep things! Enjoy the rest of your week!