These Four Subjects Will Help You Get Started with Deep Learning
Have you ever tried to get started with a deep learning framework, only to give up after finding the experience incredibly frustrating? First, there are all those algorithms you have never seen before, assembled into complex structures that are really hard to understand. Then there are the mathematical operations that take you back to your university years. Finally, there is the strange structure of the code, which reads like no program you have seen before. Well, don’t feel bad, because you are far from alone in that experience.
Mastering deep learning can be an incredibly frustrating experience for most developers. Deep learning programs are “deeply” mathematical, and it is hard to get a good sense of any algorithm without understanding its mathematical foundation. Additionally, deep learning builds on the foundations of machine learning algorithms, which are essential to understanding the functionality of any model. So there are really no shortcuts if you want to become a strong deep learning technologist, but understanding what to learn can make a huge difference. In that spirit, I would like to give you a few recommendations about several areas that I believe will drastically improve your understanding of deep learning techniques.
What to Learn to Deep Learn
There are several disciplines that influence deep learning applications, and the list can get overwhelming. However, in my experience, there are four specific subjects that I would recommend to any aspiring deep learning practitioner.
Most deep learning algorithms perform several complex operations on data structures such as matrices, vectors, and tensors, which are the core subject of linear algebra. While most computer science curriculums include some basics of linear algebra, we need to go a bit deeper to understand the operations used in deep learning models. From the fundamentals of matrix addition, multiplication, and inversion to sophisticated topics such as eigendecomposition or the Moore-Penrose pseudoinverse, a deep understanding of linear algebra is key to mastering deep learning models.
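As a minimal sketch of those operations in NumPy (the matrices here are arbitrary, chosen purely for illustration):

```python
import numpy as np

# A small square matrix to illustrate the core linear-algebra operations.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

B = A + A                      # matrix addition
C = A @ A                      # matrix multiplication
A_inv = np.linalg.inv(A)       # inversion: A @ A_inv is the identity

# Eigendecomposition: A = V diag(w) V^-1 for a diagonalizable A.
w, V = np.linalg.eig(A)

# The Moore-Penrose pseudoinverse is defined even for non-square
# matrices, where a true inverse does not exist.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
M_pinv = np.linalg.pinv(M)

print(np.allclose(A @ A_inv, np.eye(2)))                   # True
print(np.allclose(V @ np.diag(w) @ np.linalg.inv(V), A))   # True
print(np.allclose(M @ M_pinv @ M, M))                      # True
```

Each `allclose` check verifies a defining property of the operation: the inverse recovers the identity, the eigendecomposition reconstructs the original matrix, and the pseudoinverse satisfies M · M⁺ · M = M.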
Numerical computation is the branch of mathematics that focuses on solving complex problems by iteratively estimating the solution. The approach followed by numerical computation techniques contrasts with traditional methods of discrete mathematics or logic that solve problems by codifying a formula. One of the key areas of numerical computation is Gradient-Based Optimization which can be considered the common denominator to every deep learning algorithm.
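To make the idea of iteratively estimating a solution concrete, here is a tiny gradient descent loop minimizing the one-dimensional loss f(x) = (x − 3)², whose minimum sits at x = 3 (the learning rate and step count are illustrative choices):

```python
# Gradient-based optimization in miniature: instead of solving
# f'(x) = 0 in closed form, we repeatedly step against the gradient.
def gradient_descent(start, lr=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)   # analytic gradient of f(x) = (x - 3)^2
        x -= lr * grad       # move a small step downhill
    return x

x_min = gradient_descent(start=0.0)
print(round(x_min, 4))  # converges to 3.0
```

This is the same recipe every deep learning framework applies, just scaled up: the loss is a function of millions of parameters, and the gradients come from backpropagation rather than a hand-derived formula.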
Deep learning models can be seen as computational graphs for quantifying uncertainty, and they require a mathematical framework to express statements about uncertainty. That’s the role of probability theory, which provides a mathematical foundation for reasoning under uncertainty. From classic probability distributions to the world of Bayesian statistics, probability theory has a strong footprint in deep learning models.
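Bayes’ rule is the workhorse of that kind of reasoning: it updates a prior belief into a posterior once evidence arrives. A small worked example, using hypothetical numbers for a diagnostic test (1% prevalence, 99% sensitivity, 5% false-positive rate):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers below are hypothetical, for illustration only.
p_h = 0.01                 # prior: P(condition)
p_e_given_h = 0.99         # likelihood: P(positive test | condition)
p_e_given_not_h = 0.05     # false-positive rate: P(positive | no condition)

# Total probability of a positive test, over both hypotheses.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior belief after observing a positive test.
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # 0.167
```

Note how the posterior (about 17%) stays far below the test’s 99% sensitivity because the prior is so low; this gap between likelihood and posterior is exactly what probability theory keeps you honest about.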
Not much to say here. Machine learning algorithms provide the basics to understand deep learning models. Many layers of deep learning programs are combinations of machine learning algorithms. Getting an understanding of the fundamental machine learning techniques is a key requirement before diving into the deep learning universe.
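One way to see the connection is that logistic regression, a classic machine learning algorithm, computes exactly what a single dense layer with a sigmoid activation computes in a deep network. A minimal sketch, trained with gradient descent on a toy OR-style dataset (the data and hyperparameters are illustrative):

```python
import numpy as np

# Logistic regression == one "neuron": a linear map followed by a sigmoid.
# Toy task: learn the OR function on 2D binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.5          # learning rate (illustrative choice)

for _ in range(2000):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
    grad_z = (p - y) / len(y)      # gradient of mean cross-entropy loss
    w -= lr * (X.T @ grad_z)       # gradient step on the weights
    b -= lr * grad_z.sum()         # gradient step on the bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())  # [0, 1, 1, 1]
```

Stack several of these units, replace the sigmoid with other activations, and chain the gradient computation through the layers, and you have a deep network; the fundamentals carry over directly.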