To truly understand the power of deep learning, you need to grasp the mathematical concepts that make it tick. Math for Deep Learning will give you a working knowledge of probability, statistics, linear algebra, and differential calculus: the essential math subfields required to practice deep learning successfully. Each subfield is explained with Python code and hands-on, real-world examples that bridge the gap between pure mathematics and its applications in deep learning.

The book begins with fundamentals such as Bayes' theorem before progressing to more advanced concepts like training neural networks using vectors, matrices, and derivatives of functions. You'll then put all this math to use as you explore and implement backpropagation and gradient descent, the foundational algorithms that have enabled the AI revolution.

You'll learn how to:

- Use statistics to understand datasets and evaluate models
- Apply the rules of probability
- Manipulate vectors and matrices to move data through a neural network
- Use linear algebra to implement principal component analysis and singular value decomposition
- Implement gradient-based optimization techniques like RMSprop, Adagrad, and Adadelta

The core math concepts presented in Math for Deep Learning will give you the foundation you need to unlock the potential of deep learning in your own applications.
Math for Deep Learning: What You Need to Know to Understand Neural Networks