
Thursday, March 14, 2019

Laying out the Look of Deep Learning

A dive into the math of deep learning: not very deep, more of a structural look. It reminds me of many books I read back in the 90s that laid out the architecture of neural networks and detailed their computational operations, but never really got at the math, leaving you wanting to ask 'why' and 'how'.

Piotr Skalski

Deep Dive into Math Behind Deep Networks
Nowadays, having at our disposal many high-level, specialized libraries and frameworks such as Keras, TensorFlow or PyTorch, we do not need to constantly worry about the size of our weight matrices or remember the formula for the derivative of the activation function we decided to use. Often all we need to create a neural network, even one with a very complicated structure, is a few imports and a few lines of code. This saves us hours of searching for bugs and streamlines our work. However, knowledge of what is happening inside the neural network helps a lot with tasks like architecture selection, hyperparameter tuning or optimisation.
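As a minimal sketch of what "a few imports and a few lines of code" looks like in practice (assuming TensorFlow's Keras API; the layer sizes and activations below are arbitrary illustrative choices, not taken from Skalski's article):

    from tensorflow import keras
    from tensorflow.keras import layers

    # A small fully connected classifier: the framework infers the shape
    # of every weight matrix from the layer sizes, so we never have to
    # declare or track those dimensions by hand.
    model = keras.Sequential([
        keras.Input(shape=(100,)),              # 100 input features
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary output
    ])

    # Compiling wires in the loss and optimizer; backpropagation,
    # including the derivative of each activation function, is
    # handled automatically.
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.summary()

Note how nothing here exposes the underlying matrix algebra, which is exactly the convenience, and the blind spot, that the article goes on to unpack.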

Introduction

To understand more about how neural networks work, I decided to spend some time this summer taking a look at the mathematics that hides under the surface. I also decided to write an article, a bit for myself, to organize newly learned information, and a bit for others, to help them understand these sometimes difficult concepts. I will try to be as gentle as possible for those who feel less comfortable with algebra and differential calculus, but as the title suggests, it will be an article with a lot of math ...
