Backpropagation explained | Part 3 – Mathematical observations


In our last video, we covered the mathematical notation and definitions we'll be using to show how backpropagation calculates the gradient of the loss function. We'll start applying what we learned in this video, so it's crucial that you have a full understanding of everything covered there first.

Here, we're going to make some mathematical observations about the training process of a neural network. These observations are facts we already know conceptually; we'll now just express them mathematically. We're making them because the backprop math that comes next, particularly the differentiation of the loss function with respect to the weights, will make use of them.
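To make the goal concrete, here is a sketch of the derivative we'll ultimately compute, written in the notation from the last video (assuming $C_0$ is the loss for a single training sample, $L$ is the output layer, $w_{jk}^{(L)}$ is the weight connecting node $k$ in layer $L-1$ to node $j$ in layer $L$, $z_j^{(L)}$ is that node's input, and $a_j^{(L)}$ is its output). The chain rule breaks this derivative into factors that come directly from the observations below:

$$\frac{\partial C_0}{\partial w_{jk}^{(L)}} = \frac{\partial C_0}{\partial a_j^{(L)}} \, \frac{\partial a_j^{(L)}}{\partial z_j^{(L)}} \, \frac{\partial z_j^{(L)}}{\partial w_{jk}^{(L)}}$$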

We'll start with an observation about how to express the loss function mathematically. Then we'll observe how to express the input and the output of any given node. Lastly, we'll observe which method we'll use to differentiate the loss function via backpropagation.
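As a rough preview, and assuming the squared-error loss, a single training sample, and the notation from the last video (bias terms omitted for simplicity), these observations can be sketched as follows. The loss sums the squared errors over the $n$ nodes of the output layer $L$:

$$C_0 = \sum_{j=0}^{n-1} \left( a_j^{(L)} - y_j \right)^2$$

The input to node $j$ in layer $l$ is a weighted sum of the previous layer's outputs:

$$z_j^{(l)} = \sum_{k} w_{jk}^{(l)} \, a_k^{(l-1)}$$

The output of that node applies the layer's activation function $g^{(l)}$ to its input:

$$a_j^{(l)} = g^{(l)}\!\left( z_j^{(l)} \right)$$

The last observation, about the method of differentiation, is simply that the loss is a composition of these expressions, so we differentiate it using the chain rule, working backwards from the output layer.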
