In our last video on backpropagation, we covered the intuition behind backpropagation's role in training an artificial neural network. Now, we're going to focus on the math underlying backprop. The math is pretty involved, so we're going to break it up into bite-sized chunks across a few videos.
We'll start this video by quickly recapping how backpropagation is used during the training process. Then, we'll jump over to the math side of things and open our discussion by going over the notation and definitions that we'll be using for our backprop calculations going forward.
These definitions and this notation will be the focus of the video. The math underlying backprop relies heavily on what we introduce here, so it's crucial to understand these things before moving forward.
Lastly, we'll narrow our focus to discuss the indices that the notation depends on.
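As a preview, the notation covered in the series is likely along the lines of the standard index-based notation for feedforward networks (the exact symbols used in the video may differ). A sketch of that notation, where $l$ indexes layers, $j$ indexes nodes in layer $l$, and $k$ indexes nodes in layer $l-1$:

```latex
% Standard (assumed) notation for a feedforward network:
% w_{jk}^{(l)} : weight connecting node k in layer l-1 to node j in layer l
% b_{j}^{(l)}  : bias of node j in layer l
% z_{j}^{(l)}  : weighted input to node j in layer l
% a_{j}^{(l)}  : activation of node j in layer l, with activation function g
z_{j}^{(l)} = \sum_{k} w_{jk}^{(l)} \, a_{k}^{(l-1)} + b_{j}^{(l)},
\qquad
a_{j}^{(l)} = g\!\left(z_{j}^{(l)}\right)
```

Backprop then computes the gradient of the loss with respect to each $w_{jk}^{(l)}$ and $b_{j}^{(l)}$ by repeatedly applying the chain rule through these definitions, which is why keeping the three indices $l$, $j$, and $k$ straight matters so much in the upcoming derivations.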