In this video, we’re going to discuss backpropagation and its role in the training process of a neural network. We’ll start with a quick recap of the points about Stochastic Gradient Descent that we covered in previous videos. Then we’ll talk about where backpropagation comes into the picture, and we’ll spend the majority of our time discussing the intuition behind what backpropagation is actually doing.
We’ll be building on concepts covered in two previous videos: one on what it means to train an artificial neural network, and one on how a network learns. If you haven’t seen those yet, go ahead and check them out now, and then come back to this video once you’ve finished up there.