Backpropagation explained


In this video, we’re going to discuss backpropagation and its role in the training process of a neural network. We’ll start with a quick recap of some of the points about stochastic gradient descent that we learned in previous videos. Then, we’ll talk about where backpropagation comes into the picture, and we’ll spend the majority of our time discussing the intuition behind what backpropagation is actually doing.
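
To make that role concrete, here is a minimal sketch (not from the video) of a single stochastic gradient descent step on a one-weight model. The gradient used in the update is exactly the quantity that backpropagation computes via the chain rule; the names w, x, target, and lr are illustrative assumptions.

```python
# Minimal sketch (not from the video): one stochastic gradient descent step
# on a single-weight model y = w * x. The gradient used in the update is the
# quantity that backpropagation computes via the chain rule.
# All names (w, x, target, lr) are illustrative assumptions.

w = 0.5                  # current weight
x, target = 2.0, 3.0     # one training sample
lr = 0.1                 # learning rate

# Forward pass
y = w * x
loss = (y - target) ** 2          # squared-error loss

# Backward pass: chain rule gives d(loss)/dw = 2 * (y - target) * x
grad_w = 2 * (y - target) * x

# SGD update: move the weight a small step against the gradient
w = w - lr * grad_w
print(f"loss = {loss:.4f}, d(loss)/dw = {grad_w:.4f}, updated w = {w:.4f}")
```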

We’re going to be building on concepts that we covered in a couple of previous videos, which explained what it means to train an artificial neural network and how a network learns. If you haven’t seen those yet, go ahead and check them out now, and then come back to this video once you finish up there.

