We go through normal Gradient Descent before finishing up with Stochastic Gradient Descent, an optimisation technique that really sped up the training of neural networks.
Code: https://github.com/sachinruk/deepschool.io/tree/master/DL-Keras_Tensorflow (lesson 2)
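To see the difference between the two, here is a minimal NumPy sketch (not the lesson's code, just an illustrative assumption): fitting y = w·x + b by minimising mean squared error, once with full-batch gradient descent and once with SGD, which takes one cheap update per sample instead of averaging over the whole dataset.

```python
import numpy as np

# Toy data: y = 3x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 1.0 + rng.normal(scale=0.1, size=100)

def batch_gd(X, y, lr=0.1, epochs=200):
    """Full-batch gradient descent: one update per pass over ALL data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        err = (w * X + b) - y            # residuals over the whole dataset
        w -= lr * 2 * np.mean(err * X)   # gradient of MSE w.r.t. w
        b -= lr * 2 * np.mean(err)       # gradient of MSE w.r.t. b
    return w, b

def sgd(X, y, lr=0.05, epochs=20):
    """Stochastic gradient descent: one update per (shuffled) sample."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            err = (w * X[i] + b) - y[i]  # residual for a single sample
            w -= lr * 2 * err * X[i]
            b -= lr * 2 * err
    return w, b

print(batch_gd(X, y))  # both should land near w = 3.0, b = 1.0
print(sgd(X, y))
```

Both recover roughly the same parameters, but SGD gets there with many cheap noisy updates rather than a few expensive exact ones, which is what makes it scale to large training sets.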