Week 5 – Lecture: Optimisation

Mar 17, 2020
1:29:05

Course website: http://bit.ly/DLSP20-web
Playlist: http://bit.ly/pDL-YouTube
Speaker: Aaron DeFazio
Week 5: http://bit.ly/DLSP20-05

0:00:00 – Week 5 – Lecture

LECTURE Part A: http://bit.ly/DLSP20-05-1
We begin by introducing gradient descent and the intuition behind it, and discuss how the choice of step size affects whether and how quickly we reach the solution. We then move on to stochastic gradient descent (SGD) and compare its behaviour with full-batch gradient descent. Finally, we cover momentum updates: the two equivalent update rules, the intuition behind momentum, and its effect on convergence (a sketch of the two rules follows below).

0:01:28 – Gradient Descent
0:14:58 – Stochastic Gradient Descent
0:27:52 – Momentum

LECTURE Part B: http://bit.ly/DLSP20-05-2
We discuss adaptive methods for SGD such as RMSprop and Adam (also sketched below). We then cover normalization layers and their effect on the neural-network training process. Finally, we look at a real-world example of neural nets being used in industry to make MRI scans faster and more efficient.

0:44:35 – Adaptive Methods
1:05:07 – Normalization Layers
1:20:17 – The Death of Optimization
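
To make the momentum discussion in Part A concrete, here is a minimal sketch of the two update rules on a toy quadratic. It is not taken from the lecture, and the step size, momentum constant, and iteration count are assumed values. Rule 1 keeps an explicit velocity buffer; rule 2 (the "heavy ball" form) reuses the previous iterate instead, and starting from zero velocity the two produce identical iterates.

import numpy as np

A = np.diag([1.0, 10.0])          # ill-conditioned quadratic f(w) = 0.5 w^T A w

def grad(w):
    return A @ w                  # gradient of f; the minimum is at the origin

gamma, beta = 0.05, 0.9           # step size and momentum constant (assumed values)

# Rule 1: maintain an explicit velocity buffer v.
w, v = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(200):
    v = beta * v + grad(w)        # v_{k+1} = beta * v_k + grad f(w_k)
    w = w - gamma * v             # w_{k+1} = w_k - gamma * v_{k+1}

# Rule 2 ("heavy ball"): no buffer; reuse the previous iterate instead.
w2 = w_prev = np.array([1.0, 1.0])
for _ in range(200):
    w_next = w2 - gamma * grad(w2) + beta * (w2 - w_prev)
    w_prev, w2 = w2, w_next

print(w, w2)                      # identical iterates, both near the origin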

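Likewise, a minimal sketch of the RMSprop and Adam update rules from Part B on the same toy quadratic; all hyper-parameter values are assumed, not the lecture's. RMSprop divides each coordinate of the gradient by a running RMS of its recent values; Adam adds an exponential moving average of the gradient itself (momentum) plus bias correction of both zero-initialised estimates.

import numpy as np

A = np.diag([1.0, 10.0])               # toy quadratic f(w) = 0.5 w^T A w

def grad(w):
    return A @ w

gamma, eps = 0.01, 1e-8                # step size and stability term (assumed)

# RMSprop: scale each coordinate by a running RMS of its recent gradients.
alpha = 0.99
w, v = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(1000):
    g = grad(w)
    v = alpha * v + (1 - alpha) * g**2         # running second-moment estimate
    w = w - gamma * g / (np.sqrt(v) + eps)

# Adam: RMSprop plus momentum on the gradient and bias correction.
beta1, beta2 = 0.9, 0.999
w2, m, v2 = np.array([1.0, 1.0]), np.zeros(2), np.zeros(2)
for t in range(1, 1001):
    g = grad(w2)
    m = beta1 * m + (1 - beta1) * g            # first-moment (mean) estimate
    v2 = beta2 * v2 + (1 - beta2) * g**2       # second-moment estimate
    m_hat = m / (1 - beta1**t)                 # bias correction: the zero-initialised
    v_hat = v2 / (1 - beta2**t)                # estimates are too small early on
    w2 = w2 - gamma * m_hat / (np.sqrt(v_hat) + eps)

print(w, w2)   # both settle close to the origin, to within roughly the step size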
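
Part B also covers normalization layers. The sketch below (again assumed, not the lecture's code) shows the core computation using batch normalization as the example: standardise each feature across the batch, then rescale and shift with learnable per-feature parameters. Layer, instance, and group norm differ mainly in which axes are averaged over.

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); gamma, beta: learnable scale and shift per feature
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 5.0 + 3.0        # badly scaled activations
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))          # approximately 0 and 1 per feature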
