Gradient descent optimization in C++ - Part 1 - Theory
Gradient descent is a powerful numerical optimization technique used in many fields, such as #machinelearning and #AI. In fact, it is useful anywhere we need to find the set of parameters that minimizes the output of a function, even one whose form we may not know analytically.
In part 1 of this two-part series, I focus on the theory behind the gradient descent method and give some examples. In the next video, I look at how to implement this technique in C++ code.
In part 2 of this series, I show how I have approached implementing a simple gradient descent algorithm in C++. You can see that video here:
https://youtu.be/eyCq3cNFpMU
**********************************
You can also follow me on the QuantitativeBytes Facebook page at:
www.facebook.com/QuantitativeBytes
**********************************