Here we look at a visualization of optimization algorithms:
- SGD (Stochastic Gradient Descent)
- NAG (Nesterov Accelerated Gradient)
- Adagrad (Adaptive Gradient Algorithm)
- RMSProp (Root Mean Square Propagation: divides the gradient by a running average of its recent magnitude)
Missing from the comparison is L-BFGS (Limited-memory BFGS)...
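To make the update rules behind these animations concrete, here is a minimal sketch of each optimizer on an ill-conditioned 2D quadratic. The toy objective, learning rates, and step counts are illustrative choices, not values from the original visualization.

```python
import math

def grad(x):
    # Gradient of f(x, y) = x^2 + 10*y^2 (a simple ill-conditioned bowl).
    return [2.0 * x[0], 20.0 * x[1]]

def sgd(x, lr=0.02, steps=200):
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def momentum(x, lr=0.02, mu=0.9, steps=200):
    v = [0.0, 0.0]
    for _ in range(steps):
        g = grad(x)
        v = [mu * vi - lr * gi for vi, gi in zip(v, g)]  # accumulate velocity
        x = [xi + vi for xi, vi in zip(x, v)]
    return x

def nag(x, lr=0.02, mu=0.9, steps=200):
    v = [0.0, 0.0]
    for _ in range(steps):
        look = [xi + mu * vi for xi, vi in zip(x, v)]    # look-ahead point
        g = grad(look)                                    # gradient at look-ahead
        v = [mu * vi - lr * gi for vi, gi in zip(v, g)]
        x = [xi + vi for xi, vi in zip(x, v)]
    return x

def adagrad(x, lr=0.5, eps=1e-8, steps=200):
    cache = [0.0, 0.0]
    for _ in range(steps):
        g = grad(x)
        cache = [c + gi ** 2 for c, gi in zip(cache, g)]  # sum of ALL squared grads
        x = [xi - lr * gi / (math.sqrt(c) + eps)
             for xi, gi, c in zip(x, g, cache)]
    return x

def rmsprop(x, lr=0.05, decay=0.9, eps=1e-8, steps=200):
    cache = [0.0, 0.0]
    for _ in range(steps):
        g = grad(x)
        cache = [decay * c + (1 - decay) * gi ** 2        # running average instead
                 for c, gi in zip(cache, g)]
        x = [xi - lr * gi / (math.sqrt(c) + eps)
             for xi, gi, c in zip(x, g, cache)]
    return x

for opt in (sgd, momentum, nag, adagrad, rmsprop):
    print(opt.__name__, opt([3.0, 1.0]))
```

All five drive the iterate toward the minimum at the origin; the adaptive methods differ only in how they rescale the step per coordinate.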
Algorithms that scale step size based on the gradient quickly break symmetry and begin descent.
Due to the large initial gradient, velocity-based techniques shoot off and bounce around; Adagrad almost goes unstable for the same reason.
Algorithms that scale gradients/step sizes, like Adadelta and RMSProp, proceed more like accelerated SGD and handle large gradients with more stability.
NAG/Momentum again tend to explore, almost taking a different path.
Adadelta/Adagrad/RMSProp proceed like accelerated SGD.
The original post can be found here.