February 17, 2015

Visualizing Optimization Algorithms

Here we look at visualizations of several optimization algorithms:
- SGD (Stochastic Gradient Descent)
- Momentum
- NAG (Nesterov Accelerated Gradient)
- Adagrad (Adaptive Gradient Algorithm)
- Adadelta
- RMSProp (Divide the gradient by a running average of its recent magnitude)

What is missing is L-BFGS (Limited-memory BFGS)...
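
The animations themselves are not reproduced here, but the update rules behind the first few methods are short enough to sketch. Here is a minimal NumPy sketch (function names, learning rate, and momentum values are my own illustrative choices, not taken from the original post):

```python
import numpy as np

def sgd_step(params, grad, lr=0.01):
    # Plain SGD: step directly along the negative gradient.
    return params - lr * grad

def momentum_step(params, grad, velocity, lr=0.01, mu=0.9):
    # Momentum: accumulate a velocity vector and move along it.
    velocity = mu * velocity - lr * grad
    return params + velocity, velocity

def nag_step(params, grad_fn, velocity, lr=0.01, mu=0.9):
    # Nesterov Accelerated Gradient: evaluate the gradient at the
    # look-ahead point params + mu * velocity, then step as in momentum.
    grad = grad_fn(params + mu * velocity)
    velocity = mu * velocity - lr * grad
    return params + velocity, velocity

# Example: one NAG step on a stand-in quadratic f(p) = sum(p**2).
p, v = nag_step(np.array([1.0, 2.0]), lambda p: 2 * p, np.zeros(2))
```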



Long Valley

Algorithms without scaling based on gradient information really struggle to break symmetry here - SGD gets nowhere, and Nesterov Accelerated Gradient / Momentum exhibit oscillations until they build up velocity in the optimization direction.

Algorithms that scale step size based on the gradient quickly break symmetry and begin descent.
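
That scaling is the whole trick: each coordinate's step is divided by a running statistic of its own gradient magnitudes. A rough NumPy sketch of the Adagrad and RMSProp updates (the hyperparameter values are illustrative, not the ones used in the animations):

```python
import numpy as np

def adagrad_step(params, grad, cache, lr=0.1, eps=1e-8):
    # Adagrad: accumulate squared gradients and divide each coordinate's
    # step by the root of that sum, so directions with tiny gradients
    # (the valley floor) still get a usable step size.
    cache = cache + grad ** 2
    return params - lr * grad / (np.sqrt(cache) + eps), cache

def rmsprop_step(params, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    # RMSProp: the same per-coordinate scaling, but with a decaying
    # running average of squared gradients instead of a full sum.
    cache = decay * cache + (1 - decay) * grad ** 2
    return params - lr * grad / (np.sqrt(cache) + eps), cache
```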


Beale's function




Due to the large initial gradient, velocity-based techniques shoot off and bounce around - Adagrad almost goes unstable for the same reason.

Algorithms that scale gradients/step sizes, like Adadelta and RMSProp, proceed more like accelerated SGD and handle large gradients with more stability.
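
If you want to reproduce this surface yourself, Beale's function and its analytic gradient are easy to write down; here is one way in NumPy (the global minimum is 0 at (3, 0.5)):

```python
import numpy as np

def beale(x, y):
    # Beale's function: global minimum 0 at (3, 0.5), with very large
    # gradients away from the minimum.
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)

def beale_grad(x, y):
    # Analytic gradient, handy for driving the optimizers above.
    t1 = 1.5 - x + x * y
    t2 = 2.25 - x + x * y ** 2
    t3 = 2.625 - x + x * y ** 3
    dx = 2 * (t1 * (y - 1) + t2 * (y ** 2 - 1) + t3 * (y ** 3 - 1))
    dy = 2 * (t1 * x + t2 * 2 * x * y + t3 * 3 * x * y ** 2)
    return np.array([dx, dy])
```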



Saddle Point
Behavior around a saddle point.

NAG/Momentum again like to explore around, almost taking a different path.

Adadelta/Adagrad/RMSProp proceed like accelerated SGD.
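
The exact saddle surface used in the animation isn't specified, so as a stand-in here is the textbook saddle f(x, y) = x^2 - y^2 in NumPy:

```python
import numpy as np

def saddle(x, y):
    # Textbook saddle: curvature is positive along x and negative
    # along y, with a saddle point at the origin.
    return x ** 2 - y ** 2

def saddle_grad(x, y):
    # The gradient vanishes at (0, 0), and anywhere on the y = 0 axis it
    # has no component along y, the escape direction.
    return np.array([2.0 * x, -2.0 * y])
```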



---------------------------
The original post can be found here.

