
Chapter 4: PyTorch for Automatic Gradient Descent

Other Gradient Descent Optimization Algorithms

Luca Grillotti

Remember how we defined our optimiser?

learning_rate = 0.2
optimiser = torch.optim.SGD(params=list_parameters, lr=learning_rate)

PyTorch actually provides many other optimisers. Several of them often converge faster than plain SGD.

Among the most popular optimisers are Adam and RMSProp. How these optimisers work is beyond the scope of this course.

Adam

import torch
optimiser = torch.optim.Adam(params=list_parameters)
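
Whichever optimiser you pick, the training loop itself does not change: you still call zero_grad(), backward() and step() in the same way. Here is a minimal, self-contained sketch (the single parameter and the quadratic loss below are made up purely for illustration):

import torch

# Hypothetical parameter list; in your own code this would come from your model.
parameter = torch.tensor([1.0], requires_grad=True)
list_parameters = [parameter]

optimiser = torch.optim.Adam(params=list_parameters)  # default learning rate: 1e-3

# One optimisation step, identical to the SGD case:
loss = ((parameter - 3.0) ** 2).sum()  # toy quadratic loss
optimiser.zero_grad()                  # reset stored gradients
loss.backward()                        # compute d(loss)/d(parameter)
optimiser.step()                       # let Adam update the parameter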

RMSProp

import torch
optimiser = torch.optim.RMSprop(params=list_parameters)
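
Both Adam and RMSprop come with default hyperparameters (learning rate 1e-3 for Adam, 1e-2 for RMSprop), which is why lr was omitted above. You can still set the learning rate explicitly, just as with SGD. A small sketch, with a hypothetical list_parameters standing in for your model's parameters:

import torch

# Hypothetical parameter list, standing in for your model's parameters.
list_parameters = [torch.tensor([1.0], requires_grad=True)]

# Override the default learning rate if needed:
optimiser = torch.optim.RMSprop(params=list_parameters, lr=0.001)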

Exercise:

Try to replace SGD with these optimisers in your code. Do they produce better results?
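
One way to run this comparison (a sketch only, using a made-up quadratic loss instead of your actual model) is to pass the optimiser class as an argument, so that switching optimisers becomes a one-line change:

import torch

def train(optimiser_class, n_steps=100, **optimiser_kwargs):
    # Minimise the toy loss (parameter - 3)^2 with the chosen optimiser.
    parameter = torch.tensor([5.0], requires_grad=True)
    optimiser = optimiser_class(params=[parameter], **optimiser_kwargs)
    for _ in range(n_steps):
        loss = ((parameter - 3.0) ** 2).sum()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return parameter.item(), loss.item()

print(train(torch.optim.SGD, lr=0.2))  # the optimiser used so far
print(train(torch.optim.Adam))         # default hyperparameters
print(train(torch.optim.RMSprop))      # default hyperparameters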