Designing Optimizer for Deep Learning

Mazumder, Sudipta

Designing Optimizer for Deep Learning / by Sudipta Mazumder. - IIT Jodhpur, Department of Computer Science and Technology, 2023. - vii, 10 p. HB

Optimizers form the backbone of any convolutional network, as they are responsible for making the loss function converge faster. They do this by adjusting the weights and the learning rate of the algorithm, which reduces the loss and improves accuracy. Many optimizers have gained traction over the years, among which SGD and Adam are the most widely used; Adam has taken the lead because it mitigates the dying-gradient problem of SGD. However, there is still scope for improvement. In this work, we introduce a new algorithm that surpasses the performance of Adam by computing angular gradient information (cosine and tangent angles) at consecutive steps. The algorithm uses the gradient of the current step, the previous step, and the step before that. Because more information is presented to the optimizer at each update, it is better poised to take more accurate steps and converge faster. We have tested this approach on benchmark datasets and compared it with other state-of-the-art optimizers, obtaining superior results in almost every case.
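The abstract describes the idea only at a high level. Purely as an illustration, the minimal sketch below shows one way an optimizer could use the gradients from the current step and the two preceding steps, scaling the step size by the cosine of the angle between consecutive gradients. The class name, the exact scaling rule, and the omission of the tangent term are assumptions made for this sketch and are not taken from the thesis.

```python
import numpy as np


def cosine_between(a, b, eps=1e-8):
    """Cosine of the angle between two gradient vectors."""
    return float(a @ b) / (float(np.linalg.norm(a) * np.linalg.norm(b)) + eps)


class AngularGradientOptimizer:
    """Illustrative optimizer that remembers the two most recent gradients
    and scales each step by how consistently the last three gradients agree.
    This is a hypothetical sketch, not the thesis's exact update rule; the
    tangent-angle term mentioned in the abstract is omitted for brevity."""

    def __init__(self, lr=0.1):
        self.lr = lr
        self.g_prev = None   # gradient at step t-1
        self.g_prev2 = None  # gradient at step t-2

    def step(self, w, grad):
        scale = 1.0
        if self.g_prev is not None and self.g_prev2 is not None:
            # Cosine similarity of the two most recent consecutive gradient
            # pairs: values near +1 indicate a stable descent direction,
            # values near -1 indicate oscillation, so the step is damped.
            c1 = cosine_between(grad, self.g_prev)
            c2 = cosine_between(self.g_prev, self.g_prev2)
            scale = 0.5 * (1.0 + 0.5 * (c1 + c2))  # mapped into (0, 1]
        self.g_prev2, self.g_prev = self.g_prev, grad
        return w - self.lr * scale * grad


# Toy usage: minimise f(w) = ||w||^2, whose gradient is 2w.
opt = AngularGradientOptimizer(lr=0.1)
w = np.array([3.0, -2.0])
for _ in range(50):
    w = opt.step(w, 2.0 * w)
print(w)  # close to the minimiser at the origin
```

The point of the sketch is only to show where angular information enters the update: consecutive gradients that point in similar directions enlarge the effective step, while disagreeing gradients shrink it.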



Department of Computer Science and Technology
Optimizers
MTech Theses

006.31 / M476D