
Designing Optimizer for Deep Learning by Sudipta Mazumder

By: Mazumder, Sudipta
Material type: Text
Publication details: IIT Jodhpur, Department of Computer Science and Technology, 2023
Description: vii, 10 p. HB
DDC classification: 006.31 M476D

Optimizers form the backbone of any convolutional network, as they are responsible for making the loss function converge faster. They do this by updating the network's weights and adapting the learning rate, which reduces the loss and improves accuracy. Many optimizers have gained traction over the years, of which SGD and Adam are the most widely used; Adam has taken the lead because it mitigates the dying-gradient problem of SGD. However, there is still scope for improvement. In this work, we introduce a new algorithm that surpasses the performance of Adam by computing angular gradients (the cosine and tangent of the angle between gradients) at consecutive steps. The algorithm uses the gradient of the current step, the previous step, and the step before that. By presenting this additional information to the optimizer, it is better poised to make more accurate updates at faster convergence rates. We have tested this approach on benchmark datasets, compared it with other state-of-the-art optimizers, and obtained superior results in almost every case.
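To make the idea concrete, here is a minimal sketch of what an Adam-style update modulated by the angle between consecutive gradients could look like. This is an illustrative assumption, not the thesis' actual update rule: the function name angular_adam_sketch, the coefficient 0.5 * (1 + cos θ), and all hyperparameters are hypothetical, and only the cosine term (not the tangent term or the two-step history described in the abstract) is shown.

import numpy as np

def angular_adam_sketch(grad_fn, x0, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, steps=200):
    """Adam-style update whose step size is scaled by the cosine of the
    angle between the current and previous gradients (hypothetical sketch)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)          # first-moment estimate, as in Adam
    v = np.zeros_like(x)          # second-moment estimate, as in Adam
    g_prev = None                 # gradient from the previous step
    b1, b2 = betas
    for t in range(1, steps + 1):
        g = grad_fn(x)
        if g_prev is not None:
            # Cosine of the angle between consecutive gradients.
            cos_theta = g @ g_prev / (np.linalg.norm(g) * np.linalg.norm(g_prev) + eps)
            # Shrink the step when the gradient direction turns sharply,
            # keep it near 1 when consecutive gradients agree.
            angle_coeff = 0.5 * (1.0 + cos_theta)
        else:
            angle_coeff = 1.0
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
        v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
        x = x - lr * angle_coeff * m_hat / (np.sqrt(v_hat) + eps)
        g_prev = g
    return x

# Toy usage: minimise f(x) = ||x||^2, whose gradient is 2x.
x_min = angular_adam_sketch(lambda x: 2 * x, x0=[3.0, -2.0], lr=0.1)
print(x_min)  # should approach [0, 0]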
