Accelerated Gradient Descent via Long Steps

Authors: Benjamin Grimmer, Kevin Shu, Alex L. Wang

Abstract: Recently, Grimmer [1] showed that, for smooth convex optimization, gradient descent's state-of-the-art O(1/T) convergence guarantee can be improved by constant factors by periodically taking longer steps, and conjectured that an accelerated rate strictly faster than O(1/T) might be possible. Here we prove such a big-O gain, establishing gradient descent's first accelerated convergence rate in this setting. Namely, we prove an O(1/T^{1.02449}) rate for smooth convex minimization by utilizing a nonconstant, nonperiodic sequence of increasingly large stepsizes. It remains open whether one can achieve the O(1/T^{1.178}) rate conjectured by Das Gupta et al. [2] or the optimal O(1/T^2) rate of accelerated gradient methods. Our theory also yields big-O convergence rate accelerations from long steps for strongly convex optimization, similar to, but somewhat weaker than, those concurrently developed by Altschuler and Parrilo [3].
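The abstract's core mechanism is gradient descent whose stepsizes are not the constant 1/L but a schedule that interleaves occasional longer steps. The sketch below is a minimal, hedged illustration of that idea on a toy smooth convex quadratic; the specific long-step pattern used here is hypothetical and is not the paper's nonperiodic accelerated sequence.

```python
# Hedged sketch (NOT the paper's schedule): gradient descent with a
# nonconstant stepsize schedule versus the classic constant step 1/L.
# The long-step pattern [1, 1, 3] below is purely illustrative.

def gradient_descent(grad, L, x0, step_multipliers):
    """Iterate x_{k+1} = x_k - (h_k / L) * grad(x_k), with h_k from step_multipliers."""
    x = list(x0)
    for h in step_multipliers:
        g = grad(x)
        x = [xi - (h / L) * gi for xi, gi in zip(x, g)]
    return x

# Toy smooth convex problem: f(x) = 0.5 * (x1^2 + 10 * x2^2), so L = 10.
f = lambda x: 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)
grad = lambda x: [x[0], 10.0 * x[1]]
L = 10.0
x0 = [1.0, 1.0]
T = 30

# Baseline: constant stepsize h = 1/L for all T iterations.
x_const = gradient_descent(grad, L, x0, [1.0] * T)

# Illustrative schedule with an occasional longer step (multiplier 3);
# kept periodic here only for simplicity of the demo.
x_long = gradient_descent(grad, L, x0, [1.0, 1.0, 3.0] * (T // 3))

print(f(x_const), f(x_long))
```

On this toy instance the schedule with occasional long steps reaches a smaller objective value in the same number of iterations, illustrating the phenomenon the paper makes rigorous.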

Submitted to arXiv on 18 Sep. 2023
