TY - JOUR
T1 - Accelerating Level-Value Adjustment for the Polyak Stepsize
AU - Liu, Anbang
AU - Bragin, Mikhail A.
AU - Chen, Xi
AU - Guan, Xiaohong
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
PY - 2025/9
Y1 - 2025/9
AB - The Polyak stepsize has been widely used in subgradient methods for non-smooth convex optimization. However, computing this stepsize requires the optimal value, which is generally unknown, so dynamic estimates of the optimal value are usually needed. In this paper, to guarantee convergence, a series of level values is constructed to estimate the optimal value successively. This is achieved through a decision-guided procedure that solves a novel, easy-to-solve linear constraint satisfaction problem referred to as the “Polyak Stepsize Violation Detector” (PSVD). Once a violation is detected, the level value is recalculated. We rigorously establish convergence of both the level values and the objective function values. Furthermore, under our level-adjustment approach, computing an approximate subgradient at each iteration suffices for convergence. Empirical tests on convex optimization problems with diverse characteristics demonstrate the practical advantages of our approach over existing methods.
KW - Approximate subgradient method
KW - Convex optimization
KW - Non-smooth optimization
KW - Polyak stepsize
KW - Subgradient method
UR - https://www.scopus.com/pages/publications/105008719489
DO - 10.1007/s10957-025-02750-0
M3 - Article
AN - SCOPUS:105008719489
SN - 0022-3239
VL - 206
JO - Journal of Optimization Theory and Applications
JF - Journal of Optimization Theory and Applications
IS - 3
M1 - 71
ER -