Higher-order Accumulative Regularization Methods for Gradient Minimization

Oct 26, 2025
Yao Ji, Guanghui Lan
Abstract
We introduce a unified Accumulative Regularization (AR) framework that closes the gap between fast function-value residual convergence and slow gradient norm convergence in high-order convex optimization, systematically converting fast function-value convergence rates into matching convergence rates for the gradient norm.
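The abstract's central idea, turning function-value accuracy into gradient-norm accuracy via regularization, has a classical one-step analogue that is easy to illustrate. The sketch below is not the AR method itself; it is a minimal, hypothetical demonstration of the underlying regularization trick: for convex `f`, minimizing the regularized objective `f(x) + (sigma/2)||x - x0||^2` yields a point where the gradient of the original `f` has norm `sigma * ||x_hat - x0||`, which is small when `sigma` is small. All names (`f_grad`, `minimize_regularized`, `sigma`) are illustrative assumptions, not notation from the paper.

```python
import numpy as np

# Toy convex objective: f(x) = 0.5 x^T A x - b^T x, with gradient A x - b.
def f_grad(x, A, b):
    return A @ x - b

def minimize_regularized(A, b, x0, sigma, steps=5000):
    """Gradient descent on the regularized objective
    f(x) + (sigma/2) * ||x - x0||^2  (illustrative, not the AR scheme)."""
    L = np.linalg.eigvalsh(A).max() + sigma  # smoothness constant
    x = x0.copy()
    for _ in range(steps):
        g = f_grad(x, A, b) + sigma * (x - x0)  # regularized gradient
        x -= g / L                              # step size 1/L
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)          # symmetric positive definite
b = rng.standard_normal(5)
x0 = np.zeros(5)

x_hat = minimize_regularized(A, b, x0, sigma=1e-3)
# At the regularized minimizer, grad f(x_hat) = -sigma * (x_hat - x0),
# so the original gradient norm is on the order of sigma.
print(np.linalg.norm(f_grad(x_hat, A, b)))
```

The first-order optimality condition of the regularized problem forces the original gradient to be proportional to `sigma`; the paper's contribution, per the abstract, is a unified accumulative scheme that makes this conversion preserve the fast high-order rates rather than degrade them.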
Type: Publication
2025 INFORMS Annual Meeting — Talk