High-Order Accumulative Regularization for Gradient Minimization in Convex Programming

Nov 6, 2025
Yao Ji, Guanghui Lan
Abstract
High-order optimization methods achieve fast convergence for function-value residuals, but are often markedly slower at reducing the gradient norm. This paper introduces a unified Accumulative Regularization (AR) framework that closes this gap by systematically transforming fast function-value residual convergence rates into matching gradient norm convergence rates.
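The gap the abstract refers to can be seen from a standard smoothness inequality (a classical fact, not a result specific to the AR framework): for an $L$-smooth convex function $f$ with minimizer $x^\star$,

$$
\|\nabla f(x)\|^2 \le 2L\,\bigl(f(x) - f(x^\star)\bigr),
$$

so a function-value rate of $f(x_k) - f(x^\star) = O(k^{-p})$ directly yields only $\|\nabla f(x_k)\| = O(k^{-p/2})$, losing half the exponent. Regularization-based schemes aim to recover a gradient-norm rate matching the function-value rate.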
Type
Publication
arXiv preprint arXiv:2511.03723