High-Order Accumulative Regularization for Gradient Minimization in Convex Programming
Nov 6, 2025
Yao Ji
Guanghui Lan
Abstract
High-order optimization methods achieve fast convergence for function-value residuals, but often exhibit a significant gap when it comes to reducing the gradient norm. This paper introduces a unified Accumulative Regularization (AR) framework that closes this gap by systematically transforming fast function-value residual convergence rates into matching gradient norm convergence rates.
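As background, a classical and much simpler instance of this general idea is the well-known regularization trick for smooth convex minimization, sketched below; this is an illustrative reduction only, not the AR scheme of the paper, which handles high-order methods and sharper rates.

```latex
% Illustrative classical reduction (not the paper's AR framework):
% converting a function-value guarantee into a gradient-norm guarantee.
% Assume f is convex and L-smooth with minimizer x^\ast. Fix a center x_0
% and a regularization parameter \mu > 0, and define the regularized objective
\[
  f_\mu(x) \;:=\; f(x) + \tfrac{\mu}{2}\,\|x - x_0\|^2 ,
\]
% which is \mu-strongly convex and (L+\mu)-smooth. For any approximate
% minimizer \hat{x} of f_\mu, smoothness gives
\[
  \|\nabla f_\mu(\hat{x})\|^2 \;\le\; 2(L+\mu)\bigl(f_\mu(\hat{x}) - f_\mu^\ast\bigr),
  \qquad
  \|\nabla f(\hat{x})\| \;\le\; \|\nabla f_\mu(\hat{x})\| + \mu\,\|\hat{x} - x_0\|,
\]
% so a fast function-value rate on f_\mu translates into a gradient-norm
% bound on f once \mu is balanced against the target accuracy.
```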
Type
Publication
arXiv preprint arXiv:2511.03723

Authors
H. Milton Stewart Postdoctoral Fellow
I am an H. Milton Stewart Postdoctoral Fellow in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE) at Georgia Institute of Technology. My postdoctoral mentor is Prof. Guanghui (George) Lan.
I earned my Ph.D. in Industrial Engineering at Purdue University (2024). Prior to that, I received my B.S. (2016) and M.S. (2019) degrees from the School of Mathematical Sciences at Beijing Normal University.