Higher-order Accumulative Regularization Methods for Gradient Minimization
Abstract
We introduce a unified Accumulative Regularization (AR) framework that closes the gap between the fast convergence of function-value residuals and the slower convergence of gradient norms in high-order convex optimization. The framework systematically converts fast function-value convergence rates into matching convergence rates for the gradient norm.
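For background on how a regularization scheme can convert a function-value rate into a gradient-norm rate, the following is the standard first-order instance of the idea, sketched with generic symbols (the regularization weight δ, anchor x₀, and distance bound D are illustrative and are not taken from the talk; the AR framework itself concerns higher-order methods):

```latex
% A sketch of gradient minimization via regularization (generic, first-order).
% Add a small strongly convex term around an anchor point x_0:
\[
  f_\delta(x) \;:=\; f(x) + \frac{\delta}{2}\,\|x - x_0\|^2 .
\]
% The triangle inequality transfers a small gradient of f_\delta to f:
\[
  \|\nabla f(x)\| \;\le\; \|\nabla f_\delta(x)\| + \delta\,\|x - x_0\| .
\]
% Comparing f_\delta at its minimizer x_\delta^* and at a minimizer x^* of f
% shows \|x_\delta^* - x_0\| \le \|x^* - x_0\| =: D. Hence, with
% \delta \approx \varepsilon / D, driving \|\nabla f_\delta\| \le \varepsilon
% (fast, since f_\delta is \delta-strongly convex) yields
% \|\nabla f\| = O(\varepsilon): a residual-type guarantee for the regularized
% problem becomes a gradient-norm guarantee for the original one.
```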
2025 INFORMS Annual Meeting — Talk

Authors
I am an H. Milton Stewart Postdoctoral Fellow in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE) at Georgia Institute of Technology. My postdoctoral mentor is Prof. Guanghui (George) Lan.
I earned my Ph.D. in Industrial Engineering at Purdue University (2024). Before that, I received my B.S. (2016) and M.S. (2019) degrees from the School of Mathematical Sciences at Beijing Normal University.