High-Order Accumulative Regularization for Gradient Minimization in Convex Programming
A unified high-order framework that closes the gap between fast function-value residual convergence and slow gradient norm convergence in convex optimization.
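For background on the gap this framework targets, here is a hedged sketch of the standard first-order (p = 1) instance, assumed context rather than material from the talk itself: for an L-smooth convex f, accelerated gradient methods drive the function-value residual down at the fast rate, but converting that bound through smoothness only yields a slow gradient-norm guarantee, one order short of the optimal rate.

```latex
% First-order illustration of the function-value / gradient-norm gap
% (standard background sketch, not drawn from the talk abstract).
% Accelerated gradient on an L-smooth convex f, with R = ||x_0 - x^*||:
\[
  f(x_k) - f^\star \;=\; O\!\left(\frac{L R^2}{k^2}\right),
  \qquad R = \|x_0 - x^\star\| .
\]
% Converting via the smoothness inequality ||grad f(x)||^2 <= 2L (f(x) - f^*)
% gives only the slow O(1/k) gradient-norm rate:
\[
  \|\nabla f(x_k)\| \;\le\; \sqrt{2L\bigl(f(x_k) - f^\star\bigr)}
  \;=\; O\!\left(\frac{L R}{k}\right),
\]
% whereas the optimal first-order rate for gradient minimization is faster:
\[
  \|\nabla f(x_k)\| \;=\; O\!\left(\frac{L R}{k^2}\right).
\]
% Closing this gap (here via regularization, in the first-order case) is the
% phenomenon the high-order framework above generalizes.
```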
Talk at the 2025 INFORMS Annual Meeting on a unified high-order framework for optimal gradient minimization in convex optimization.