High-order optimization methods achieve fast convergence of the function-value residual, but their guarantees for reducing the gradient norm are often considerably weaker. This paper introduces a unified Accumulative Regularization (AR) framework that closes this gap by systematically converting fast function-value convergence rates into matching gradient-norm convergence rates. For composite convex problems, the proposed AR methods attain the optimal rate for gradient-norm minimization while matching the best-known rates for the function-value residual. The framework further extends to uniformly convex settings, yielding linear, superlinear, and sublinear convergence of the gradient norm under varying lower-curvature conditions. The paper also presents parameter-free algorithms that require no prior knowledge of problem-specific parameters, such as the Lipschitz constant of the p-th-order derivative, the initial optimality gap, or the uniform convexity parameter, and that support inexact solutions at each high-order step.
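
For intuition in the first-order case, the following is a standard smoothness argument given here only as an illustration of the gap; it is not taken from the paper and assumes an L-smooth objective f with minimum value f^\star. A small function-value residual yields only a square-root bound on the gradient norm:

% One gradient step of length 1/L from x_k cannot descend below the minimum value:
%   f^\star \le f\bigl(x_k - \tfrac{1}{L}\nabla f(x_k)\bigr) \le f(x_k) - \tfrac{1}{2L}\|\nabla f(x_k)\|^2,
% and rearranging gives
\[
  \|\nabla f(x_k)\| \;\le\; \sqrt{2L\bigl(f(x_k) - f^\star\bigr)},
\]
% so an O(1/k^2) function-value rate translates into only an O(1/k) gradient-norm rate,
% whereas methods designed for gradient-norm minimization converge faster.
% This is the gap that regularization-based conversions, such as the AR framework above, aim to close.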