We propose an approach to normalizing the excitation of the identification loop regressor constructed via the dynamic regressor extension and mixing (DREM) procedure. With a constant estimation loop gain, this approach yields the same upper bound on the parametric identification error for scalar regressors with different degrees of excitation, a property of significant practical value. The proposed approach is compared with the well-known amplitude normalization of the regressor, and it is shown that the classical normalization method does not possess this property. To validate the theoretical conclusions, results of comparative numerical simulation are presented for the classical gradient estimation loop, loops with amplitude normalization of the regressor, and loops with the proposed excitation normalization of the regressor.
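To make the comparison concrete, below is a minimal simulation sketch (not the authors' code) contrasting a classical scalar gradient estimator, its amplitude-normalized variant, and a generic excitation-dependent normalization. The regressor signals, the gain gamma, and the particular normalization law 1/(1 + integral of phi^2) are illustrative assumptions, since the abstract does not state the proposed normalization law; the sketch only illustrates how the final estimation error of the classical and amplitude-normalized loops depends on the regressor's excitation level.

```python
# Illustrative sketch only: signals, gain, and the excitation-normalization
# law below are assumptions, not the paper's actual method.
import numpy as np

def simulate(phi, theta=2.0, gamma=50.0, dt=1e-3, T=5.0, mode="plain"):
    """Euler integration of d(theta_hat)/dt = gamma * n(t) * phi * (y - phi * theta_hat)
    for the scalar regression y = phi * theta."""
    n_steps = int(T / dt)
    t = np.arange(n_steps) * dt
    theta_hat = 0.0
    excitation = 0.0          # running integral of phi^2: a simple excitation measure
    err = np.empty(n_steps)
    for k in range(n_steps):
        p = phi(t[k])
        y = p * theta
        excitation += p * p * dt
        if mode == "plain":
            n = 1.0                        # classical gradient loop, no normalization
        elif mode == "amplitude":
            n = 1.0 / (1.0 + p * p)        # classical amplitude normalization
        else:                              # "excitation": an assumed excitation-based law
            n = 1.0 / (1.0 + excitation)
        theta_hat += dt * gamma * n * p * (y - p * theta_hat)
        err[k] = theta - theta_hat
    return t, err

# Two scalar regressors with clearly different excitation levels
weak = lambda t: 0.1 * np.sin(t)
strong = lambda t: 2.0 * np.sin(t)

for mode in ("plain", "amplitude", "excitation"):
    e_weak = simulate(weak, mode=mode)[1][-1]
    e_strong = simulate(strong, mode=mode)[1][-1]
    print(f"{mode:>10}: final error weak={e_weak:.4f}, strong={e_strong:.4f}")
```

Running this sketch shows the qualitative effect the abstract describes: with a fixed gain, the plain and amplitude-normalized loops converge much more slowly for the weakly excited regressor, so their errors at a fixed time differ strongly between the two regressors, whereas an excitation-dependent normalization reduces this disparity.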