This research addresses the problem of correctly initializing and subsequently adjusting the learning rate of a neural network. The learning rate is one of the main hyperparameters, and choosing it well accelerates the convergence of the training process. Known techniques such as time-based decay, step decay, and exponential decay initialize the learning rate manually and then reduce it in proportion to some schedule. In contrast, this paper proposes to base the adjustment on the excitation level of the regressor, i.e., the output amplitude of the previous network layer. Formulas based on the recursive least squares method are derived to calculate the learning rate for each network layer, and their convergence is proved. With these formulas, the initial learning rate can be chosen arbitrarily, and the rate can not only decrease but also increase when the regressor amplitude becomes lower. Experiments are conducted on an image recognition task using multilayer networks and the MNIST database. For networks of various structures, the proposed method significantly reduces the number of training epochs compared with backpropagation using a constant learning rate.
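To illustrate the idea, the following is a minimal sketch of a recursive-least-squares-style scalar learning-rate update driven by the regressor's excitation level; the exact per-layer formulas of the paper may differ, and the function name, the forgetting factor, and the use of the mean squared amplitude as the excitation measure are assumptions made here for illustration only.

```python
import numpy as np

def rls_learning_rate(prev_rate, regressor, forgetting=0.99):
    """Illustrative RLS-style learning-rate update for one layer.

    prev_rate  : learning rate from the previous step (any positive initial value)
    regressor  : output of the previous layer, i.e. the input to the current layer
    forgetting : forgetting factor lambda in (0, 1]; with lambda < 1 the rate can
                 grow again when the regressor amplitude drops (hypothetical choice)
    """
    # Excitation level of the regressor: mean squared output amplitude (assumption).
    excitation = float(np.mean(np.asarray(regressor) ** 2))
    # Scalar RLS gain recursion: P_t = P_{t-1} / (lambda + P_{t-1} * ||x_t||^2).
    # Large excitation shrinks the rate; small excitation with lambda < 1 lets it recover.
    return prev_rate / (forgetting + prev_rate * excitation)

# Example: the rate decreases under strong excitation and increases again once
# the regressor amplitude becomes low.
rate = 1.0
rate = rls_learning_rate(rate, np.ones(100) * 2.0)   # strong excitation -> rate drops
rate = rls_learning_rate(rate, np.ones(100) * 0.01)  # weak excitation  -> rate recovers
```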