The learning rate is the value by which weights are updated during artificial neural network training. The learning rate tells us how big a step to take toward minimizing the error.
It is also called the step size.
If the learning rate is small, training the artificial neural network will take a long time.

If the learning rate is high, the minimum of the error may be overshot.

An optimal learning rate ensures that the minimum loss is not overshot and that training does not take forever, as the sketch below illustrates.
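
Here is a minimal sketch of this effect in Python, assuming a toy loss function loss(w) = w² (whose gradient is 2w) and three hypothetical learning rate values:

```python
def gradient(w):
    return 2 * w  # derivative of the toy loss(w) = w**2, minimized at w = 0

# Three hypothetical learning rates: too small, too large, and reasonable.
for learning_rate in (0.01, 1.1, 0.1):
    w = 5.0  # starting weight
    for step in range(20):
        w = w - learning_rate * gradient(w)
    print(f"learning_rate={learning_rate}: weight after 20 steps = {w:.4f}")

# learning_rate=0.01 -> weight is still far from 0 (training is slow)
# learning_rate=1.1  -> weight grows in magnitude (the minimum is overshot)
# learning_rate=0.1  -> weight is close to 0 (near the minimum)
```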

Mathematical Implementation of New Weight
new_weight = old_weight - learning_rate * gradient

The gradient tells us in which direction to move the weight so that the loss decreases toward a minimum, and it is calculated as the derivative of the error with respect to the weight:

gradient = Δ error / Δ weight

where Δ means "change in".
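
A minimal sketch of a single weight update, assuming a hypothetical error function and estimating the gradient as Δ error / Δ weight using a small finite difference:

```python
def error(weight):
    return (weight - 3.0) ** 2  # hypothetical error, minimized at weight = 3

old_weight = 0.0
learning_rate = 0.1
delta = 1e-6  # a small change in the weight

# gradient = Δ error / Δ weight
gradient = (error(old_weight + delta) - error(old_weight)) / delta

# new_weight = old_weight - learning_rate * gradient
new_weight = old_weight - learning_rate * gradient
print(new_weight)  # ≈ 0.6, a step from 0.0 toward the minimum at 3.0
```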
Thanks for reading this post.