Learning Rate in Deep Learning

The learning rate is the value that controls how much the weights are updated during artificial neural network training. Its value tells us how big a step we take towards minimizing the error.

It is also called the step size.

If the learning rate is small, training the artificial neural network will take a long time.

[Figure: Small learning rate]

If the learning rate is high, the update step may overshoot the minimum of the error.

[Figure: High learning rate]

An optimal learning rate ensures that the minimum loss is not overshot and that training does not take forever.

[Figure: Optimal learning rate]
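
To make these three cases concrete, here is a minimal sketch in Python. The quadratic error function error(w) = w**2 and the specific learning rate values are illustrative assumptions, not taken from any library; the sketch runs the same number of update steps with a small, a high, and a roughly optimal learning rate:

# Minimal illustration: minimizing error(w) = w**2, whose gradient is 2*w.
def train(learning_rate, steps=20, start=5.0):
    w = start
    for _ in range(steps):
        gradient = 2 * w              # derivative of w**2 with respect to w
        w = w - learning_rate * gradient
    return w, w ** 2                  # final weight and final error

for lr in (0.01, 1.05, 0.1):          # small, high, roughly optimal
    w, err = train(lr)
    print(f"learning rate {lr}: final weight {w:.4f}, final error {err:.4f}")

The small rate barely reduces the error in 20 steps, the high rate makes the weight oscillate and grow because each step overshoots the minimum, and the moderate rate converges close to the minimum.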

Mathematical Formula for the New Weight

new_weight = old_weight - learning_rate * gradient

gradient - tells us in which direction to change the weight so that the loss moves towards a minimum; it is calculated as the derivative of the error function with respect to the weight:
gradient = Δ error / Δ weight
Δ - change in
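
This update rule can be applied directly in code. The following is a minimal sketch in Python, where the error function is a made-up example with its minimum at w = 3, and the gradient is approximated with the finite difference Δ error / Δ weight from above:

def error(w):
    return (w - 3.0) ** 2             # hypothetical error, minimum at w = 3

def gradient(f, w, delta=1e-6):
    # gradient = Δ error / Δ weight (finite-difference approximation)
    return (f(w + delta) - f(w)) / delta

old_weight = 0.0
learning_rate = 0.1
new_weight = old_weight - learning_rate * gradient(error, old_weight)
print(new_weight)                     # ≈ 0.6: moved from 0.0 towards the minimum at 3.0

Repeating this step many times, with a well-chosen learning rate, moves the weight step by step towards the minimum of the error.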

Thanks for reading this post.

