Why doesn't the learning rate (LR) go below 1e-08 in pytorch?
I am training a model. To overcome overfitting I have tried various things (optimization tweaks, data augmentation, etc.). I use a learning-rate schedule (I tried both SGD and Adam): when the validation loss plateaus (I also tried a step schedule), the learning rate is decreased by a factor, but it never goes below 1e-08, and my model's validation metrics get stuck after that point. I tried passing a smaller epsilon to Adam, but it still got stuck at LR 1e-08. I also pass a weight decay, which doesn't change the situation; neither did setting amsgrad to True.
I did some research; people suggest that the Adam optimizer has inherent problems, but nothing is said about this learning-rate floor, and every discussion adds that with SGD there is no such problem.
Why is this? Is it a bug, or is it designed this way because the authors consider anything smaller to be a meaninglessly small value? It seems a smaller learning rate would really help for my dataset, because everything looks fine until the learning rate drops to 1e-08.
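For context, here is roughly how the schedule is wired up (a simplified sketch; the model, the training/evaluation helpers, and the hyperparameter values are placeholders, not my exact code):
from torch import nn, optim

model = nn.Linear(10, 1)                        # placeholder model
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
# Cut the LR by a factor of 10 whenever the validation loss plateaus
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                 factor=0.1, patience=5)

for epoch in range(100):
    train_one_epoch(model, optimizer)           # hypothetical helper
    val_loss = evaluate(model)                  # hypothetical helper
    scheduler.step(val_loss)                    # LR: 1e-3 -> 1e-4 -> ... -> 1e-8, then it stops decreasing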
optimization deep-learning pytorch gradient-descent
asked Mar 6 at 15:23, edited Mar 6 at 15:28 – dusa
2 Answers
Personally I'm not aware of a lower limit on the learning rate (other than 0.0). But you can achieve the effect of a lower learning rate by scaling the loss down before computing the backward pass:
outputs = model(batch)
loss = criterion(outputs, targets)
# Dividing the loss divides every gradient by 100, which for plain SGD
# is equivalent to lowering the learning rate by a factor of 100
loss = loss / 100
optimizer.zero_grad()
loss.backward()
optimizer.step()
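One caveat: adaptive optimizers such as Adam rescale the update using running gradient statistics, so a scaled loss does not translate into a proportionally smaller step the way it does for plain SGD; the trick is most predictable with SGD.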
answered Mar 6 at 16:13 – Richard

Indeed a nice trick :) – mr_mo, Mar 6 at 19:24
Hey thanks, it is a nice trick! – dusa, Mar 7 at 0:16
Richard's workaround should work pretty well, but I have also gotten an official answer, in case anyone would like to know.
Setting a smaller value for the ReduceLROnPlateau scheduler's eps parameter (not Adam's) worked. From the docs:
eps (float) – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
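For example (a minimal sketch; the optimizer settings and the other scheduler values are just illustrative):
from torch import nn, optim

model = nn.Linear(10, 1)                        # placeholder model
optimizer = optim.Adam(model.parameters(), lr=1e-3)
# eps (default 1e-8) is the smallest LR change the scheduler will apply;
# with a smaller eps the LR can keep decaying below 1e-08
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                 factor=0.1, patience=5,
                                                 eps=1e-12)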
answered Mar 7 at 0:19 – dusa

That's good to know, thanks for the update. – Richard, 2 hours ago