EarlyStopping

class pytorch_lightning.callbacks.EarlyStopping(monitor='early_stop_on', min_delta=0.0, patience=3, verbose=False, mode='auto', strict=True)

Bases: pytorch_lightning.callbacks.base.Callback
Monitor a metric and stop training when it stops improving.
Parameters

- monitor (str) – quantity to be monitored. Default: 'early_stop_on'.
- min_delta (float) – minimum change in the monitored quantity to qualify as an improvement; an absolute change of less than min_delta counts as no improvement. Default: 0.0.
- patience (int) – number of validation epochs with no improvement after which training will be stopped. Default: 3.
- mode (str) – one of {'auto', 'min', 'max'}. In 'min' mode, training stops when the monitored quantity has stopped decreasing; in 'max' mode, when it has stopped increasing; in 'auto' mode, the direction is inferred automatically from the name of the monitored quantity.

  Warning: setting mode='auto' has been deprecated in v1.1 and will be removed in v1.3.

- strict (bool) – whether to crash the training if monitor is not found in the validation metrics. Default: True.
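The interaction of min_delta, patience, and mode can be illustrated with a small standalone sketch. This is plain Python, not the library's implementation; the function name should_stop and its history argument are inventions for illustration only:

```python
# Hypothetical sketch of the early-stopping rule described above.
# Not pytorch_lightning's actual implementation.

def should_stop(history, min_delta=0.0, patience=3, mode="min"):
    """Return True if the last `patience` epochs brought no improvement.

    history: list of monitored values, one per validation epoch.
    An improvement is a change of more than `min_delta` in the
    direction implied by `mode` ('min' or 'max').
    """
    if mode == "min":
        improved = lambda new, best: new < best - min_delta
        best = float("inf")
    elif mode == "max":
        improved = lambda new, best: new > best + min_delta
        best = float("-inf")
    else:
        raise ValueError("mode must be 'min' or 'max'")

    wait = 0  # consecutive epochs without improvement
    for value in history:
        if improved(value, best):
            best = value
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return True
    return False
```

For example, with patience=3 and mode='min', a loss curve of [1.0, 0.9, 0.91, 0.92, 0.93] triggers a stop (three epochs without beating 0.9), while a steadily decreasing curve never does.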
Raises

- MisconfigurationException – if mode is none of "min", "max", and "auto".
- RuntimeError – if the metric monitor is not available.
Example:

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import EarlyStopping
>>> early_stopping = EarlyStopping('val_loss')
>>> trainer = Trainer(callbacks=[early_stopping])
on_load_checkpoint(callback_state)

Called when loading a model checkpoint; use it to reload state.
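A minimal sketch of how a callback's state can round-trip through a checkpoint dict. This is plain Python for illustration; the class name and the attribute names wait_count and best_score are assumptions, not necessarily the library's internal names:

```python
# Hypothetical sketch of callback state round-tripping through a checkpoint.
# Attribute names are illustrative, not pytorch_lightning internals.

class EarlyStoppingSketch:
    def __init__(self):
        self.wait_count = 0              # epochs without improvement so far
        self.best_score = float("inf")   # best monitored value seen

    def on_save_checkpoint(self):
        # Return a plain dict so it can be serialized inside the checkpoint.
        return {"wait_count": self.wait_count, "best_score": self.best_score}

    def on_load_checkpoint(self, callback_state):
        # Restore the counters so early stopping resumes where it left off.
        self.wait_count = callback_state["wait_count"]
        self.best_score = callback_state["best_score"]
```

Saving from one instance and loading into a fresh one restores the same stopping state, which is what allows training to resume mid-run without resetting the patience counter.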