- class pytorch_lightning.callbacks.EarlyStopping(monitor=None, min_delta=0.0, patience=3, verbose=False, mode='min', strict=True, check_finite=True, stopping_threshold=None, divergence_threshold=None, check_on_train_epoch_end=True)
Monitor a metric and stop training when it stops improving.
Parameters:
- patience (int) – number of checks with no improvement after which training will be stopped. Under the default configuration, one check happens after every training epoch. However, the frequency of validation can be modified by setting various parameters on the Trainer, for example check_val_every_n_epoch and val_check_interval.

  Note: the patience parameter counts the number of validation checks with no improvement, not the number of training epochs. Therefore, with check_val_every_n_epoch=10 and patience=3, the trainer will perform at least 40 training epochs before being stopped (see the sketch after the parameter list).
- mode (str) – one of 'min', 'max'. In 'min' mode, training will stop when the quantity monitored has stopped decreasing; in 'max' mode it will stop when the quantity monitored has stopped increasing.
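A minimal sketch of the patience arithmetic from the note above, assuming an otherwise standard setup (the LightningModule and the trainer.fit call are omitted):

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import EarlyStopping
>>> # One check per 10 training epochs; patience=3 tolerates three
>>> # consecutive checks without improvement, so stopping can happen no
>>> # earlier than the fourth check, i.e. epoch 40.
>>> early_stopping = EarlyStopping(monitor='val_loss', patience=3)
>>> trainer = Trainer(callbacks=[early_stopping], check_val_every_n_epoch=10)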
Raises:
- MisconfigurationException – If mode is none of 'min' or 'max'.
- RuntimeError – If the metric monitor is not available.
Example:
>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import EarlyStopping
>>> early_stopping = EarlyStopping('val_loss')
>>> trainer = Trainer(callbacks=[early_stopping])
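A hedged variant of the example above, combining mode with the threshold arguments from the signature; 'val_acc' is a hypothetical metric logged by the LightningModule, and the threshold values are arbitrary:

>>> from pytorch_lightning.callbacks import EarlyStopping
>>> # Accuracy improves upward, so use mode='max'.
>>> early_stopping = EarlyStopping(
...     monitor='val_acc',
...     mode='max',
...     patience=3,
...     stopping_threshold=0.99,    # stop once val_acc reaches 0.99
...     divergence_threshold=0.05,  # stop immediately if val_acc falls below 0.05
...     check_finite=True,          # stop if val_acc becomes NaN or infinite
... )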
- on_load_checkpoint(trainer, pl_module, callback_state)
Called when loading a model checkpoint, use to reload state.

Note: on_load_checkpoint won't be called with an undefined state. If your on_load_checkpoint hook behavior doesn't rely on a state, you will still need to override on_save_checkpoint to return a dummy state.

- Return type: None
- on_save_checkpoint(trainer, pl_module, checkpoint)
Called when saving a model checkpoint, use to persist state.
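A hedged sketch of the note above for a custom callback, written against the hook signatures shown here; MyCallback and its state key are hypothetical. Returning at least a dummy dict from on_save_checkpoint is what allows on_load_checkpoint to be called on restore:

>>> from pytorch_lightning.callbacks import Callback
>>> class MyCallback(Callback):
...     def __init__(self):
...         self.wait_count = 0
...     def on_save_checkpoint(self, trainer, pl_module, checkpoint):
...         # Persist state; return a dummy dict even if there is nothing
...         # meaningful to save, otherwise on_load_checkpoint is skipped.
...         return {'wait_count': self.wait_count}
...     def on_load_checkpoint(self, trainer, pl_module, callback_state):
...         # Reload the state persisted above.
...         self.wait_count = callback_state['wait_count']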
- on_train_epoch_end(trainer, pl_module)
Called when the train epoch ends.
To access all batch outputs at the end of the epoch, either:
- Implement training_epoch_end in the LightningModule and access outputs via the module, OR
- Cache data across train batch hooks inside the callback implementation to post-process in this hook (see the sketch below).
- Return type: None
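A hedged sketch of the second option, written against the hook signatures shown here; the callback name is hypothetical, and outputs['loss'] assumes training_step returns a dict with a 'loss' key (the exact structure of outputs varies with how training_step is written):

>>> from pytorch_lightning.callbacks import Callback
>>> class LossHistory(Callback):
...     def __init__(self):
...         self.batch_losses = []
...     def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
...         # Cache a per-batch value across train batch hooks.
...         self.batch_losses.append(outputs['loss'].detach().cpu())
...     def on_train_epoch_end(self, trainer, pl_module):
...         # Post-process the cached values when the epoch ends.
...         mean_loss = sum(self.batch_losses) / max(len(self.batch_losses), 1)
...         print(f'epoch {trainer.current_epoch}: mean train loss {mean_loss:.4f}')
...         self.batch_losses.clear()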