Timer

class pytorch_lightning.callbacks.Timer(duration=None, interval=Interval.step, verbose=True)[source]

Bases: pytorch_lightning.callbacks.base.Callback

The Timer callback tracks the time spent in the training, validation, and test loops and interrupts the Trainer if the given time limit for the training loop is reached.

Parameters
  • duration (Union[str, timedelta, Dict[str, int], None]) – A string in the format DD:HH:MM:SS (days, hours, minutes, seconds), a datetime.timedelta, or a dict of keyword arguments that can be passed to datetime.timedelta.

  • interval (str) – Determines if the interruption happens on epoch level or mid-epoch. Can be either "epoch" or "step".

  • verbose (bool) – Set this to False to suppress logging messages.

Raises

MisconfigurationException – If interval is not one of the supported choices.

Example::

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Timer

# stop training after 12 hours
timer = Timer(duration="00:12:00:00")

# or provide a datetime.timedelta
from datetime import timedelta
timer = Timer(duration=timedelta(weeks=1))

# or provide a dictionary
timer = Timer(duration=dict(weeks=4, days=2))

# force training to stop after given time limit
trainer = Trainer(callbacks=[timer])

# query training/validation/test time (in seconds)
timer.time_elapsed("train")
timer.start_time("validate")
timer.end_time("test")
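For context, a minimal end-to-end sketch of wiring the callback into a fit run and querying it afterwards. MyModel and train_loader are hypothetical placeholders for your own LightningModule and DataLoader:

# hedged sketch: train under a 2-hour wall-clock budget, then inspect the timer
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Timer

timer = Timer(duration=dict(hours=2), interval="step")  # check the limit after every step
trainer = Trainer(callbacks=[timer], max_epochs=100)
trainer.fit(MyModel(), train_loader)  # MyModel and train_loader are assumed to exist

# after fit() returns, see how much of the budget was used
print(f"trained for {timer.time_elapsed('train'):.0f}s, "
      f"{timer.time_remaining('train'):.0f}s of the limit remaining")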

end_time(stage=RunningStage.TRAINING)[source]

Return the end time of a particular stage (in seconds).

Return type

Optional[float]

load_state_dict(state_dict)[source]

Called when loading a checkpoint. Implement this to reload the callback state from the callback's state_dict.

Parameters

state_dict (Dict[str, Any]) – the callback state returned by state_dict.

Return type

None
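As a rough sketch of how this pairs with state_dict(), the two methods can round-trip the timer's state manually; in practice the Trainer does this automatically when saving and loading checkpoints, and the file name timer_state.pt below is arbitrary:

# hedged sketch: persist and restore the Timer's internal state by hand
import torch
from pytorch_lightning.callbacks import Timer

timer = Timer(duration="00:12:00:00")
state = timer.state_dict()            # dictionary holding the callback's state
torch.save(state, "timer_state.pt")   # store alongside other artifacts

restored = Timer(duration="00:12:00:00")
restored.load_state_dict(torch.load("timer_state.pt"))  # continue with the prior elapsed time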

on_fit_start(trainer, *args, **kwargs)[source]

Called when fit begins.

Return type

None

on_test_end(trainer, pl_module)[source]

Called when the test ends.

Return type

None

on_test_start(trainer, pl_module)[source]

Called when the test begins.

Return type

None

on_train_batch_end(trainer, *args, **kwargs)[source]

Called when the train batch ends.

Return type

None

on_train_end(trainer, pl_module)[source]

Called when the train ends.

Return type

None

on_train_epoch_end(trainer, *args, **kwargs)[source]

Called when the train epoch ends.

To access all batch outputs at the end of the epoch, either:

  1. Implement training_epoch_end in the LightningModule and access outputs via the module OR

  2. Cache data across train batch hooks inside the callback implementation to post-process in this hook (a minimal sketch follows this entry).

Return type

None
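A hedged sketch of option 2, caching per-batch data inside a callback and post-processing it in this hook; CacheLoss is an illustrative callback unrelated to Timer, and the handling of outputs depends on what your training_step returns:

# sketch: cache per-batch losses in a callback and summarize them at epoch end
from pytorch_lightning.callbacks import Callback

class CacheLoss(Callback):
    def __init__(self):
        self.batch_losses = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, *args, **kwargs):
        # 'outputs' carries what training_step returned; depending on the Lightning
        # version it may be the raw loss tensor or a dict with a "loss" key
        loss = outputs["loss"] if isinstance(outputs, dict) else outputs
        self.batch_losses.append(float(loss))

    def on_train_epoch_end(self, trainer, pl_module):
        if self.batch_losses:
            mean_loss = sum(self.batch_losses) / len(self.batch_losses)
            print(f"epoch {trainer.current_epoch}: mean train loss {mean_loss:.4f}")
        self.batch_losses.clear()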

on_train_start(trainer, pl_module)[source]

Called when the train begins.

Return type

None

on_validation_end(trainer, pl_module)[source]

Called when the validation loop ends.

Return type

None

on_validation_start(trainer, pl_module)[source]

Called when the validation loop begins.

Return type

None

start_time(stage=RunningStage.TRAINING)[source]

Return the start time of a particular stage (in seconds).

Return type

Optional[float]

state_dict()[source]

Called when saving a checkpoint. Implement this to generate the callback's state_dict.

Return type

Dict[str, Any]

Returns

A dictionary containing callback state.

time_elapsed(stage=RunningStage.TRAINING)[source]

Return the time elapsed for a particular stage (in seconds).

Return type

float

time_remaining(stage=RunningStage.TRAINING)[source]

Return the time remaining for a particular stage (in seconds).

Return type

Optional[float]
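A hedged sketch of querying the remaining budget while training is running, from a second callback that simply keeps a reference to the Timer; BudgetMonitor is an illustrative name, not part of the library:

# sketch: report the remaining time budget at the end of every training epoch
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Callback, Timer

timer = Timer(duration=dict(hours=4))

class BudgetMonitor(Callback):
    def __init__(self, timer):
        self.timer = timer

    def on_train_epoch_end(self, trainer, *args, **kwargs):
        remaining = self.timer.time_remaining("train")  # None if no duration was set
        if remaining is not None:
            print(f"epoch {trainer.current_epoch}: {remaining / 60:.1f} min of the budget left")

trainer = Trainer(callbacks=[timer, BudgetMonitor(timer)])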