Model Hooks¶
There are cases when you might want to do something different at different parts of the training/validation loop. To enable a hook, simply override the method in your LightningModule and the trainer will call it at the correct time.
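For example, a minimal sketch (the hook name comes from the list below; the surrounding model definition is illustrative only):

    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def on_train_start(self):
            # overridden hook: the trainer calls this once, at the beginning of training
            print("training is starting")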
Contributing
If there's a hook you'd like to add, simply:

1. Fork PyTorchLightning.
2. Add the hook to pytorch_lightning.core.hooks.ModelHooks.
3. Add it in the correct place in pytorch_lightning.trainer where it should be called.
Hooks lifecycle¶
Training set-up¶
setup()
Warning
prepare_data() is only called from global_rank=0. Don't assign state there (self.something = ...); use setup() for that.
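A minimal sketch of the intended split, assuming download_data() and load_data() are your own helpers:

    class LitModel(pl.LightningModule):
        def prepare_data(self):
            # runs only on global_rank=0: download / write to disk, assign nothing to self
            download_data()

        def setup(self, stage):
            # runs on every process: a safe place to assign state
            self.dataset = load_data()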
Training loop¶
Validation loop¶
model.zero_grad()
model.eval()
torch.set_grad_enabled(False)
model.train()
torch.set_grad_enabled(True)
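Schematically, the trainer brackets the validation loop with the calls above (a sketch, not the actual trainer code):

    model.zero_grad()
    model.eval()
    torch.set_grad_enabled(False)

    # ... run the validation steps ...

    model.train()
    torch.set_grad_enabled(True)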
Test loop¶
model.zero_grad()
model.eval()
torch.set_grad_enabled(False)
model.train()
torch.set_grad_enabled(True)
General hooks¶
class pytorch_lightning.core.hooks.ModelHooks(*args, **kwargs)
Bases: torch.nn.Module
backward(trainer, loss, optimizer, optimizer_idx)
Override backward with your own implementation if you need to.

Called to perform the backward step. Feel free to override as needed.
The loss passed in has already been scaled for accumulated gradients if requested.

Example:

    def backward(self, trainer, loss, optimizer, optimizer_idx):
        loss.backward()

Return type: None
on_after_backward()
Called in the training loop after loss.backward() and before optimizers do anything. This is the ideal place to inspect or log gradient information.

Example:

    def on_after_backward(self):
        # example to inspect gradient information in tensorboard
        if self.trainer.global_step % 25 == 0:  # don't make the tf file huge
            for name, param in self.named_parameters():
                if param.grad is not None:
                    self.logger.experiment.add_histogram(tag=name, values=param.grad, global_step=self.trainer.global_step)

Return type: None
on_batch_end()
Called in the training loop after the batch.

Return type: None
on_batch_start(batch)
Called in the training loop before anything happens for that batch.

If you return -1 here, you will skip training for the rest of the current epoch.
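For example, a minimal sketch (the saw_nan flag is hypothetical, not part of the API):

    def on_batch_start(self, batch):
        # hypothetical early exit: if a NaN was flagged earlier,
        # returning -1 skips training for the rest of the current epoch
        if getattr(self, "saw_nan", False):
            return -1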
on_before_zero_grad(optimizer)
Called after optimizer.step() and before optimizer.zero_grad().

Called in the training loop after taking an optimizer step and before zeroing grads. A good place to inspect weight information with the weights updated.

This is where it is called:

    for optimizer in optimizers:
        optimizer.step()
        model.on_before_zero_grad(optimizer)  # <---- called here
        optimizer.zero_grad()

Parameters: optimizer (Optimizer) – The optimizer for which grads should be zeroed.

Return type: None
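A minimal usage sketch, mirroring the on_after_backward example above (the logging interval is arbitrary):

    def on_before_zero_grad(self, optimizer):
        # the optimizer has just stepped, so parameters hold their updated values
        if self.trainer.global_step % 25 == 0:
            for name, param in self.named_parameters():
                self.logger.experiment.add_histogram(tag=name, values=param, global_step=self.trainer.global_step)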
on_epoch_end()
Called in the training loop at the very end of the epoch.

Return type: None
on_epoch_start()
Called in the training loop at the very beginning of the epoch.

Return type: None
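For example, a minimal sketch pairing the two epoch hooks to time an epoch (the _epoch_start_time attribute is hypothetical):

    import time

    def on_epoch_start(self):
        # record when the epoch began
        self._epoch_start_time = time.monotonic()

    def on_epoch_end(self):
        # report how long the epoch took
        print(f"epoch took {time.monotonic() - self._epoch_start_time:.1f}s")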
on_fit_end()
Called at the very end of fit. If on DDP it is called on every process.
on_fit_start()
Called at the very beginning of fit. If on DDP it is called on every process.
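A minimal sketch illustrating the DDP behaviour (the print is illustrative only):

    def on_fit_start(self):
        # under DDP this runs once per process, so expect one line per process
        print("fit is starting")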
on_post_performance_check()
Called at the very end of the validation loop.

Return type: None
on_pre_performance_check()
Called at the very beginning of the validation loop.

Return type: None
on_sanity_check_start()
Called before starting evaluation.

Warning
Deprecated. Will be removed in v0.9.0.
on_train_end()
Called at the end of training before the logger experiment is closed.

Return type: None
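A minimal sketch, assuming the default TensorBoard logger (add_text is a TensorBoard SummaryWriter method; the tag and message are arbitrary):

    def on_train_end(self):
        # the logger experiment is still open here, so a final write is safe
        self.logger.experiment.add_text("status", "training finished", self.trainer.global_step)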
on_train_start()
Called at the beginning of training before the sanity check.

Return type: None
setup(stage)
Called at the beginning of fit and test. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.

Example:

    class LitModel(...):
        def __init__(self):
            self.l1 = None

        def prepare_data(self):
            download_data()
            tokenize()

            # don't do this
            self.something = ...

        def setup(self, stage):
            data = load_data(...)
            self.l1 = nn.Linear(28, data.num_classes)
teardown(stage)
Called at the end of fit and test.
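A minimal sketch (self.dataset and its close() method are hypothetical, mirroring something opened in setup()):

    def teardown(self, stage):
        # release anything acquired in setup()
        if getattr(self, "dataset", None) is not None:
            self.dataset.close()  # hypothetical resource with a close() method
            self.dataset = None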
transfer_batch_to_device(batch, device)
Override this hook if your DataLoader returns tensors wrapped in a custom data structure.

The data types listed below (and any arbitrary nesting of them) are supported out of the box:

- torch.Tensor or anything that implements .to(…)
- torchtext.data.batch.Batch

For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).

Example:

    def transfer_batch_to_device(self, batch, device):
        if isinstance(batch, CustomBatch):
            # move all tensors in your custom data structure to the device
            batch.samples = batch.samples.to(device)
            batch.targets = batch.targets.to(device)
        else:
            batch = super().transfer_batch_to_device(batch, device)
        return batch

Returns: A reference to the data on the new device.

Note
This hook should only transfer the data and not modify it, nor should it move the data to any other device than the one passed in as argument (unless you know what you are doing). The Trainer already takes care of splitting the batch and determining the target devices.