pytorch_lightning.core.hooks module
class pytorch_lightning.core.hooks.ModelHooks(*args, **kwargs)

    Bases: torch.nn.Module
backward(trainer, loss, optimizer, optimizer_idx)

    Override backward with your own implementation if you need to.

    Called to perform the backward step. Feel free to override as needed.
    The loss passed in has already been scaled for accumulated gradients
    if requested.

    Parameters:
        trainer – Pointer to the trainer.
        loss – The loss tensor to backpropagate.
        optimizer – The current optimizer being used.
        optimizer_idx – The index of the current optimizer being used.

    Example:

        def backward(self, trainer, loss, optimizer, optimizer_idx):
            loss.backward()

    Return type: None
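One reason to override is a model that backpropagates through the same graph more than once per step. A minimal sketch, assuming such a double-backward setup (the retain_graph usage is illustrative, not something Lightning requires):

    def backward(self, trainer, loss, optimizer, optimizer_idx):
        # keep the graph alive so a second, manual backward pass can reuse it
        # (only needed if you really do backprop through the graph twice)
        loss.backward(retain_graph=True)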
on_after_backward()

    Called in the training loop after loss.backward() and before optimizers
    do anything. This is the ideal place to inspect or log gradient
    information.

    Example:

        def on_after_backward(self):
            # example to inspect gradient information in tensorboard
            if self.trainer.global_step % 25 == 0:  # don't make the tf file huge
                params = self.state_dict()
                for k, v in params.items():
                    grads = v
                    name = k
                    self.logger.experiment.add_histogram(
                        tag=name, values=grads,
                        global_step=self.trainer.global_step)

    Return type: None
on_batch_end()

    Called in the training loop after the batch.

    Warning: Deprecated in 0.9.0, will be removed in 1.0.0
    (use on_train_batch_end instead).

    Return type: None
on_batch_start(batch)

    Called in the training loop before anything happens for that batch.

    If you return -1 here, you will skip training for the rest of the
    current epoch.

    Warning: Deprecated in 0.9.0, will be removed in 1.0.0
    (use on_train_batch_start instead).

    Return type: None
on_before_zero_grad(optimizer)

    Called after optimizer.step() and before optimizer.zero_grad().

    Called in the training loop after taking an optimizer step and before
    zeroing grads. A good place to inspect weight information with the
    weights updated.

    This is where it is called:

        for optimizer in optimizers:
            optimizer.step()
            model.on_before_zero_grad(optimizer)  # <-- called here
            optimizer.zero_grad()
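Since the weights have just been updated, a typical override logs weight statistics. A minimal sketch (the logging cadence and tag names are illustrative assumptions):

    def on_before_zero_grad(self, optimizer):
        # log the norm of each weight tensor right after the optimizer step
        if self.trainer.global_step % 25 == 0:  # keep the log volume small
            for name, param in self.named_parameters():
                self.logger.experiment.add_scalar(
                    'weight_norm/' + name, param.data.norm(),
                    self.trainer.global_step)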
on_epoch_start()

    Called in the training loop at the very beginning of the epoch.

    Return type: None
on_fit_start()

    Called at the very beginning of fit. If using DDP, it is called on
    every process.
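Because this hook runs on every DDP process, it is a reasonable place for per-process bookkeeping. A minimal sketch (the print is illustrative; global_rank distinguishes the processes):

    def on_fit_start(self):
        # runs once per process under DDP
        print(f"starting fit on rank {self.trainer.global_rank}")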
on_pre_performance_check()

    Called at the very beginning of the validation loop.

    Return type: None
on_pretrain_routine_end()

    Called at the end of the pretrain routine (between fit and train start).
    The order is:

        fit
        pretrain_routine start
        pretrain_routine end
        training_start

    Return type: None
on_pretrain_routine_start()

    Called at the beginning of the pretrain routine (between fit and train
    start). The order is:

        fit
        pretrain_routine start
        pretrain_routine end
        training_start

    Return type: None
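For example, the start hook can host one-time preparation that must happen after fit begins but before any training step runs. A minimal sketch (the frozen self.backbone attribute is a hypothetical example, not part of Lightning):

    def on_pretrain_routine_start(self):
        # freeze a (hypothetical) backbone before any training steps run
        for param in self.backbone.parameters():
            param.requires_grad = False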
on_test_batch_end(batch, batch_idx, dataloader_idx)

    Called in the test loop after the batch.
on_test_batch_start(batch, batch_idx, dataloader_idx)

    Called in the test loop before anything happens for that batch.
on_test_epoch_start()

    Called in the test loop at the very beginning of the epoch.

    Return type: None
on_train_batch_end(batch, batch_idx, dataloader_idx)

    Called in the training loop after the batch.
on_train_batch_start(batch, batch_idx, dataloader_idx)

    Called in the training loop before anything happens for that batch.

    If you return -1 here, you will skip training for the rest of the
    current epoch.
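A minimal sketch of the early-exit behaviour (the max_batches_per_epoch attribute is a hypothetical setting, not something Lightning provides):

    def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
        # returning -1 skips the rest of the current training epoch
        if batch_idx >= self.max_batches_per_epoch:
            return -1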
on_train_end()

    Called at the end of training, before the logger experiment is closed.

    Return type: None
on_train_epoch_start()

    Called in the training loop at the very beginning of the epoch.

    Return type: None
on_validation_batch_end(batch, batch_idx, dataloader_idx)

    Called in the validation loop after the batch.
on_validation_batch_start(batch, batch_idx, dataloader_idx)

    Called in the validation loop before anything happens for that batch.
on_validation_epoch_end()

    Called in the validation loop at the very end of the epoch.

    Return type: None
on_validation_epoch_start()

    Called in the validation loop at the very beginning of the epoch.

    Return type: None
setup(stage)

    Called at the beginning of fit and test. This is a good hook when you
    need to build models dynamically or adjust something about them. This
    hook is called on every process when using DDP.

    Parameters:
        stage – either 'fit' or 'test'.

    Example:

        class LitModel(...):
            def __init__(self):
                self.l1 = None

            def prepare_data(self):
                download_data()
                tokenize()

                # don't do this: prepare_data runs on a single process,
                # so state assigned here is not visible everywhere
                # self.something = ...

            def setup(self, stage):
                data = load_data(...)
                self.l1 = nn.Linear(28, data.num_classes)
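The split between prepare_data and setup matters under DDP: prepare_data is called on a single process (so it should only download or write to disk, not assign state), while setup runs on every process, so attributes assigned there exist in every process.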
transfer_batch_to_device(batch, device)

    Override this hook if your DataLoader returns tensors wrapped in a
    custom data structure.

    The data types listed below (and any arbitrary nesting of them) are
    supported out of the box:

        torch.Tensor or anything that implements .to(…)
        torchtext.data.batch.Batch

    For anything else, you need to define how the data is moved to the
    target device (CPU, GPU, TPU, …).

    Example:

        def transfer_batch_to_device(self, batch, device):
            if isinstance(batch, CustomBatch):
                # move all tensors in your custom data structure to the device
                batch.samples = batch.samples.to(device)
                batch.targets = batch.targets.to(device)
            else:
                batch = super().transfer_batch_to_device(batch, device)
            return batch

    Parameters:
        batch – A batch of data that needs to be transferred to the device.
        device – The target device.

    Returns:
        A reference to the data on the new device.

    Note: This hook should only transfer the data and not modify it, nor
    should it move the data to any other device than the one passed in as
    argument (unless you know what you are doing).

    Note: This hook only runs on single-GPU training (no data-parallel).
    If you need multi-GPU support for your custom batch objects, you need
    to define your custom DistributedDataParallel or
    LightningDistributedDataParallel and override configure_ddp().
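For completeness, a minimal sketch of what a structure like the CustomBatch used above might look like (the class is hypothetical, not part of Lightning or torch):

    class CustomBatch:
        # a hypothetical container holding two tensors per batch
        def __init__(self, samples, targets):
            self.samples = samples  # e.g. a float tensor of inputs
            self.targets = targets  # e.g. a long tensor of labels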