hooks

Classes

CheckpointHooks: Hooks to be used with Checkpointing.

DataHooks: Hooks to be used for data-related operations.

ModelHooks: Hooks to be used in LightningModule.

Various hooks to be used in the Lightning code.

class pytorch_lightning.core.hooks.CheckpointHooks[source]

Bases: object

Hooks to be used with Checkpointing.

on_load_checkpoint(checkpoint)[source]

Called by Lightning to restore your model. If you saved something with on_save_checkpoint(), this is your chance to restore it.

Parameters

checkpoint (Dict[str, Any]) – Loaded checkpoint

Example:

def on_load_checkpoint(self, checkpoint):
    # 99% of the time you don't need to implement this method
    self.something_cool_i_want_to_save = checkpoint['something_cool_i_want_to_save']

Note

Lightning auto-restores global step, epoch, and train state including amp scaling. There is no need for you to restore anything regarding training.

Return type

None

on_save_checkpoint(checkpoint)[source]

Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save.

Parameters

checkpoint (Dict[str, Any]) – The full checkpoint dictionary before it gets dumped to a file. Implementations of this hook can insert additional data into this dictionary.

Example:

def on_save_checkpoint(self, checkpoint):
    # 99% of use cases you don't need to implement this method
    checkpoint['something_cool_i_want_to_save'] = my_cool_picklable_object

Note

Lightning saves all aspects of training (epoch, global step, etc…) including amp scaling. There is no need for you to store anything about training.

Return type

None

class pytorch_lightning.core.hooks.DataHooks[source]

Bases: object

Hooks to be used for data-related operations.

on_after_batch_transfer(batch, dataloader_idx)[source]

Override to alter or apply batch augmentations to your batch after it is transferred to the device.

Note

To check the current stage of execution of this hook, use self.trainer.training/testing/validating/predicting and branch your logic accordingly.

Note

This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.

Parameters
  • batch (Any) – A batch of data that needs to be altered or augmented.

  • dataloader_idx (int) – The index of the dataloader to which the batch belongs.

Return type

Any

Returns

A batch of data

Example:

def on_after_batch_transfer(self, batch, dataloader_idx):
    # the batch has already been moved to the target device at this point
    batch['x'] = gpu_transforms(batch['x'])
    return batch

Raises

MisconfigurationException – If using data-parallel, Trainer(accelerator='dp').

on_before_batch_transfer(batch, dataloader_idx)[source]

Override to alter or apply batch augmentations to your batch before it is transferred to the device.

Note

To check the current stage of execution of this hook, use self.trainer.training/testing/validating/predicting and branch your logic accordingly.

Note

This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.

Parameters
  • batch (Any) – A batch of data that needs to be altered or augmented.

  • dataloader_idx (int) – The index of the dataloader to which the batch belongs.

Return type

Any

Returns

A batch of data

Example:

def on_before_batch_transfer(self, batch, dataloader_idx):
    # the batch is still on the CPU at this point
    batch['x'] = transforms(batch['x'])
    return batch

Raises

MisconfigurationException – If using data-parallel, Trainer(accelerator='dp').

on_predict_dataloader()[source]

Called before requesting the predict dataloader.

Deprecated since version v1.5: on_predict_dataloader() is deprecated and will be removed in v1.7.0. Please use predict_dataloader() directly.

Return type

None

on_test_dataloader()[source]

Called before requesting the test dataloader.

Deprecated since version v1.5: on_test_dataloader() is deprecated and will be removed in v1.7.0. Please use test_dataloader() directly.

Return type

None

on_train_dataloader()[source]

Called before requesting the train dataloader.

Deprecated since version v1.5: on_train_dataloader() is deprecated and will be removed in v1.7.0. Please use train_dataloader() directly.

Return type

None

on_val_dataloader()[source]

Called before requesting the val dataloader.

Deprecated since version v1.5: on_val_dataloader() is deprecated and will be removed in v1.7.0. Please use val_dataloader() directly.

Return type

None

predict_dataloader()[source]

Implement one or multiple PyTorch DataLoaders for prediction.

It’s recommended that all data downloads and preparation happen in prepare_data().

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Return type

Union[DataLoader, Sequence[DataLoader]]

Returns

A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
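
Example (a minimal sketch; PredictionDataset, '/path/to/data' and self.batch_size are illustrative placeholders, not part of the API):

def predict_dataloader(self):
    dataset = PredictionDataset('/path/to/data')
    # keep shuffle=False so predictions stay in dataset order
    return torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )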

Note

In the case where you return multiple prediction dataloaders, predict_step() will have an argument dataloader_idx which matches the order here.

prepare_data()[source]

Use this to download and prepare data.

Warning

Do NOT set state on the model (use setup() instead), since this hook is NOT called on every device when using DDP/TPU.

Example:

def prepare_data(self):
    # good
    download_data()
    tokenize()
    etc()

    # bad
    self.split = data_split
    self.some_state = some_other_state()

In DDP, prepare_data() can be called in two ways (controlled by Trainer(prepare_data_per_node)):

  1. Once per node. This is the default and is only called on LOCAL_RANK=0.

  2. Once in total. Only called on GLOBAL_RANK=0.

Example:

# DEFAULT
# called once per node on LOCAL_RANK=0 of that node
Trainer(prepare_data_per_node=True)

# call on GLOBAL_RANK=0 (great for shared file systems)
Trainer(prepare_data_per_node=False)

This is called before requesting the dataloaders:

model.prepare_data()
initialize_distributed()
model.setup(stage)
model.train_dataloader()
model.val_dataloader()
model.test_dataloader()

Return type

None

setup(stage=None)[source]

Called at the beginning of fit (train + validate), validate, test, and predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.

Parameters

stage (Optional[str]) – either 'fit', 'validate', 'test', or 'predict'

Example:

class LitModel(...):
    def __init__(self):
        super().__init__()
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this
        self.something = some_other_state()

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)

Return type

None

teardown(stage=None)[source]

Called at the end of fit (train + validate), validate, test, predict, or tune.

Parameters

stage (Optional[str]) – either 'fit', 'validate', 'test', or 'predict'
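
Example (a hedged sketch of cleaning up resources created in setup(); self.tmp_dir is an assumed attribute set in setup(), not part of the API):

def teardown(self, stage=None):
    # remove the scratch directory this module created in setup()
    # (assumes `import shutil` at module level)
    if getattr(self, 'tmp_dir', None) is not None:
        shutil.rmtree(self.tmp_dir, ignore_errors=True)
        self.tmp_dir = None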

Return type

None

test_dataloader()[source]

Implement one or multiple PyTorch DataLoaders for testing.

The dataloader you return will not be reloaded unless you set reload_dataloaders_every_n_epochs to a positive integer.

For data processing use the following pattern:

  • download in prepare_data()

  • process and split in setup()

However, the above are only necessary for distributed processing.

Warning

Do not assign state in prepare_data().

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Return type

Union[DataLoader, Sequence[DataLoader]]

Returns

A torch.utils.data.DataLoader or a sequence of them specifying testing samples.

Example:

def test_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )

    return loader

# can also return multiple dataloaders
def test_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]

Note

If you don’t need a test dataset and a test_step(), you don’t need to implement this method.

Note

In the case where you return multiple test dataloaders, the test_step() will have an argument dataloader_idx which matches the order here.

train_dataloader()[source]

Implement one or more PyTorch DataLoaders for training.

Return type

Union[DataLoader, Sequence[DataLoader], Sequence[Sequence[DataLoader]], Sequence[Dict[str, DataLoader]], Dict[str, DataLoader], Dict[str, Dict[str, DataLoader]], Dict[str, Sequence[DataLoader]]]

Returns

A collection of torch.utils.data.DataLoader specifying training samples. In the case of multiple dataloaders, please see this page.

The dataloader you return will not be reloaded unless you set reload_dataloaders_every_n_epochs to a positive integer.

For data processing use the following pattern:

  • download in prepare_data()

  • process and split in setup()

However, the above are only necessary for distributed processing.

Warning

Do not assign state in prepare_data().

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Example:

# single dataloader
def train_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=True, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=True
    )
    return loader

# multiple dataloaders, return as list
def train_dataloader(self):
    mnist = MNIST(...)
    cifar = CIFAR(...)
    mnist_loader = torch.utils.data.DataLoader(
        dataset=mnist, batch_size=self.batch_size, shuffle=True
    )
    cifar_loader = torch.utils.data.DataLoader(
        dataset=cifar, batch_size=self.batch_size, shuffle=True
    )
    # each batch will be a list of tensors: [batch_mnist, batch_cifar]
    return [mnist_loader, cifar_loader]

# multiple dataloader, return as dict
def train_dataloader(self):
    mnist = MNIST(...)
    cifar = CIFAR(...)
    mnist_loader = torch.utils.data.DataLoader(
        dataset=mnist, batch_size=self.batch_size, shuffle=True
    )
    cifar_loader = torch.utils.data.DataLoader(
        dataset=cifar, batch_size=self.batch_size, shuffle=True
    )
    # each batch will be a dict of tensors: {'mnist': batch_mnist, 'cifar': batch_cifar}
    return {'mnist': mnist_loader, 'cifar': cifar_loader}

transfer_batch_to_device(batch, device, dataloader_idx)[source]

Override this hook if your DataLoader returns tensors wrapped in a custom data structure.

Tensors and standard Python collections of them (lists, dicts, tuples, and any arbitrary nesting of these) are supported out of the box.

For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).

Note

This hook should only transfer the data and not modify it, nor should it move the data to any device other than the one passed in as an argument (unless you know what you are doing). To check the current stage of execution of this hook, use self.trainer.training/testing/validating/predicting and branch your logic accordingly.

Note

This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.

Parameters
  • batch (Any) – A batch of data that needs to be transferred to a new device.

  • device (device) – The target device as defined in PyTorch.

  • dataloader_idx (int) – The index of the dataloader to which the batch belongs.

Return type

Any

Returns

A reference to the data on the new device.

Example:

def transfer_batch_to_device(self, batch, device, dataloader_idx):
    if isinstance(batch, CustomBatch):
        # move all tensors in your custom data structure to the device
        batch.samples = batch.samples.to(device)
        batch.targets = batch.targets.to(device)
    else:
        batch = super().transfer_batch_to_device(batch, device, dataloader_idx)
    return batch

Raises

MisconfigurationException – If using data-parallel, Trainer(accelerator='dp').

See also

  • move_data_to_device()

  • apply_to_collection()

val_dataloader()[source]

Implement one or multiple PyTorch DataLoaders for validation.

The dataloader you return will not be reloaded unless you set reload_dataloaders_every_n_epochs to a positive integer.

It’s recommended that all data downloads and preparation happen in prepare_data().

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Return type

Union[DataLoader, Sequence[DataLoader]]

Returns

A torch.utils.data.DataLoader or a sequence of them specifying validation samples.

Examples:

def val_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False,
                    transform=transform, download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )

    return loader

# can also return multiple dataloaders
def val_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]

Note

If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.

Note

In the case where you return multiple validation dataloaders, the validation_step() will have an argument dataloader_idx which matches the order here.

class pytorch_lightning.core.hooks.ModelHooks[source]

Bases: object

Hooks to be used in LightningModule.

configure_sharded_model()[source]

Hook to create modules in a distributed-aware context. This is useful when using sharded plugins, where we'd like to shard the model immediately, which saves memory and initialization time for extremely large models.

The accelerator manages whether to call this hook at every given stage. For sharded plugins where model parallelism is required, the hook is usually only called once to initialize the sharded parameters, and it is not called again in the same process.

By default, for accelerators/plugins that do not use model-sharding techniques, this hook is called during each of the fit/val/test/predict stages.
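
Example (a sketch of typical usage only; the layer sizes are arbitrary and the hook body is an illustrative assumption, not a required pattern):

def configure_sharded_model(self):
    # create layers here instead of __init__ so a sharded plugin
    # can shard/place the parameters as they are instantiated
    self.block = nn.Sequential(
        nn.Linear(32, 32),
        nn.ReLU(),
        nn.Linear(32, 2),
    )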

Return type

None

on_after_backward()[source]

Called after loss.backward() and before optimizers are stepped.

Note

If using native AMP, the gradients will not be unscaled at this point. Use on_before_optimizer_step() if you need the unscaled gradients.
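
Example (a sketch mirroring the on_before_optimizer_step example below; the 25-step interval is arbitrary, and under native AMP the gradients seen here may still be scaled):

def on_after_backward(self):
    if self.trainer.global_step % 25 == 0:  # don't make the log file huge
        for k, v in self.named_parameters():
            if v.grad is not None:
                self.logger.experiment.add_histogram(
                    tag=k, values=v.grad, global_step=self.trainer.global_step
                )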

Return type

None

on_before_backward(loss)[source]

Called before loss.backward().

Parameters

loss (Tensor) – Loss divided by number of batches for gradient accumulation and scaled if using native AMP.
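
Example (a minimal sketch; self.last_loss is an attribute invented purely for illustration):

def on_before_backward(self, loss):
    # keep a detached copy of the (accumulation-adjusted, possibly AMP-scaled) loss
    self.last_loss = loss.detach()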

Return type

None

on_before_optimizer_step(optimizer, optimizer_idx)[source]

Called before optimizer.step().

The hook is only called if gradients do not need to be accumulated. See: accumulate_grad_batches. If using native AMP, the loss will be unscaled before calling this hook. See these docs for more information on the scaling of gradients.

Parameters
  • optimizer (Optimizer) – Current optimizer being used.

  • optimizer_idx (int) – Index of the current optimizer being used.

Example:

def on_before_optimizer_step(self, optimizer, optimizer_idx):
    # example to inspect gradient information in tensorboard
    if self.trainer.global_step % 25 == 0:  # don't make the tf file huge
        for k, v in self.named_parameters():
            self.logger.experiment.add_histogram(
                tag=k, values=v.grad, global_step=self.trainer.global_step
            )

Return type

None

on_before_zero_grad(optimizer)[source]

Called after training_step() and before optimizer.zero_grad().

Called in the training loop after taking an optimizer step and before zeroing grads. Good place to inspect weight information with weights updated.

This is where it is called:

for optimizer in optimizers:
    out = training_step(...)

    model.on_before_zero_grad(optimizer) # < ---- called here
    optimizer.zero_grad()

    backward()

Parameters

optimizer (Optimizer) – The optimizer for which grads should be zeroed.
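
Example (a sketch of inspecting the freshly updated weights; the 100-step interval and histogram logging are illustrative assumptions):

def on_before_zero_grad(self, optimizer):
    # optimizer.step() has already run, so parameters hold their updated values
    if self.trainer.global_step % 100 == 0:
        for k, v in self.named_parameters():
            self.logger.experiment.add_histogram(
                tag=f"weights/{k}", values=v.data, global_step=self.trainer.global_step
            )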

Return type

None

on_epoch_end()[source]

Called when any of the train/val/test epochs ends.

Return type

None

on_epoch_start()[source]

Called when any of the train/val/test epochs begins.

Return type

None

on_fit_end()[source]

Called at the very end of fit. If on DDP, it is called on every process.

Return type

None

on_fit_start()[source]

Called at the very beginning of fit. If on DDP, it is called on every process.

Return type

None

on_post_move_to_device()[source]

Called in the parameter_validation decorator after to() is called. This is a good place to tie weights between modules after moving them to a device. Can be used when training models with weight sharing properties on TPU.

Addresses the handling of shared weights on TPU: https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks

Example:

def on_post_move_to_device(self):
    self.decoder.weight = self.encoder.weight

Return type

None

on_predict_batch_end(outputs, batch, batch_idx, dataloader_idx)[source]

Called in the predict loop after the batch.

Parameters
  • outputs (Optional[Any]) – The outputs of predict_step(x)

  • batch (Any) – The batched data as it is returned by the prediction DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader

Return type

None

on_predict_batch_start(batch, batch_idx, dataloader_idx)[source]

Called in the predict loop before anything happens for that batch.

Parameters
  • batch (Any) – The batched data as it is returned by the prediction DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader

Return type

None

on_predict_end()[source]

Called at the end of predicting.

Return type

None

on_predict_epoch_end(results)[source]

Called at the end of predicting.

Return type

None

on_predict_epoch_start()[source]

Called at the beginning of predicting.

Return type

None

on_predict_model_eval()[source]

Sets the model to eval during the predict loop.

Return type

None

on_predict_start()[source]

Called at the beginning of predicting.

Return type

None

on_pretrain_routine_end()[source]

Called at the end of the pretrain routine (between fit and train start).

  • fit

  • pretrain_routine start

  • pretrain_routine end

  • training_start

Return type

None

on_pretrain_routine_start()[source]

Called at the beginning of the pretrain routine (between fit and train start).

  • fit

  • pretrain_routine start

  • pretrain_routine end

  • training_start

Return type

None

on_test_batch_end(outputs, batch, batch_idx, dataloader_idx)[source]

Called in the test loop after the batch.

Parameters
  • outputs (Union[Tensor, Dict[str, Any], None]) – The outputs of test_step_end(test_step(x))

  • batch (Any) – The batched data as it is returned by the test DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader

Return type

None

on_test_batch_start(batch, batch_idx, dataloader_idx)[source]

Called in the test loop before anything happens for that batch.

Parameters
  • batch (Any) – The batched data as it is returned by the test DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader

Return type

None

on_test_end()[source]

Called at the end of testing.

Return type

None

on_test_epoch_end()[source]

Called in the test loop at the very end of the epoch.

Return type

None

on_test_epoch_start()[source]

Called in the test loop at the very beginning of the epoch.

Return type

None

on_test_model_eval()[source]

Sets the model to eval during the test loop.

Return type

None

on_test_model_train()[source]

Sets the model to train during the test loop.

Return type

None

on_test_start()[source]

Called at the beginning of testing.

Return type

None

on_train_batch_end(outputs, batch, batch_idx, dataloader_idx)[source]

Called in the training loop after the batch.

Parameters
  • outputs (Union[Tensor, Dict[str, Any]]) – The outputs of training_step_end(training_step(x))

  • batch (Any) – The batched data as it is returned by the training DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader

Return type

None

on_train_batch_start(batch, batch_idx, dataloader_idx)[source]

Called in the training loop before anything happens for that batch.

If you return -1 here, you will skip training for the rest of the current epoch.

Parameters
  • batch (Any) – The batched data as it is returned by the training DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader
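
Example (a sketch of skipping the rest of an epoch; self.max_batches_per_epoch is a hypothetical attribute, not a Lightning setting):

def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
    # returning -1 skips training for the remainder of the current epoch
    if batch_idx >= self.max_batches_per_epoch:
        return -1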

Return type

None

on_train_end()[source]

Called at the end of training before logger experiment is closed.

Return type

None

on_train_epoch_end(unused=None)[source]

Called in the training loop at the very end of the epoch.

To access all batch outputs at the end of the epoch, either:

  1. Implement training_epoch_end in the LightningModule OR

  2. Cache data across steps on the attribute(s) of the LightningModule and access them in this hook (see the sketch below)
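
Example (a sketch of option 2; self.training_step_outputs is an assumed list initialized in __init__, and compute_loss is a hypothetical helper):

def training_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)  # hypothetical helper
    # cache what you need so it can be read back in on_train_epoch_end
    self.training_step_outputs.append(loss.detach())
    return loss

def on_train_epoch_end(self, unused=None):
    epoch_mean = torch.stack(self.training_step_outputs).mean()
    self.log("train_epoch_mean_loss", epoch_mean)
    # free memory for the next epoch
    self.training_step_outputs.clear()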

on_train_epoch_start()[source]

Called in the training loop at the very beginning of the epoch.

Return type

None

on_train_start()[source]

Called at the beginning of training, after the sanity check.

Return type

None

on_validation_batch_end(outputs, batch, batch_idx, dataloader_idx)[source]

Called in the validation loop after the batch.

Parameters
  • outputs (Union[Tensor, Dict[str, Any], None]) – The outputs of validation_step_end(validation_step(x))

  • batch (Any) – The batched data as it is returned by the validation DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader

Return type

None

on_validation_batch_start(batch, batch_idx, dataloader_idx)[source]

Called in the validation loop before anything happens for that batch.

Parameters
  • batch (Any) – The batched data as it is returned by the validation DataLoader.

  • batch_idx (int) – the index of the batch

  • dataloader_idx (int) – the index of the dataloader

Return type

None

on_validation_end()[source]

Called at the end of validation.

Return type

None

on_validation_epoch_end()[source]

Called in the validation loop at the very end of the epoch.

Return type

None

on_validation_epoch_start()[source]

Called in the validation loop at the very beginning of the epoch.

Return type

None

on_validation_model_eval()[source]

Sets the model to eval during the val loop.

Return type

None

on_validation_model_train()[source]

Sets the model to train during the val loop.

Return type

None

on_validation_start()[source]

Called at the beginning of validation.

Return type

None