LightningModule¶
A LightningModule organizes your PyTorch code into 5 sections:
Computations (init)
Train loop (training_step)
Validation loop (validation_step)
Test loop (test_step)
Optimizers (configure_optimizers)
Notice a few things.
It’s the SAME code.
The PyTorch code IS NOT abstracted - just organized.
All the other code that’s not in the LightningModule has been automated for you by the trainer.
net = Net()
trainer = Trainer()
trainer.fit(net)
There are no .cuda() or .to() calls… Lightning does these for you.
# don't do in lightning
x = torch.Tensor(2, 3)
x = x.cuda()
x = x.to(device)

# do this instead
x = x  # leave it alone!

# or to init a new tensor
new_x = torch.Tensor(2, 3)
new_x = new_x.type_as(x)
There are no samplers for distributed training; Lightning also does this for you.
# Don't do in Lightning...
data = MNIST(...)
sampler = DistributedSampler(data)
DataLoader(data, sampler=sampler)

# do this instead
data = MNIST(...)
DataLoader(data)
A LightningModule is a torch.nn.Module but with added functionality. Use it as such!
net = Net.load_from_checkpoint(PATH)
net.freeze()
out = net(x)
Thus, to use Lightning, you just need to organize your code, which takes about 30 minutes (and let’s be real, you probably should do that anyway).
Minimal Example¶
Here are the only required methods.
>>> import torch
>>> from torch.nn import functional as F
>>> import pytorch_lightning as pl
>>> class LitModel(pl.LightningModule):
...
... def __init__(self):
... super().__init__()
... self.l1 = torch.nn.Linear(28 * 28, 10)
...
... def forward(self, x):
... return torch.relu(self.l1(x.view(x.size(0), -1)))
...
... def training_step(self, batch, batch_idx):
... x, y = batch
... y_hat = self(x)
... loss = F.cross_entropy(y_hat, y)
... return pl.TrainResult(loss)
...
... def configure_optimizers(self):
... return torch.optim.Adam(self.parameters(), lr=0.02)
Which you can train by doing:
import os

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST

train_loader = DataLoader(MNIST(os.getcwd(), download=True, transform=transforms.ToTensor()))
trainer = pl.Trainer()
model = LitModel()
trainer.fit(model, train_loader)
LightningModule for research¶
For research, LightningModules are best structured as systems.
A model (colloquially) refers to something like a resnet or RNN. A system may be a collection of models. Here are examples of systems:
GAN (generator, discriminator)
RL (policy, actor, critic)
Autoencoders (encoder, decoder)
Seq2Seq (encoder, attention, decoder)
etc…
A LightningModule is best used to define a complex system:
import pytorch_lightning as pl
import torch
from torch import nn
class Autoencoder(pl.LightningModule):
def __init__(self, latent_dim=2):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, latent_dim))
self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 28 * 28))
def training_step(self, batch, batch_idx):
x, _ = batch
# encode
x = x.view(x.size(0), -1)
z = self.encoder(x)
# decode
recons = self.decoder(z)
# reconstruction
reconstruction_loss = nn.functional.mse_loss(recons, x)
return pl.TrainResult(reconstruction_loss)
def validation_step(self, batch, batch_idx):
x, _ = batch
x = x.view(x.size(0), -1)
z = self.encoder(x)
recons = self.decoder(z)
reconstruction_loss = nn.functional.mse_loss(recons, x)
result = pl.EvalResult(checkpoint_on=reconstruction_loss)
return result
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.0002)
Which can be trained like this:
autoencoder = Autoencoder()
trainer = pl.Trainer(gpus=1)
trainer.fit(autoencoder, train_dataloader, val_dataloader)
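The train_dataloader and val_dataloader above are plain PyTorch DataLoaders. A minimal sketch of how they might be built, assuming torchvision's MNIST (any dataset works):

import os

from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST

# download once, then carve out a validation split
dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train_set, val_set = random_split(dataset, [55000, 5000])

train_dataloader = DataLoader(train_set, batch_size=64)
val_dataloader = DataLoader(val_set, batch_size=64)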
This simple model generates blurry samples (the encoder and decoder are too weak).
The methods above are part of the Lightning interface:
training_step
validation_step
test_step
configure_optimizers
Note that in this case, the train loop and val loop are exactly the same. We can of course reuse this code.
class Autoencoder(pl.LightningModule):
def __init__(self, latent_dim=2):
super().__init__()
self.encoder = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, latent_dim))
self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 28 * 28))
def training_step(self, batch, batch_idx):
loss = self.shared_step(batch)
return pl.TrainResult(loss)
def validation_step(self, batch, batch_idx):
loss = self.shared_step(batch)
result = pl.EvalResult(checkpoint_on=loss)
return result
def shared_step(self, batch):
x, _ = batch
# encode
x = x.view(x.size(0), -1)
z = self.encoder(x)
# decode
recons = self.decoder(z)
# loss
return nn.functional.mse_loss(recons, x)
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.0002)
We create a new method called shared_step that all loops can use. This method name is arbitrary and NOT reserved.
Inference in Research¶
In the case where we want to perform inference with the system, we can add a forward method to the LightningModule.
class Autoencoder(pl.LightningModule):
def forward(self, x):
return self.decoder(x)
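Once trained, inference is then a plain call into the model. A quick sketch (PATH is a placeholder checkpoint path; latent_dim defaults to 2 in the Autoencoder above):

# sample a latent vector and decode it
autoencoder = Autoencoder.load_from_checkpoint(PATH)
autoencoder.freeze()

z = torch.randn(1, 2)       # latent_dim = 2
generated = autoencoder(z)  # forward routes straight to the decoder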
The advantage of adding a forward is that in complex systems, you can do a much more involved inference procedure, such as text generation:
class Seq2Seq(pl.LightningModule):

    def forward(self, x):
        # an embedding layer is assumed here; calling self(x) would recurse into forward
        embeddings = self.embedding(x)
        hidden_states = self.encoder(embeddings)
        for h in hidden_states:
            # decode
            ...
        return decoded
LightningModule for production¶
For cases like production, you might want to iterate over different models inside a LightningModule.
import pytorch_lightning as pl
import torch
from torch.nn import functional as F
from pytorch_lightning.metrics import functional as FM
class ClassificationTask(pl.LightningModule):
def __init__(self, model):
super().__init__()
self.model = model
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
return pl.TrainResult(loss)
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
acc = FM.accuracy(y_hat, y)
result = pl.EvalResult(checkpoint_on=loss)
result.log_dict({'val_acc': acc, 'val_loss': loss})
return result
def test_step(self, batch, batch_idx):
result = self.validation_step(batch, batch_idx)
result.rename_keys({'val_acc': 'test_acc', 'val_loss': 'test_loss'})
return result
def configure_optimizers(self):
return torch.optim.Adam(self.model.parameters(), lr=0.02)
Then pass in any arbitrary model to be fit with this task:
for model in [resnet50(), vgg16(), BidirectionalRNN()]:
task = ClassificationTask(model)
trainer = Trainer(gpus=2)
trainer.fit(task, train_dataloader, val_dataloader)
Tasks can be arbitrarily complex, such as implementing GAN training, self-supervised learning, or even RL.
class GANTask(pl.LightningModule):
def __init__(self, generator, discriminator):
super().__init__()
self.generator = generator
self.discriminator = discriminator
...
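A hedged sketch of how such a task's training_step might alternate between the two models (the optimizer_idx argument is documented under training_step below; latent_dim, adversarial_loss and discriminator_loss are hypothetical helpers):

def training_step(self, batch, batch_idx, optimizer_idx):
    real, _ = batch

    if optimizer_idx == 0:
        # generator turn: try to fool the discriminator
        z = torch.randn(real.size(0), self.latent_dim).type_as(real)
        fake = self.generator(z)
        g_loss = self.adversarial_loss(self.discriminator(fake))
        return pl.TrainResult(g_loss)

    if optimizer_idx == 1:
        # discriminator turn: tell real from fake
        d_loss = self.discriminator_loss(real)
        return pl.TrainResult(d_loss)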
Inference in production¶
When used like this, the model can be separated from the Task and thus used in production without needing to keep it in a LightningModule.
You can export to ONNX, trace using TorchScript (JIT), or run in the Python runtime.
task = ClassificationTask(model)
trainer = Trainer(gpus=2)
trainer.fit(task, train_dataloader, val_dataloader)
# use model after training or load weights and drop into the production system
model.eval()
y_hat = model(x)
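For instance, a sketch of the export paths (the input shape here is an assumption; to_onnx is documented in the API section below, and torch.jit.trace is standard PyTorch):

# export via the LightningModule helper
input_sample = torch.randn(1, 28 * 28)  # assumed input shape
task.to_onnx('model.onnx', input_sample)

# or trace the bare model with TorchScript
model.eval()
traced = torch.jit.trace(model, input_sample)
traced.save('model.pt')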
Training loop¶
To add a training loop, use the training_step method:
class LitClassifier(pl.LightningModule):
def __init__(self, model):
super().__init__()
self.model = model
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
return pl.TrainResult(loss)
Under the hood, Lightning does the following (pseudocode):
# put model in train mode
model.train()
torch.set_grad_enabled(True)
for batch in train_dataloader:
    # forward
    loss = training_step(batch)

    # backward
    loss.backward()

    # apply and clear grads
    optimizer.step()
    optimizer.zero_grad()
Training epoch-level metrics¶
If you want to calculate epoch-level metrics and log them, use the TrainResult.log method
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
result = pl.TrainResult(loss)
# logs metrics for each training_step, and the average across the epoch, to the progress bar and logger
result.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
return result
The TrainResult.log method automatically reduces the requested metrics across the full epoch. Here’s the pseudocode of what it does under the hood:
outs = []
for batch in train_dataloader:
    # forward
    out = training_step(batch)
    outs.append(out)

    # backward
    out.minimize.backward()

    # apply and clear grads
    optimizer.step()
    optimizer.zero_grad()

# the epoch metric is the mean of the logged per-step values
epoch_metric = torch.mean(torch.stack([x['train_loss'] for x in outs]))
Train epoch-level operations¶
If you need to do something with all the outputs of each training_step, override training_epoch_end yourself.
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
    result = pl.TrainResult(loss)
    result.prediction = some_prediction
    return result
def training_epoch_end(self, training_step_outputs):
all_predictions = training_step_outputs.prediction
...
return result
The matching pseudocode is:
outs = []
for batch in train_dataloader:
    # forward
    out = training_step(batch)
    outs.append(out)

    # backward
    out.minimize.backward()

    # apply and clear grads
    optimizer.step()
    optimizer.zero_grad()
epoch_out = training_epoch_end(outs)
Training with DataParallel¶
When training using a distributed_backend that splits data from each batch across GPUs, sometimes you might need to aggregate them on the master GPU for processing (dp, or ddp2).
In this case, implement the training_step_end method
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
    result = pl.TrainResult(loss)
    result.prediction = some_prediction
    return result
def training_step_end(self, batch_parts):
gpu_0_prediction = batch_parts.prediction[0]
gpu_1_prediction = batch_parts.prediction[1]
# do something with both outputs
return result
def training_epoch_end(self, training_step_outputs):
all_predictions = training_step_outputs.prediction
...
return result
The full pseudocode of what Lightning does under the hood is:
outs = []
for train_batch in train_dataloader:
    batches = split_batch(train_batch)
    dp_outs = []
    for sub_batch in batches:
        # 1: run training_step on each sub-batch (one per GPU)
        dp_out = training_step(sub_batch)
        dp_outs.append(dp_out)

    # 2: aggregate the sub-batch outputs on the master GPU
    out = training_step_end(dp_outs)
    outs.append(out)

# 3: do something with the outputs of all batches
training_epoch_end(outs)
Validation loop¶
To add a validation loop, override the validation_step method of the LightningModule:
class LitModel(pl.LightningModule):
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
result = pl.EvalResult(checkpoint_on=loss)
return result
Under the hood, Lightning does the following:
# ...
for batch in train_dataloader:
    loss = model.training_step(batch)
loss.backward()
# ...
if validate_at_some_point:
# disable grads + batchnorm + dropout
torch.set_grad_enabled(False)
model.eval()
# ----------------- VAL LOOP ---------------
for val_batch in model.val_dataloader:
val_out = model.validation_step(val_batch)
# ----------------- VAL LOOP ---------------
# enable grads + batchnorm + dropout
torch.set_grad_enabled(True)
model.train()
Validation epoch-level metrics¶
If you need to do something with all the outputs of each validation_step, override validation_epoch_end.
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
    result = pl.EvalResult(checkpoint_on=loss)
    result.prediction = some_prediction
    return result
def validation_epoch_end(self, validation_step_outputs):
all_predictions = validation_step_outputs.prediction
...
return result
Validating with DataParallel¶
When training using a distributed_backend that splits data from each batch across GPUs, sometimes you might need to aggregate them on the master GPU for processing (dp, or ddp2).
In this case, implement the validation_step_end method
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.model(x)
loss = F.cross_entropy(y_hat, y)
    result = pl.EvalResult(checkpoint_on=loss)
    result.prediction = some_prediction
    return result
def validation_step_end(self, batch_parts):
gpu_0_prediction = batch_parts.prediction[0]
gpu_1_prediction = batch_parts.prediction[1]
# do something with both outputs
return result
def validation_epoch_end(self, validation_step_outputs):
all_predictions = validation_step_outputs.prediction
...
return result
The full pseudocode of what Lightning does under the hood is:
outs = []
for batch in dataloader:
    batches = split_batch(batch)
    dp_outs = []
    for sub_batch in batches:
        # 1: run validation_step on each sub-batch (one per GPU)
        dp_out = validation_step(sub_batch)
        dp_outs.append(dp_out)

    # 2: aggregate the sub-batch outputs on the master GPU
    out = validation_step_end(dp_outs)
    outs.append(out)

# 3: do something with the outputs of all batches
validation_epoch_end(outs)
Test loop¶
The process for adding a test loop is the same as the process for adding a validation loop. Please refer to the section above for details.
The only difference is that the test loop is only called when .test() is used:
model = Model()
trainer = Trainer()
trainer.fit(model)
# automatically loads the best weights for you
trainer.test(model)
There are two ways to call test():
# call after training
trainer = Trainer()
trainer.fit(model)
# automatically loads the best weights
trainer.test(test_dataloaders=test_dataloader)
# or call with pretrained model
model = MyLightningModule.load_from_checkpoint(PATH)
trainer = Trainer()
trainer.test(model, test_dataloaders=test_dataloader)
LightningModule API¶
Training loop methods¶
training_step¶
pytorch_lightning.core.lightning.LightningModule.training_step(self, *args, **kwargs)
Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.
Parameters
batch (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.
batch_idx (int) – Integer displaying index of this batch.
optimizer_idx (int) – When using multiple optimizers, this argument will also be present.
hiddens (Tensor) – Passed in if truncated_bptt_steps > 0.
Returns
TrainResult

Note
TrainResult is simply a Dict with convenient functions for logging, distributed sync and error checking.
In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:
def training_step(self, batch, batch_idx):
    x, y, z = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, x)

    # TrainResult auto-detaches the loss after the optimization steps are complete
    result = pl.TrainResult(minimize=loss)
    return result
The return object TrainResult controls where to log, when to log (step or epoch) and syncing with multiple GPUs.

# log to progress bar and logger
result.log('train_loss', loss, prog_bar=True, logger=True)

# sync metric value across GPUs in distributed training
result.log('train_loss_2', loss, sync_dist=True)

# log to progress bar as well
result.log('train_loss_2', loss, prog_bar=True)

# assign arbitrary values
result.predictions = predictions
result.some_value = 'some_value'
If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...
If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.
# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    ...
    out, hiddens = self.lstm(data, hiddens)
    ...

    # TrainResult auto-detaches hiddens
    result = pl.TrainResult(minimize=loss, hiddens=hiddens)
    return result
Notes
The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.
training_step_end¶
pytorch_lightning.core.lightning.LightningModule.training_step_end(self, *args, **kwargs)
Use this when training with dp or ddp2 because training_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note
If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [training_step(sub_batch) for sub_batch in sub_batches]
training_step_end(batch_parts_outputs)
Parameters
batch_parts_outputs – What you return in training_step for each batch part.

Returns
TrainResult

Note
TrainResult is simply a Dict with convenient functions for logging, distributed sync and error checking.
When using dp/ddp2 distributed backends, only a portion of the batch is inside the training_step:
def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)

    # softmax uses only a portion of the batch in the denominator
    loss = self.softmax(out)
    loss = nce_loss(loss)
    return pl.TrainResult(loss)
If you wish to do something with all the parts of the batch, then use this method to do it:
def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    result = pl.TrainResult()
    result.out = out
    return result

def training_step_end(self, training_step_outputs):
    # this out is now the full size of the batch
    all_outs = training_step_outputs.out

    # this softmax now uses the full batch
    loss = nce_loss(all_outs)
    result = pl.TrainResult(loss)
    return result
See also
See the Multi-GPU training guide for more details.
training_epoch_end¶
pytorch_lightning.core.lightning.LightningModule.training_epoch_end(self, outputs)
Called at the end of the training epoch with the outputs of all training steps. Use this in case you need to do something with all the outputs for every training_step.

# the pseudocode for these calls
train_outs = []
for train_batch in train_data:
    out = training_step(train_batch)
    train_outs.append(out)
training_epoch_end(train_outs)
Parameters
outputs (Union[TrainResult, List[TrainResult]]) – List of outputs you defined in training_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Returns
TrainResult

Note
TrainResult is simply a Dict with convenient functions for logging, distributed sync and error checking.

Note
If this method is not overridden, this won’t be called.
Example:
def training_epoch_end(self, training_step_outputs):
    # do something with all training_step outputs
    return result
With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each training step for that dataloader.

def training_epoch_end(self, outputs):
    epoch_result = pl.TrainResult()
    for train_result in outputs:
        all_losses = train_result.minimize
        # do something with all losses
    return epoch_result
Validation loop methods¶
validation_step¶
pytorch_lightning.core.lightning.LightningModule.validation_step(self, *args, **kwargs)
Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters
batch (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.
batch_idx (int) – The index of this batch.
dataloader_idx (int) – The index of the dataloader that produced this batch (only if multiple val datasets used).

Return type
EvalResult

Returns

# pseudocode of order
out = validation_step()
if defined('validation_step_end'):
    out = validation_step_end(out)
out = validation_epoch_end(out)

# if you have one val dataloader:
def validation_step(self, batch, batch_idx)

# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx)
Examples
# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    result = pl.EvalResult(checkpoint_on=loss)
    result.log_dict({'val_loss': loss, 'val_acc': val_acc})
    return result
If you pass in multiple val datasets, validation_step will have an additional argument.
# CASE 2: multiple validation datasets
def validation_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...
Note
If you don’t need to validate you don’t need to implement this method.
Note
When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.
validation_step_end¶
pytorch_lightning.core.lightning.LightningModule.validation_step_end(self, *args, **kwargs)
Use this when validating with dp or ddp2 because validation_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note
If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [validation_step(sub_batch) for sub_batch in sub_batches]
validation_step_end(batch_parts_outputs)
Parameters
batch_parts_outputs – What you return in validation_step() for each batch part.

Return type
EvalResult
# WITHOUT validation_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    loss = self.softmax(out)
    loss = nce_loss(loss)
    result = pl.EvalResult()
    result.log('val_loss', loss)
    return result

# --------------
# with validation_step_end to do softmax over the full batch
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    result = pl.EvalResult()
    result.out = out
    return result

def validation_step_end(self, output_results):
    # this out is now the full size of the batch
    all_val_step_outs = output_results.out
    loss = nce_loss(all_val_step_outs)

    result = pl.EvalResult(checkpoint_on=loss)
    result.log('val_loss', loss)
    return result
See also
See the Multi-GPU training guide for more details.
validation_epoch_end¶
pytorch_lightning.core.lightning.LightningModule.validation_epoch_end(self, outputs)
Called at the end of the validation epoch with the outputs of all validation steps.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters
outputs (Union[EvalResult, List[EvalResult]]) – List of outputs you defined in validation_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Return type
EvalResult

Note
If you didn’t define a validation_step(), this won’t be called.

The outputs here are strictly for logging or progress bar. If you don’t need to display anything, don’t return anything.
Examples
With a single dataloader:
def validation_epoch_end(self, val_step_outputs):
    # do something with the outputs of all val batches
    all_val_preds = val_step_outputs.predictions

    val_step_outputs.some_result = calc_all_results(all_val_preds)
    return val_step_outputs
With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.
def validation_epoch_end(self, outputs):
    for dataloader_output_result in outputs:
        dataloader_outs = dataloader_output_result.dataloader_i_outputs

    result = pl.EvalResult()
    result.log('final_metric', final_value)
    return result
Test loop methods¶
test_step¶
pytorch_lightning.core.lightning.LightningModule.test_step(self, *args, **kwargs)
Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters
batch (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.
batch_idx (int) – The index of this batch.
dataloader_idx (int) – The index of the dataloader that produced this batch (only if multiple test datasets used).

Return type
EvalResult

# if you have one test dataloader:
def test_step(self, batch, batch_idx)

# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx)
Examples
# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    result = pl.EvalResult(checkpoint_on=loss)
    result.log_dict({'test_loss': loss, 'test_acc': test_acc})
    return result
If you pass in multiple test datasets, test_step() will have an additional argument.

# CASE 2: multiple test datasets
def test_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...
Note
If you don’t need to test you don’t need to implement this method.
Note
When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.
test_step_end¶
pytorch_lightning.core.lightning.LightningModule.test_step_end(self, *args, **kwargs)
Use this when testing with dp or ddp2 because test_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note
If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [test_step(sub_batch) for sub_batch in sub_batches]
test_step_end(batch_parts_outputs)
Parameters
batch_parts_outputs – What you return in test_step() for each batch part.

Return type
EvalResult
# WITHOUT test_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    loss = self.softmax(out)
    loss = nce_loss(loss)
    result = pl.EvalResult()
    result.log('test_loss', loss)
    return result

# --------------
# with test_step_end to do softmax over the full batch
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    result = pl.EvalResult()
    result.out = out
    return result

def test_step_end(self, output_results):
    # this out is now the full size of the batch
    all_test_step_outs = output_results.out
    loss = nce_loss(all_test_step_outs)

    result = pl.EvalResult(checkpoint_on=loss)
    result.log('test_loss', loss)
    return result
See also
See the Multi-GPU training guide for more details.
test_epoch_end¶
pytorch_lightning.core.lightning.LightningModule.test_epoch_end(self, outputs)
Called at the end of a test epoch with the output of all test steps.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters
outputs (Union[EvalResult, List[EvalResult]]) – List of outputs you defined in test_step_end(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Return type
EvalResult

Note
If you didn’t define a test_step(), this won’t be called.

The outputs here are strictly for logging or progress bar. If you don’t need to display anything, don’t return anything.
Examples
With a single dataloader:
def test_epoch_end(self, outputs):
    # do something with the outputs of all test batches
    all_test_preds = outputs.predictions

    outputs.some_result = calc_all_results(all_test_preds)
    return outputs
With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each test step for that dataloader.
def test_epoch_end(self, outputs):
    for dataloader_output_result in outputs:
        dataloader_outs = dataloader_output_result.dataloader_i_outputs

    result = pl.EvalResult()
    result.log('final_metric', final_value)
    return result
configure_optimizers¶
pytorch_lightning.core.lightning.LightningModule.configure_optimizers(self)
Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple.
Return type
Union[Optimizer, Sequence[Optimizer], Dict, Sequence[Dict], Tuple[List, List], None]

Returns
Any of these 6 options:
Single optimizer.
List or Tuple - List of optimizers.
Two lists - The first list has multiple optimizers, the second a list of LR schedulers (or lr_dict).
Dictionary, with an ‘optimizer’ key, and (optionally) a ‘lr_scheduler’ key whose value is a single LR scheduler or lr_dict.
Tuple of dictionaries as described, with an optional ‘frequency’ key.
None - Fit will run without any optimizer.
Note
The ‘frequency’ value is an int corresponding to the number of sequential batches optimized with the specific optimizer. It should be given to none or to all of the optimizers. There is a difference between passing multiple optimizers in a list, and passing multiple optimizers in dictionaries with a frequency of 1: In the former case, all optimizers will operate on the given batch in each optimization step. In the latter, only one optimizer will operate on the given batch at every step.
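To make the note concrete, here is a short sketch of the two styles (opt_a and opt_b are hypothetical optimizers created in configure_optimizers):

# both optimizers step on every batch
return [opt_a, opt_b]

# the optimizers alternate: one batch with opt_a, then one with opt_b
return (
    {'optimizer': opt_a, 'frequency': 1},
    {'optimizer': opt_b, 'frequency': 1},
)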
The lr_dict is a dictionary which contains the scheduler and its associated configuration. It has five keys. The default configuration is shown below.
{
    'scheduler': lr_scheduler,  # The LR scheduler
    'interval': 'epoch',  # The unit of the scheduler's step size
    'frequency': 1,  # The frequency of the scheduler
    'reduce_on_plateau': False,  # For ReduceLROnPlateau scheduler
    'monitor': 'val_loss'  # Metric to monitor
}
If you only provide LR schedulers, their configuration will be set to the defaults shown above.
Examples
# most cases
def configure_optimizers(self):
    opt = Adam(self.parameters(), lr=1e-3)
    return opt

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
    discriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
    return generator_opt, discriminator_opt

# example with learning rate schedulers
def configure_optimizers(self):
    generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
    discriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
    discriminator_sched = CosineAnnealing(discriminator_opt, T_max=10)
    return [generator_opt, discriminator_opt], [discriminator_sched]

# example with step-based learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_disc.parameters(), lr=0.02)
    gen_sched = {'scheduler': ExponentialLR(gen_opt, 0.99),
                 'interval': 'step'}  # called after each training step
    dis_sched = CosineAnnealing(dis_opt, T_max=10)  # called every epoch
    return [gen_opt, dis_opt], [gen_sched, dis_sched]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_disc.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )
Note
Some things to know:
Lightning calls .backward() and .step() on each optimizer and learning rate scheduler as needed.
If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizers for you.
If you use multiple optimizers, training_step() will have an additional optimizer_idx parameter.
If you use LBFGS, Lightning handles the closure function automatically for you.
If you use multiple optimizers, gradients will be calculated only for the parameters of the current optimizer at each training step.
If you need to control how often those optimizers step or override the default .step() schedule, override the optimizer_step() hook.
If you only want to call a learning rate scheduler every x steps or epochs, or want to monitor a custom metric, you can specify these in a lr_dict:

{
    'scheduler': lr_scheduler,
    'interval': 'step',  # or 'epoch'
    'monitor': 'val_f1',
    'frequency': x,
}
Convenience methods¶
Use these methods for convenience
print¶
pytorch_lightning.core.lightning.LightningModule.print(self, *args, **kwargs)
Prints only from process 0. Use this in any distributed mode to log only once.

Parameters
args – The thing to print. Will be passed to Python's built-in print function.

Example

def forward(self, x):
    self.print(x, 'in forward')

Return type
None
save_hyperparameters¶
pytorch_lightning.core.lightning.LightningModule.save_hyperparameters(self, *args, frame=None)
Save all model arguments.

Parameters
args – single object of dict, NameSpace or OmegaConf, or string names or arguments from class __init__
>>> from collections import OrderedDict
>>> class ManuallyArgsModel(LightningModule):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # manually assign arguments
...         self.save_hyperparameters('arg1', 'arg3')
...     def forward(self, *args, **kwargs):
...         ...
>>> model = ManuallyArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg3": 3.14

>>> class AutomaticArgsModel(LightningModule):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # equivalent automatic
...         self.save_hyperparameters()
...     def forward(self, *args, **kwargs):
...         ...
>>> model = AutomaticArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg2": abc
"arg3": 3.14

>>> class SingleArgModel(LightningModule):
...     def __init__(self, params):
...         super().__init__()
...         # manually assign single argument
...         self.save_hyperparameters(params)
...     def forward(self, *args, **kwargs):
...         ...
>>> model = SingleArgModel(Namespace(p1=1, p2='abc', p3=3.14))
>>> model.hparams
"p1": 1
"p2": abc
"p3": 3.14
Return type
None
Inference methods¶
Use these hooks for inference with a lightning module
forward¶
pytorch_lightning.core.lightning.LightningModule.forward(self, *args, **kwargs)
Same as torch.nn.Module.forward(), however in Lightning you want this to define the operations you want to use for prediction (i.e.: on a server or as a feature extractor).

Normally you’d call self() from your training_step() method. This makes it easy to write a complex system for training with the outputs you’d want in a prediction setting.

You may also find the auto_move_data() decorator useful when using the module outside Lightning in a production setting.

Parameters
*args – Whatever you decide to pass into the forward method.
**kwargs – Keyword arguments are also possible.

Returns
Predicted output
Examples
# example if we were using this model as a feature extractor
def forward(self, x):
    feature_maps = self.convnet(x)
    return feature_maps

def training_step(self, batch, batch_idx):
    x, y = batch
    feature_maps = self(x)
    logits = self.classifier(feature_maps)

    # ...
    return loss

# splitting it this way allows the model to be used as a feature extractor
model = MyModelAbove()

inputs = server.get_request()
results = model(inputs)
server.write_results(results)

# -------------
# This is in stark contrast to torch.nn.Module where normally you would have this:
def forward(self, batch):
    x, y = batch
    feature_maps = self.convnet(x)
    logits = self.classifier(feature_maps)
    return logits
freeze¶
pytorch_lightning.core.lightning.LightningModule.freeze(self)
Freeze all params for inference.

Example

model = MyLightningModule(...)
model.freeze()

Return type
None
to_onnx¶
pytorch_lightning.core.lightning.LightningModule.to_onnx(self, file_path, input_sample=None, **kwargs)
Saves the model in ONNX format.

Parameters
file_path – The path of the file the model should be saved to.
input_sample – A sample of an input tensor for tracing.
**kwargs – Will be passed to the torch.onnx.export function.
Example
>>> class SimpleModel(LightningModule):
...     def __init__(self):
...         super().__init__()
...         self.l1 = torch.nn.Linear(in_features=64, out_features=4)
...
...     def forward(self, x):
...         return torch.relu(self.l1(x.view(x.size(0), -1)))

>>> with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
...     model = SimpleModel()
...     input_sample = torch.randn((1, 64))
...     model.to_onnx(tmpfile.name, input_sample, export_params=True)
...     os.path.isfile(tmpfile.name)
True
Properties¶
These are properties available in a LightningModule.
device¶
The device the module is on. Use it to keep your code device agnostic:
def training_step(...):
z = torch.rand(2, 3, device=self.device)
global_rank¶
The global_rank of this LightningModule. Lightning saves logs, weights, etc. only from global_rank = 0. You normally do not need to use this property.
Global rank refers to the index of that GPU across ALL GPUs. For example, if using 10 machines, each with 4 GPUs, the 4th GPU on the 10th machine has global_rank = 39
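In other words (a sketch of the arithmetic; gpus_per_node is hypothetical shorthand):

# the 10th machine has node_rank 9, its 4th GPU has local_rank 3
global_rank = node_rank * gpus_per_node + local_rank  # 9 * 4 + 3 = 39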
global_step¶
The current step (does not reset each epoch)
def training_step(...):
self.logger.experiment.log_image(..., step=self.global_step)
hparams¶
After calling save_hyperparameters, anything passed to __init__() is available via hparams.
def __init__(self, learning_rate):
    super().__init__()
    self.save_hyperparameters()
def configure_optimizers(self):
return Adam(self.parameters(), lr=self.hparams.learning_rate)
logger¶
The current logger being used (tensorboard or other supported logger)
def training_step(...):
# the generic logger (same no matter if tensorboard or other supported logger)
self.logger
# the particular logger
tensorboard_logger = self.logger.experiment
local_rank¶
The local_rank of this LightningModule. Lightning saves logs, weights, etc. only from global_rank = 0. You normally do not need to use this property.
Local rank refers to the rank on that machine. For example, if using 10 machines, the GPU at index 0 on each machine has local_rank = 0.
trainer¶
Pointer to the trainer
def training_step(...):
max_steps = self.trainer.max_steps
any_flag = self.trainer.any_flag
use_ddp¶
True if using ddp
use_ddp2¶
True if using ddp2
use_dp¶
True if using dp
use_tpu¶
True if using TPUs
Hooks¶
Hook lifecycle pseudocode¶
This is the pseudocode that describes how all the hooks are called during a call to .fit():
def fit(...):
on_fit_start()
if global_rank == 0:
# prepare data is called on GLOBAL_ZERO only
prepare_data()
for gpu/tpu in gpu/tpus:
train_on_device(model.copy())
on_fit_end()
def train_on_device(model):
# setup is called PER DEVICE
setup()
configure_optimizers()
on_pretrain_routine_start()
for epoch in epochs:
train_loop()
teardown()
def train_loop():
on_train_epoch_start()
train_outs = []
for train_batch in train_dataloader():
on_train_batch_start()
# ----- train_step methods -------
out = training_step(batch)
train_outs.append(out)
loss = out.loss
backward()
on_after_backward()
optimizer_step()
on_before_zero_grad()
optimizer_zero_grad()
on_train_batch_end()
if should_check_val:
val_loop()
# end training epoch
    logs = training_epoch_end(train_outs)
def val_loop():
model.eval()
torch.set_grad_enabled(False)
on_validation_epoch_start()
val_outs = []
for val_batch in val_dataloader():
on_validation_batch_start()
# -------- val step methods -------
out = validation_step(val_batch)
val_outs.append(out)
on_validation_batch_end()
validation_epoch_end(val_outs)
on_validation_epoch_end()
# set up for train
model.train()
torch.set_grad_enabled(True)
Advanced hooks¶
Use these hooks to modify advanced functionality
configure_apex¶
pytorch_lightning.core.lightning.LightningModule.configure_apex(self, amp, model, optimizers, amp_level)
Override to init AMP your own way. Must return a model and list of optimizers.
Parameters
amp (object) – pointer to amp library object.
model (LightningModule) – pointer to current LightningModule.
optimizers (List[Optimizer]) – list of optimizers passed in configure_optimizers().
amp_level (str) – AMP mode chosen ('O1', 'O2', etc…)
Return type
Tuple[LightningModule, List[Optimizer]]

Returns
Apex wrapped model and optimizers
Examples
# Default implementation used by Trainer.
def configure_apex(self, amp, model, optimizers, amp_level):
    model, optimizers = amp.initialize(
        model, optimizers, opt_level=amp_level,
    )
    return model, optimizers
configure_ddp¶
pytorch_lightning.core.lightning.LightningModule.configure_ddp(self, model, device_ids)
Override to init DDP in your own way or with your own wrapper. The only requirements are that:

On a validation batch, the call goes to model.validation_step.
On a training batch, the call goes to model.training_step.
On a testing batch, the call goes to model.test_step.
Parameters
model (LightningModule) – the LightningModule currently being optimized.
device_ids (List[int]) – the list of GPU ids.

Return type
DistributedDataParallel

Returns
DDP wrapped model
Examples
# default implementation used in Trainer
def configure_ddp(self, model, device_ids):
    # Lightning DDP simply routes to test_step, val_step, etc...
    model = LightningDistributedDataParallel(
        model,
        device_ids=device_ids,
        find_unused_parameters=True
    )
    return model
configure_sync_batchnorm¶
pytorch_lightning.core.lightning.LightningModule.configure_sync_batchnorm(self, model)
Add global batchnorm for a model spread across multiple GPUs and nodes. Override to synchronize batchnorm between specific process groups instead of the whole world, or to use a different sync_bn implementation (e.g. apex's).

Parameters
model (LightningModule) – pointer to current LightningModule.

Return type
LightningModule

Returns
LightningModule with batchnorm layers synchronized between process groups.
get_progress_bar_dict¶
pytorch_lightning.core.lightning.LightningModule.get_progress_bar_dict(self)
Implement this to override the default items displayed in the progress bar. By default it includes the average loss value, split index of BPTT (if used) and the version of the experiment when using a logger.
Epoch 1: 4%|▎ | 40/1095 [00:03<01:37, 10.84it/s, loss=4.501, v_num=10]
Here is an example of how to override the defaults:
def get_progress_bar_dict(self):
    # don't show the version number
    items = super().get_progress_bar_dict()
    items.pop("v_num", None)
    return items
init_ddp_connection¶
pytorch_lightning.core.lightning.LightningModule.init_ddp_connection(self, global_rank, world_size, is_slurm_managing_tasks=True)
Override to define your custom way of setting up a distributed environment.
Lightning’s implementation uses env:// init by default and sets the first node as root for SLURM managed cluster.
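A sketch of a custom override using the standard torch.distributed API (the backend choice and the MASTER_ADDR/MASTER_PORT environment variables are assumptions):

import torch.distributed as dist

def init_ddp_connection(self, global_rank, world_size, is_slurm_managing_tasks=True):
    # env:// init reads MASTER_ADDR / MASTER_PORT from the environment
    dist.init_process_group(
        'nccl',  # assumed backend; use 'gloo' for CPU-only setups
        rank=global_rank,
        world_size=world_size,
    )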
tbptt_split_batch¶
pytorch_lightning.core.lightning.LightningModule.tbptt_split_batch(self, batch, split_size)
When using truncated backpropagation through time, each batch must be split along the time dimension. Lightning handles this by default, but for custom behavior override this function.

Parameters
batch (Tensor) – Current batch.
split_size (int) – The size of the split.

Return type
list

Returns
List of batch splits. Each split will be passed to training_step() to enable truncated back propagation through time. The default implementation splits root level Tensors and Sequences at dim=1 (i.e. time dim). It assumes that each time dim is the same length.
Examples
def tbptt_split_batch(self, batch, split_size):
    # time_dims holds the length of the time dimension for each element in the batch
    splits = []
    for t in range(0, time_dims[0], split_size):
        batch_split = []
        for i, x in enumerate(batch):
            if isinstance(x, torch.Tensor):
                split_x = x[:, t:t + split_size]
            elif isinstance(x, collections.Sequence):
                split_x = [None] * len(x)
                for batch_idx in range(len(x)):
                    split_x[batch_idx] = x[batch_idx][t:t + split_size]
            batch_split.append(split_x)
        splits.append(batch_split)
    return splits
Note
Called in the training loop after on_batch_start() if truncated_bptt_steps > 0. Each returned batch split is passed separately to training_step().
Checkpoint hooks¶
These hooks allow you to modify checkpoints
on_load_checkpoint¶
pytorch_lightning.core.lightning.LightningModule.on_load_checkpoint(self, checkpoint)
Called by Lightning to restore your model. If you saved something with on_save_checkpoint() this is your chance to restore this.

Example
def on_load_checkpoint(self, checkpoint):
    # 99% of the time you don't need to implement this method
    self.something_cool_i_want_to_save = checkpoint['something_cool_i_want_to_save']
Note
Lightning auto-restores global step, epoch, and train state including amp scaling. There is no need for you to restore anything regarding training.
Return type
None
on_save_checkpoint¶
pytorch_lightning.core.lightning.LightningModule.on_save_checkpoint(self, checkpoint)
Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save.
Example
def on_save_checkpoint(self, checkpoint):
    # 99% of use cases you don't need to implement this method
    checkpoint['something_cool_i_want_to_save'] = my_cool_pickable_object
Note
Lightning saves all aspects of training (epoch, global step, etc…) including amp scaling. There is no need for you to store anything about training.
Return type
None
Data hooks¶
Use these hooks if you want to couple a LightningModule to a dataset.
Note
The same collection of hooks is available in a DataModule class to decouple the data from the model.
train_dataloader¶
pytorch_lightning.core.lightning.LightningModule.train_dataloader(self)
Implement a PyTorch DataLoader for training.

Return type
DataLoader

Returns
Single PyTorch DataLoader.
The dataloader you return will not be called every epoch unless you set reload_dataloaders_every_epoch to True.
.For data processing use the following pattern:
download in prepare_data()
process and split in setup()
However, the above are only necessary for distributed processing.
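A sketch of that split, assuming torchvision's MNIST:

def prepare_data(self):
    # called once; download to disk, do not assign state
    MNIST(os.getcwd(), download=True)

def setup(self, stage):
    # called on every process; safe to assign state here
    dataset = MNIST(os.getcwd(), transform=transforms.ToTensor())
    self.train_set, self.val_set = random_split(dataset, [55000, 5000])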
Warning
Do not assign state in prepare_data; use setup() instead.
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Example
def train_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=True, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=True
    )
    return loader
val_dataloader¶
pytorch_lightning.core.lightning.LightningModule.val_dataloader(self)
Implement one or multiple PyTorch DataLoaders for validation.

The dataloader you return will not be called every epoch unless you set reload_dataloaders_every_epoch to True.

It’s recommended that all data downloads and preparation happen in prepare_data().

Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Return type
Union[DataLoader, List[DataLoader]]

Returns
Single or multiple PyTorch DataLoaders.
Examples
def val_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )
    return loader

# can also return multiple dataloaders
def val_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.

Note
In the case where you return multiple validation dataloaders, the validation_step() will have an argument dataloader_idx which matches the order here.
test_dataloader¶
pytorch_lightning.core.lightning.LightningModule.test_dataloader(self)
Implement one or multiple PyTorch DataLoaders for testing.

The dataloader you return will not be called every epoch unless you set reload_dataloaders_every_epoch to True.

For data processing use the following pattern:

download in prepare_data()
process and split in setup()

However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data; use setup() instead.
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Return type
Union[DataLoader, List[DataLoader]]

Returns
Single or multiple PyTorch DataLoaders.
Example
def test_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )
    return loader

# can also return multiple dataloaders
def test_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]
Note
If you don’t need a test dataset and a test_step(), you don’t need to implement this method.

Note
In the case where you return multiple test dataloaders, the test_step() will have an argument dataloader_idx which matches the order here.
prepare_data¶
pytorch_lightning.core.lightning.LightningModule.prepare_data(self)
Use this to download and prepare data.
Warning
DO NOT set state on the model (use setup instead) since this is NOT called on every GPU in DDP/TPU.
Example:
def prepare_data(self):
    # good
    download_data()
    tokenize()
    etc()

    # bad
    self.split = data_split
    self.some_state = some_other_state()
In DDP prepare_data can be called in two ways (using Trainer(prepare_data_per_node)):
Once per node. This is the default and is only called on LOCAL_RANK=0.
Once in total. Only called on GLOBAL_RANK=0.
Example:
# DEFAULT
# called once per node on LOCAL_RANK=0 of that node
Trainer(prepare_data_per_node=True)

# call on GLOBAL_RANK=0 (great for shared file systems)
Trainer(prepare_data_per_node=False)
This is called before requesting the dataloaders:
model.prepare_data()
if ddp/tpu: init()
model.setup(stage)
model.train_dataloader()
model.val_dataloader()
model.test_dataloader()
Return type
None
Optimization hooks¶
These are hooks related to the optimization procedure.
backward¶
pytorch_lightning.core.lightning.LightningModule.backward(self, trainer, loss, optimizer, optimizer_idx)
Override backward with your own implementation if you need to.

Parameters
trainer – Pointer to the trainer.
loss – Loss is already scaled by accumulated grads.
optimizer – Current optimizer being used.
optimizer_idx – Index of the current optimizer being used.

Called to perform backward step. Feel free to override as needed. The loss passed in has already been scaled for accumulated gradients if requested.
Example:
def backward(self, trainer, loss, optimizer, optimizer_idx):
    loss.backward()
Return type
None
on_after_backward¶
pytorch_lightning.core.lightning.LightningModule.on_after_backward(self)
Called in the training loop after loss.backward() and before optimizers do anything. This is the ideal place to inspect or log gradient information.
Example:
def on_after_backward(self):
    # example to inspect gradient information in tensorboard
    if self.trainer.global_step % 25 == 0:  # don't make the tf file huge
        for name, param in self.named_parameters():
            if param.grad is not None:
                self.logger.experiment.add_histogram(
                    tag=name,
                    values=param.grad,
                    global_step=self.trainer.global_step
                )
Return type
None
on_before_zero_grad¶
pytorch_lightning.core.lightning.LightningModule.on_before_zero_grad(self, optimizer)
Called after optimizer.step() and before optimizer.zero_grad().
Called in the training loop after taking an optimizer step and before zeroing grads. Good place to inspect weight information with weights updated.
This is where it is called:
for optimizer in optimizers:
    optimizer.step()
    model.on_before_zero_grad(optimizer)  # <-- called here
    optimizer.zero_grad()
optimizer_step¶
pytorch_lightning.core.lightning.LightningModule.optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False)
Override this method to adjust the default way the Trainer calls each optimizer. By default, Lightning calls step() and zero_grad() as shown in the example once per optimizer.

Parameters
epoch (int) – Current epoch.
batch_idx (int) – Index of current batch.
optimizer (Optimizer) – A PyTorch optimizer.
optimizer_idx (int) – If you used multiple optimizers, this indexes into that list.
second_order_closure (Optional[Callable]) – Closure for second order methods.
on_tpu (bool) – True if TPU backward is required.
using_native_amp (bool) – True if using native amp.
using_lbfgs (bool) – True if the matching optimizer is LBFGS.
Examples
# DEFAULT
def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx,
                   second_order_closure, on_tpu, using_native_amp, using_lbfgs):
    optimizer.step()

# Alternating schedule for optimizer steps (i.e.: GANs)
def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx,
                   second_order_closure, on_tpu, using_native_amp, using_lbfgs):
    # update generator opt every 2 steps
    if optimizer_idx == 0:
        if batch_idx % 2 == 0:
            optimizer.step()
            optimizer.zero_grad()

    # update discriminator opt every 4 steps
    if optimizer_idx == 1:
        if batch_idx % 4 == 0:
            optimizer.step()
            optimizer.zero_grad()

    # ...
    # add as many optimizers as you want
Here’s another example showing how to use this for more advanced things such as learning rate warm-up:
# learning rate warm-up
def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx,
                   second_order_closure, on_tpu, using_native_amp, using_lbfgs):
    # warm up lr
    if self.trainer.global_step < 500:
        lr_scale = min(1., float(self.trainer.global_step + 1) / 500.)
        for pg in optimizer.param_groups:
            pg['lr'] = lr_scale * self.learning_rate

    # update params
    optimizer.step()
    optimizer.zero_grad()
Note
If you also override the on_before_zero_grad() model hook, don’t forget to add the call to it before optimizer.zero_grad() yourself.

Return type
None
Training lifecycle hooks¶
These hooks are called during training
on_fit_start¶
pytorch_lightning.core.hooks.ModelHooks.on_fit_start(self)
Called at the very beginning of fit. If on DDP it is called on every process.
on_fit_end¶
pytorch_lightning.core.hooks.ModelHooks.on_fit_end(self)
Called at the very end of fit. If on DDP it is called on every process.
on_pretrain_routine_start¶
on_pretrain_routine_end¶
on_test_epoch_start¶
on_test_epoch_end¶
on_test_batch_start¶
pytorch_lightning.core.hooks.ModelHooks.on_test_batch_start(self, batch, batch_idx, dataloader_idx)
Called in the test loop before anything happens for that batch.
on_test_batch_end¶
pytorch_lightning.core.hooks.ModelHooks.on_test_batch_end(self, batch, batch_idx, dataloader_idx)
Called in the test loop after the batch.
on_train_batch_start¶
pytorch_lightning.core.hooks.ModelHooks.on_train_batch_start(self, batch, batch_idx, dataloader_idx)
Called in the training loop before anything happens for that batch.
If you return -1 here, you will skip training for the rest of the current epoch.
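For example, a hedged sketch that bails out of an epoch early (the self.should_skip flag is hypothetical):

def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
    # returning -1 skips training for the rest of the current epoch
    if self.should_skip:
        return -1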
on_train_batch_end¶
pytorch_lightning.core.hooks.ModelHooks.on_train_batch_end(self, batch, batch_idx, dataloader_idx)
Called in the training loop after the batch.
on_train_epoch_start¶
on_train_epoch_end¶
on_validation_batch_start¶
pytorch_lightning.core.hooks.ModelHooks.on_validation_batch_start(self, batch, batch_idx, dataloader_idx)
Called in the validation loop before anything happens for that batch.
on_validation_batch_end¶
pytorch_lightning.core.hooks.ModelHooks.on_validation_batch_end(self, batch, batch_idx, dataloader_idx)
Called in the validation loop after the batch.
on_validation_epoch_start¶
on_validation_epoch_end¶
setup¶
pytorch_lightning.core.hooks.ModelHooks.setup(self, stage)
Called at the beginning of fit and test. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
Example:
class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't assign state here
        # self.something = some_state

    def setup(self, stage):
        data = Load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)
teardown¶
transfer_batch_to_device¶
pytorch_lightning.core.hooks.ModelHooks.transfer_batch_to_device(self, batch, device)
Override this hook if your DataLoader returns tensors wrapped in a custom data structure.

The data types listed below (and any arbitrary nesting of them) are supported out of the box:

torch.Tensor or anything that implements .to(…)
list
dict
tuple
torchtext.data.batch.Batch

For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).
Example:
def transfer_batch_to_device(self, batch, device):
    if isinstance(batch, CustomBatch):
        # move all tensors in your custom data structure to the device
        batch.samples = batch.samples.to(device)
        batch.targets = batch.targets.to(device)
    else:
        batch = super().transfer_batch_to_device(batch, device)
    return batch
Parameters
batch (Any) – A batch of data that needs to be transferred to a new device.
device (device) – The target device as defined in hardware settings.

Return type
Any

Returns
A reference to the data on the new device.
Note
This hook should only transfer the data and not modify it, nor should it move the data to any other device than the one passed in as argument (unless you know what you are doing).
Note
This hook only runs on single GPU training (no data-parallel). If you need multi-GPU support for your custom batch objects, you need to define your custom DistributedDataParallel or LightningDistributedDataParallel and override configure_ddp().