lightning¶
Classes
nn.Module with additional great features.
-
class pytorch_lightning.core.lightning.LightningModule(*args, **kwargs)[source]¶
Bases: abc.ABC, pytorch_lightning.utilities.device_dtype_mixin.DeviceDtypeModuleMixin, pytorch_lightning.core.grads.GradInformation, pytorch_lightning.core.saving.ModelIO, pytorch_lightning.core.hooks.ModelHooks, pytorch_lightning.core.hooks.DataHooks, pytorch_lightning.core.hooks.CheckpointHooks, torch.nn.Module
-
all_gather(tensor, group=None, sync_grads=False)[source]¶
Allows users to call self.all_gather() from the LightningModule, thus making the all_gather operation accelerator agnostic. all_gather is a function provided by accelerators to gather a tensor from several distributed processes.
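A minimal sketch of how this might be used from a step hook (the metric, shapes, and the mean reduction are illustrative assumptions, not part of the documented API):

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss(self(x), y)
        # gather the per-process scalar loss from every distributed process;
        # with N processes a 0-d tensor becomes a tensor of shape (N,)
        gathered_loss = self.all_gather(loss)
        self.log('val_loss_mean', gathered_loss.mean())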
-
backward(loss, optimizer, optimizer_idx, *args, **kwargs)[source]¶
Override backward with your own implementation if you need to. Called to perform the backward step; feel free to override as needed. The loss passed in has already been scaled for accumulated gradients if requested.
Example:
def backward(self, loss, optimizer, optimizer_idx):
    loss.backward()
- Return type
None
-
configure_optimizers()[source]¶
Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple.
- Returns
Any of these 6 options.
Single optimizer.
List or Tuple - List of optimizers.
Two lists - The first list has multiple optimizers, the second a list of LR schedulers (or lr_dict).
Dictionary, with an ‘optimizer’ key, and (optionally) a ‘lr_scheduler’ key whose value is a single LR scheduler or lr_dict.
Tuple of dictionaries as described, with an optional ‘frequency’ key.
None - Fit will run without any optimizer.
Note
The ‘frequency’ value is an int corresponding to the number of sequential batches optimized with the specific optimizer. It should be given to none or to all of the optimizers. There is a difference between passing multiple optimizers in a list, and passing multiple optimizers in dictionaries with a frequency of 1: In the former case, all optimizers will operate on the given batch in each optimization step. In the latter, only one optimizer will operate on the given batch at every step.
The lr_dict is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.
{
    'scheduler': lr_scheduler,  # The LR scheduler instance (required)
    'interval': 'epoch',        # The unit of the scheduler's step size
    'frequency': 1,             # The frequency of the scheduler
    'reduce_on_plateau': False, # For ReduceLROnPlateau scheduler
    'monitor': 'val_loss',      # Metric for ReduceLROnPlateau to monitor
    'strict': True,             # Whether to crash the training if `monitor` is not found
    'name': None,               # Custom name for LearningRateMonitor to use
}
Only the scheduler key is required; the rest will be set to the defaults above.
Examples
# most cases
def configure_optimizers(self):
    opt = Adam(self.parameters(), lr=1e-3)
    return opt

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
    discriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
    return generator_opt, discriminator_opt

# example with learning rate schedulers
def configure_optimizers(self):
    generator_opt = Adam(self.model_gen.parameters(), lr=0.01)
    discriminator_opt = Adam(self.model_disc.parameters(), lr=0.02)
    discriminator_sched = CosineAnnealing(discriminator_opt, T_max=10)
    return [generator_opt, discriminator_opt], [discriminator_sched]

# example with step-based learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_disc.parameters(), lr=0.02)
    gen_sched = {'scheduler': ExponentialLR(gen_opt, 0.99),
                 'interval': 'step'}  # called after each training step
    dis_sched = CosineAnnealing(dis_opt, T_max=10)  # called every epoch
    return [gen_opt, dis_opt], [gen_sched, dis_sched]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_disc.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )
Note
Some things to know:
Lightning calls .backward() and .step() on each optimizer and learning rate scheduler as needed.
If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizers for you.
If you use multiple optimizers, training_step() will have an additional optimizer_idx parameter.
If you use LBFGS, Lightning handles the closure function automatically for you.
If you use multiple optimizers, gradients will be calculated only for the parameters of the current optimizer at each training step.
If you need to control how often those optimizers step or override the default .step() schedule, override the optimizer_step() hook.
If you only want to call a learning rate scheduler every x steps or epochs, or want to monitor a custom metric, you can specify these in a lr_dict:
{
    'scheduler': lr_scheduler,
    'interval': 'step',  # or 'epoch'
    'monitor': 'val_f1',
    'frequency': x,
}
-
forward(*args, **kwargs)[source]¶
Same as torch.nn.Module.forward(), however in Lightning you want this to define the operations you want to use for prediction (i.e.: on a server or as a feature extractor).
Normally you’d call self() from your training_step() method. This makes it easy to write a complex system for training with the outputs you’d want in a prediction setting.
You may also find the auto_move_data() decorator useful when using the module outside Lightning in a production setting.
- Returns
Predicted output
Examples
# example if we were using this model as a feature extractor
def forward(self, x):
    feature_maps = self.convnet(x)
    return feature_maps

def training_step(self, batch, batch_idx):
    x, y = batch
    feature_maps = self(x)
    logits = self.classifier(feature_maps)
    # ...
    return loss

# splitting it this way allows the model to be used as a feature extractor
model = MyModelAbove()
inputs = server.get_request()
results = model(inputs)
server.write_results(results)

# -------------
# This is in stark contrast to torch.nn.Module where normally you would have this:
def forward(self, batch):
    x, y = batch
    feature_maps = self.convnet(x)
    logits = self.classifier(feature_maps)
    return logits
-
freeze()[source]¶
Freeze all params for inference.
Example
model = MyLightningModule(...)
model.freeze()
- Return type
None
-
get_progress_bar_dict()[source]¶
Implement this to override the default items displayed in the progress bar. By default it includes the average loss value, split index of BPTT (if used) and the version of the experiment when using a logger.
Epoch 1: 4%|▎ | 40/1095 [00:03<01:37, 10.84it/s, loss=4.501, v_num=10]
Here is an example of how to override the defaults:
def get_progress_bar_dict(self):
    # don't show the version number
    items = super().get_progress_bar_dict()
    items.pop("v_num", None)
    return items
-
log(name, value, prog_bar=False, logger=True, on_step=None, on_epoch=None, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]¶
Log a key, value pair.
Example:
self.log('train_loss', loss)
The default behavior per hook is as follows (* also applies to the test loop):

LightningModule Hook     on_step   on_epoch   prog_bar   logger
training_step            T         F          F          T
training_step_end        T         F          F          T
training_epoch_end       F         T          F          T
validation_step*         F         T          F          T
validation_step_end*     F         T          F          T
validation_epoch_end*    F         T          F          T
- Parameters
on_step¶ (Optional[bool]) – if True logs at this step. None auto-logs at the training_step but not validation/test_step
on_epoch¶ (Optional[bool]) – if True logs epoch accumulated metrics. None auto-logs at the val/test step but not training_step
reduce_fx¶ (Callable) – reduction function over step values for end of epoch. torch.mean by default
tbptt_reduce_fx¶ (Callable) – function to reduce on truncated back prop
enable_graph¶ (bool) – if True, will not auto detach the graph
sync_dist¶ (bool) – if True, reduces the metric across GPUs/TPUs
sync_dist_op¶ (Union[Any, str]) – the op to sync across GPUs/TPUs
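As an illustrative sketch (the metric name and the chosen option values are arbitrary assumptions), the keyword arguments above can be combined to control where and how a value is reported:

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss(self(x), y)
        # show on the progress bar, log both per step and per epoch,
        # and sync-reduce the value across processes when running distributed
        self.log('train_loss', loss, prog_bar=True, on_step=True,
                 on_epoch=True, sync_dist=True)
        return loss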
-
log_dict(dictionary, prog_bar=False, logger=True, on_step=None, on_epoch=None, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]¶
Log a dictionary of values at once.
Example:
values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
self.log_dict(values)
- Parameters
on_step¶ (Optional[bool]) – if True logs at this step. None auto-logs for training_step but not validation/test_step
on_epoch¶ (Optional[bool]) – if True logs epoch accumulated metrics. None auto-logs for val/test step but not training_step
reduce_fx¶ (Callable) – reduction function over step values for end of epoch. torch.mean by default
tbptt_reduce_fx¶ (Callable) – function to reduce on truncated back prop
enable_graph¶ (bool) – if True, will not auto detach the graph
sync_dist¶ (bool) – if True, reduces the metric across GPUs/TPUs
sync_dist_op¶ (Union[Any, str]) – the op to sync across GPUs/TPUs
-
manual_backward(loss, optimizer, *args, **kwargs)[source]¶
Call this directly from your training_step when doing optimizations manually. By using this, we can ensure that all the proper scaling when using 16-bit precision, etc., has been done for you.
This function forwards all args to the .backward() call as well.
Tip
In manual mode we still automatically clip grads if Trainer(gradient_clip_val=x) is set
Tip
In manual mode we still automatically accumulate grad over batches if Trainer(accumulate_grad_batches=x) is set and you use optimizer.step()
Example:
def training_step(...):
    (opt_a, opt_b) = self.optimizers()
    loss = ...
    # automatically applies scaling, etc...
    self.manual_backward(loss, opt_a)
    opt_a.step()
- Return type
None
-
optimizer_step(epoch=None, batch_idx=None, optimizer=None, optimizer_idx=None, optimizer_closure=None, on_tpu=None, using_native_amp=None, using_lbfgs=None)[source]¶
Override this method to adjust the default way the Trainer calls each optimizer. By default, Lightning calls step() and zero_grad() as shown in the example once per optimizer.
Tip
With Trainer(enable_pl_optimizer=True), you can use optimizer.step() directly and it will handle zero_grad, accumulated gradients, AMP, TPU and more automatically for you.
Warning
If you are overriding this method, make sure that you pass the optimizer_closure parameter to the optimizer.step() function as shown in the examples. This ensures that train_step_and_backward_closure is called within run_training_batch().
Examples
# DEFAULT
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    optimizer.step(closure=optimizer_closure)

# Alternating schedule for optimizer steps (i.e.: GANs)
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    # update generator opt every 2 steps
    if optimizer_idx == 0:
        if batch_idx % 2 == 0:
            optimizer.step(closure=optimizer_closure)
            optimizer.zero_grad()

    # update discriminator opt every 4 steps
    if optimizer_idx == 1:
        if batch_idx % 4 == 0:
            optimizer.step(closure=optimizer_closure)
            optimizer.zero_grad()

    # ...
    # add as many optimizers as you want
Here’s another example showing how to use this for more advanced things such as learning rate warm-up:
# learning rate warm-up
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    # warm up lr
    if self.trainer.global_step < 500:
        lr_scale = min(1., float(self.trainer.global_step + 1) / 500.)
        for pg in optimizer.param_groups:
            pg['lr'] = lr_scale * self.learning_rate

    # update params
    optimizer.step(closure=optimizer_closure)
    optimizer.zero_grad()
- Return type
None
-
print(*args, **kwargs)[source]¶
Prints only from process 0. Use this in any distributed mode to log only once.
Example
def forward(self, x):
    self.print(x, 'in forward')
- Return type
None
-
save_hyperparameters(*args, frame=None)[source]¶
Save all model arguments.
- Parameters
args¶ – a single object of dict, Namespace or OmegaConf, or string names of arguments from the class __init__
>>> from collections import OrderedDict
>>> class ManuallyArgsModel(LightningModule):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # manually assign arguments
...         self.save_hyperparameters('arg1', 'arg3')
...     def forward(self, *args, **kwargs):
...         ...
>>> model = ManuallyArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg3": 3.14
>>> class AutomaticArgsModel(LightningModule):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # equivalent automatic
...         self.save_hyperparameters()
...     def forward(self, *args, **kwargs):
...         ...
>>> model = AutomaticArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg2": abc
"arg3": 3.14
>>> class SingleArgModel(LightningModule):
...     def __init__(self, params):
...         super().__init__()
...         # manually assign single argument
...         self.save_hyperparameters(params)
...     def forward(self, *args, **kwargs):
...         ...
>>> model = SingleArgModel(Namespace(p1=1, p2='abc', p3=3.14))
>>> model.hparams
"p1": 1
"p2": abc
"p3": 3.14
- Return type
None
-
tbptt_split_batch(batch, split_size)[source]¶
When using truncated backpropagation through time, each batch must be split along the time dimension. Lightning handles this by default, but for custom behavior override this function.
- Returns
List of batch splits. Each split will be passed to training_step() to enable truncated back propagation through time. The default implementation splits root-level Tensors and Sequences at dim=1 (i.e. the time dim). It assumes that each time dim is the same length.
Examples
def tbptt_split_batch(self, batch, split_size):
    # length of the time dimension (dim=1) for each element of the batch
    time_dims = [len(x[0]) for x in batch if isinstance(x, (torch.Tensor, collections.Sequence))]
    splits = []
    for t in range(0, time_dims[0], split_size):
        batch_split = []
        for i, x in enumerate(batch):
            if isinstance(x, torch.Tensor):
                split_x = x[:, t:t + split_size]
            elif isinstance(x, collections.Sequence):
                split_x = [None] * len(x)
                for batch_idx in range(len(x)):
                    split_x[batch_idx] = x[batch_idx][t:t + split_size]
            batch_split.append(split_x)
        splits.append(batch_split)
    return splits
Note
Called in the training loop after on_batch_start() if truncated_bptt_steps > 0. Each returned batch split is passed separately to training_step().
-
test_epoch_end(outputs)[source]¶
Called at the end of a test epoch with the output of all test steps.
# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
- Parameters
outputs¶ (List[Any]) – List of outputs you defined in test_step_end(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader
- Return type
None
- Returns
None
Note
If you didn’t define a test_step(), this won’t be called.
Examples
With a single dataloader:
def test_epoch_end(self, outputs):
    # do something with the outputs of all test batches
    all_test_preds = [out['preds'] for out in outputs]  # assuming test_step returned {'preds': ...}
    some_result = calc_all_results(all_test_preds)
    self.log('some_result', some_result)
With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each test step for that dataloader.
def test_epoch_end(self, outputs):
    final_value = 0
    for dataloader_outputs in outputs:
        for test_step_out in dataloader_outputs:
            # do something
            final_value += test_step_out

    self.log('final_metric', final_value)
-
test_step(*args, **kwargs)[source]¶
Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.
# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
- Returns
Any of:
Any object or value
None - Testing will skip to the next batch
# if you have one test dataloader:
def test_step(self, batch, batch_idx)

# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx)
Examples
# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})
If you pass in multiple test dataloaders, test_step() will have an additional argument.
# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...
Note
If you don’t need to test you don’t need to implement this method.
Note
When test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.
-
test_step_end(*args, **kwargs)[source]¶
Use this when testing with dp or ddp2 because test_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.
Note
If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.
# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [test_step(sub_batch) for sub_batch in sub_batches]
test_step_end(batch_parts_outputs)
- Parameters
batch_parts_outputs¶ – What you return in test_step() for each batch part.
- Returns
None or anything
# WITHOUT test_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    loss = self.softmax(out)
    self.log('test_loss', loss)

# --------------
# with test_step_end to do softmax over the full batch
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    return out

def test_step_end(self, output_results):
    # this out is now the full size of the batch
    all_test_step_outs = output_results.out
    loss = nce_loss(all_test_step_outs)
    self.log('test_loss', loss)
See also
See the Multi-GPU training guide for more details.
-
to_onnx(file_path, input_sample=None, **kwargs)[source]¶
Saves the model in ONNX format.
Example
>>> class SimpleModel(LightningModule):
...     def __init__(self):
...         super().__init__()
...         self.l1 = torch.nn.Linear(in_features=64, out_features=4)
...
...     def forward(self, x):
...         return torch.relu(self.l1(x.view(x.size(0), -1)))
>>> with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
...     model = SimpleModel()
...     input_sample = torch.randn((1, 64))
...     model.to_onnx(tmpfile.name, input_sample, export_params=True)
...     os.path.isfile(tmpfile.name)
True
-
to_torchscript(file_path=None, method='script', example_inputs=None, **kwargs)[source]¶
By default compiles the whole model to a ScriptModule. If you want to use tracing, please provide the argument method='trace' and make sure that either the example_inputs argument is provided, or the model has self.example_input_array set. If you would like to customize the modules that are scripted, you should override this method. In case you want to return multiple modules, we recommend using a dictionary.
- Parameters
file_path¶ (Union[str, Path, None]) – Path where to save the torchscript. Default: None (no file saved).
method¶ (Optional[str]) – Whether to use TorchScript’s script or trace method. Default: 'script'
example_inputs¶ (Optional[Any]) – An input to be used to do tracing when method is set to 'trace'. Default: None (uses self.example_input_array)
**kwargs¶ – Additional arguments that will be passed to the torch.jit.script() or torch.jit.trace() function.
Example
>>> class SimpleModel(LightningModule):
...     def __init__(self):
...         super().__init__()
...         self.l1 = torch.nn.Linear(in_features=64, out_features=4)
...
...     def forward(self, x):
...         return torch.relu(self.l1(x.view(x.size(0), -1)))
...
>>> model = SimpleModel()
>>> torch.jit.save(model.to_torchscript(), "model.pt")
>>> os.path.isfile("model.pt")
True
>>> script = model.to_torchscript(file_path="model_trace.pt", method='trace',
...                               example_inputs=torch.randn(1, 64))
>>> os.path.isfile("model_trace.pt")
True
-
toggle_optimizer(optimizer, optimizer_idx)[source]¶
Makes sure only the gradients of the current optimizer’s parameters are calculated in the training step, to prevent dangling gradients in a multiple-optimizer setup.
Note
Only called when using multiple optimizers
Override for your own behavior
It works with untoggle_optimizer to make sure param_requires_grad_state is properly reset.
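A hedged sketch of what an override could look like; the bookkeeping below is simplified and the _requires_grad_backup attribute is an assumption of this sketch, not Lightning’s internal state:

    def toggle_optimizer(self, optimizer, optimizer_idx):
        # remember every parameter's current requires_grad flag so that a
        # matching untoggle_optimizer override can restore it later
        self._requires_grad_backup = {}
        current_ids = {id(p) for group in optimizer.param_groups for p in group['params']}
        for param in self.parameters():
            self._requires_grad_backup[param] = param.requires_grad
            # freeze everything that does not belong to the optimizer in use
            if id(param) not in current_ids:
                param.requires_grad = False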
-
training_epoch_end(outputs)[source]¶
Called at the end of the training epoch with the outputs of all training steps. Use this in case you need to do something with all the outputs for every training_step.
# the pseudocode for these calls
train_outs = []
for train_batch in train_data:
    out = training_step(train_batch)
    train_outs.append(out)
training_epoch_end(train_outs)
- Parameters
outputs¶ (List[Any]) – List of outputs you defined in training_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.
- Return type
None
- Returns
None
Note
If this method is not overridden, this won’t be called.
Example:
def training_epoch_end(self, training_step_outputs):
    # do something with all training_step outputs
    return result
With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each training step for that dataloader.
def training_epoch_end(self, training_step_outputs):
    for out in training_step_outputs:
        # do something here
        ...
-
training_step(*args, **kwargs)[source]¶
Here you compute and return the training loss and some additional metrics for e.g. the progress bar or logger.
- Parameters
batch¶ (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.
optimizer_idx¶ (int) – When using multiple optimizers, this argument will also be present.
hiddens¶ (Tensor) – Passed in if truncated_bptt_steps > 0.
- Returns
Any of:
Tensor - The loss tensor
dict - A dictionary. Can include any keys, but must include the key 'loss'
None - Training will skip to the next batch
Note
Returning None is currently not supported for multi-GPU or TPU, or with 16-bit precision enabled.
In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.
Example:
def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss
If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.
# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...
If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.
# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    ...
    out, hiddens = self.lstm(data, hiddens)
    ...
    return {'loss': loss, 'hiddens': hiddens}
Note
The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.
-
training_step_end(*args, **kwargs)[source]¶
Use this when training with dp or ddp2 because training_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.
Note
If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.
# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [training_step(sub_batch) for sub_batch in sub_batches]
training_step_end(batch_parts_outputs)
- Parameters
batch_parts_outputs¶ – What you return in training_step for each batch part.
- Returns
Anything
When using dp/ddp2 distributed backends, only a portion of the batch is inside the training_step:
def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)

    # softmax uses only a portion of the batch in the denominator
    loss = self.softmax(out)
    loss = nce_loss(loss)
    return loss
If you wish to do something with all the parts of the batch, then use this method to do it:
def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    return {'pred': out}

def training_step_end(self, training_step_outputs):
    gpu_0_pred = training_step_outputs[0]['pred']
    gpu_1_pred = training_step_outputs[1]['pred']
    gpu_n_pred = training_step_outputs[n]['pred']

    # this softmax now uses the full batch
    loss = nce_loss([gpu_0_pred, gpu_1_pred, gpu_n_pred])
    return loss
See also
See the Multi-GPU training guide for more details.
-
unfreeze()[source]¶
Unfreeze all parameters for training.
model = MyLightningModule(...)
model.unfreeze()
- Return type
None
-
untoggle_optimizer(optimizer_idx)[source]¶
Works with toggle_optimizer() to make sure param_requires_grad_state is properly reset.
Note
Only called when using multiple optimizers
Override for your own behavior
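A matching hedged sketch that restores the flags recorded by the toggle_optimizer sketch above (the _requires_grad_backup attribute is an assumption of these sketches, not Lightning’s internal state):

    def untoggle_optimizer(self, optimizer_idx):
        # re-enable gradients exactly as they were before toggle_optimizer ran
        for param, requires_grad in getattr(self, '_requires_grad_backup', {}).items():
            param.requires_grad = requires_grad
        self._requires_grad_backup = {}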
-
validation_epoch_end(outputs)[source]¶
Called at the end of the validation epoch with the outputs of all validation steps.
# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
- Parameters
outputs¶ (List[Any]) – List of outputs you defined in validation_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.
- Return type
None
- Returns
None
Note
If you didn’t define a validation_step(), this won’t be called.
Examples
With a single dataloader:
def validation_epoch_end(self, val_step_outputs):
    for out in val_step_outputs:
        # do something
        ...
With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.
def validation_epoch_end(self, outputs):
    final_value = 0
    for dataloader_outputs in outputs:
        for val_step_out in dataloader_outputs:
            # do something with each validation_step output
            final_value += val_step_out

    self.log('final_metric', final_value)
-
validation_step(*args, **kwargs)[source]¶
Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.
# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
- Returns
Any of:
Any object or value
None - Validation will skip to the next batch
# pseudocode of order
out = validation_step()
if defined('validation_step_end'):
    out = validation_step_end(out)
out = validation_epoch_end(out)

# if you have one val dataloader:
def validation_step(self, batch, batch_idx)

# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx)
Examples
# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})
If you pass in multiple val dataloaders, validation_step() will have an additional argument.
# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...
Note
If you don’t need to validate you don’t need to implement this method.
Note
When validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.
-
validation_step_end(*args, **kwargs)[source]¶
Use this when validating with dp or ddp2 because validation_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.
Note
If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.
# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [validation_step(sub_batch) for sub_batch in sub_batches]
validation_step_end(batch_parts_outputs)
- Parameters
batch_parts_outputs¶ – What you return in validation_step() for each batch part.
- Returns
None or anything
# WITHOUT validation_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    loss = self.softmax(out)
    loss = nce_loss(loss)
    self.log('val_loss', loss)

# --------------
# with validation_step_end to do softmax over the full batch
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    return out

def validation_step_end(self, val_step_outputs):
    for out in val_step_outputs:
        # do something with these
        ...
See also
See the Multi-GPU training guide for more details.
-
property automatic_optimization¶
If False, you are responsible for calling .backward(), .step(), and .zero_grad().
- Return type
bool
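A minimal sketch of manual optimization; depending on your Lightning version this is switched off either by overriding the property as below or via a Trainer flag, and the loss/model internals are placeholders:

    class ManualOptModel(LightningModule):

        @property
        def automatic_optimization(self) -> bool:
            return False

        def training_step(self, batch, batch_idx):
            opt = self.optimizers()
            x, y = batch
            loss = self.loss(self(x), y)
            # manual_backward applies the same scaling (16-bit, etc.) as automatic mode
            self.manual_backward(loss, opt)
            opt.step()
            opt.zero_grad()
            return loss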
-
logger = None¶
Pointer to the logger object.
-
property on_gpu¶
True if your model is currently running on GPUs. Useful to set flags around the LightningModule for different CPU vs GPU behavior.
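A small illustrative sketch (the half-precision cast is an arbitrary assumption) of branching on this flag:

    def training_step(self, batch, batch_idx):
        x, y = batch
        if self.on_gpu:
            # e.g. use half-precision inputs only when running on GPUs
            x = x.half()
        loss = self.loss(self(x), y)
        return loss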
-
precision = None¶
The precision used.
-
trainer = None¶
Pointer to the trainer object.
-
use_amp = None¶
True if using amp.
-
use_ddp = None¶
True if using ddp.
-
use_ddp2 = None¶
True if using ddp2.
-
use_dp = None¶
True if using dp.
-