NativeMixedPrecisionPlugin

class pytorch_lightning.plugins.precision.NativeMixedPrecisionPlugin[source]

Bases: pytorch_lightning.plugins.precision.mixed.MixedPrecisionPlugin

Plugin for native mixed precision training with torch.cuda.amp.
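In practice this plugin is usually selected through the Trainer rather than constructed by hand. A minimal usage sketch, assuming a CUDA-capable GPU (the exact Trainer flags depend on the Lightning version):

    import pytorch_lightning as pl
    from pytorch_lightning.plugins.precision import NativeMixedPrecisionPlugin

    # Implicit selection: precision=16 on a GPU makes Lightning
    # instantiate this plugin automatically.
    trainer = pl.Trainer(gpus=1, precision=16)

    # Explicit selection, equivalent to the above.
    trainer = pl.Trainer(gpus=1, plugins=[NativeMixedPrecisionPlugin()])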

backward(model, closure_loss, optimizer, opt_idx, should_accumulate, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

  • optimizer (Optimizer) – the optimizer that will perform the step later on

  • opt_idx (int) – the optimizer’s index

  • should_accumulate (bool) – whether to accumulate gradients or not

Return type

Tensor
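The core of this hook is loss scaling via torch.cuda.amp.GradScaler: the loss is multiplied by a scale factor before backpropagation so that small fp16 gradients do not underflow. A simplified, illustrative sketch (the real hook also delegates to the LightningModule's backward and respects should_accumulate):

    import torch

    scaler = torch.cuda.amp.GradScaler()

    def backward_sketch(closure_loss: torch.Tensor) -> torch.Tensor:
        # Scale the loss so small fp16 gradients survive underflow,
        # then backpropagate through the scaled graph.
        scaled_loss = scaler.scale(closure_loss)
        scaled_loss.backward()
        return scaled_loss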

pre_optimizer_step(pl_module, optimizer, optimizer_idx, lambda_closure, **kwargs)[source]

Always called before the optimizer step. Checks that the optimizer is not LBFGS, as it is not supported by native AMP.

Return type

bool
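A hedged sketch of the guard this hook performs: GradScaler cannot drive closure-based optimizers such as LBFGS, so the plugin rejects them up front. The exact exception type and message here are illustrative:

    import torch
    from pytorch_lightning.utilities.exceptions import MisconfigurationException

    def check_optimizer(optimizer: torch.optim.Optimizer) -> None:
        # Native AMP's GradScaler cannot re-evaluate the closure the way
        # LBFGS requires, so reject it before stepping.
        if isinstance(optimizer, torch.optim.LBFGS):
            raise MisconfigurationException(
                "Native AMP is not supported with the LBFGS optimizer."
            )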

predict_step_context()[source]

Enables the autocast context.

Return type

Generator[None, None, None]

test_step_context()[source]

Enables the autocast context.

Return type

Generator[None, None, None]

train_step_context()[source]

Enables the autocast context.

Return type

Generator[None, None, None]

val_step_context()[source]

Enables the autocast context.

Return type

Generator[None, None, None]
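All four *_step_context() hooks follow the same pattern: a generator-based context manager that runs the step under torch.cuda.amp.autocast, so forward computations execute in reduced precision where safe. A minimal sketch of that pattern:

    from contextlib import contextmanager
    from typing import Generator

    import torch

    @contextmanager
    def autocast_context() -> Generator[None, None, None]:
        # Ops inside this block run under the autocast policy
        # (e.g. matmuls in fp16, reductions in fp32).
        with torch.cuda.amp.autocast():
            yield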