NativeMixedPrecisionPlugin

class pytorch_lightning.plugins.precision.NativeMixedPrecisionPlugin[source]

Bases: pytorch_lightning.plugins.precision.mixed.MixedPrecisionPlugin

Plugin for native mixed precision training with torch.cuda.amp.
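A minimal usage sketch, assuming a CUDA-capable machine and an existing LightningModule (MyModel below is a placeholder): the plugin is normally selected implicitly by requesting 16-bit precision from the Trainer rather than being constructed by hand.

    import pytorch_lightning as pl

    # Requesting 16-bit precision on a GPU enables native AMP
    # (and with it this plugin) under the hood.
    trainer = pl.Trainer(gpus=1, precision=16)
    # trainer.fit(MyModel())  # MyModel is a placeholder LightningModule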

backward(model, closure_loss, optimizer, opt_idx, should_accumulate, *args, **kwargs)[source]

Performs the actual backpropagation; see the sketch after the parameter list below.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

  • optimizer (Optimizer) – the optimizer that will perform the step later on

  • opt_idx (int) – the optimizer’s index

  • should_accumulate (bool) – whether to accumulate gradients or not

Return type

Tensor
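
As an illustration only (not the plugin's actual implementation), the core of native-AMP backpropagation is scaling the loss with a torch.cuda.amp.GradScaler before calling backward; the names loss and scaler below are assumptions for the sketch.

    import torch

    scaler = torch.cuda.amp.GradScaler()

    def scaled_backward(loss: torch.Tensor) -> torch.Tensor:
        scaled_loss = scaler.scale(loss)  # scale the loss so fp16 gradients do not underflow
        scaled_loss.backward()            # gradients are computed w.r.t. the scaled loss
        return scaled_loss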

post_optimizer_step(optimizer, optimizer_idx)[source]

Updates the GradScaler

Return type

None
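
For orientation, the usual GradScaler pattern around an optimizer step looks roughly like the sketch below; the helper name is hypothetical and the plugin's exact call sites may differ.

    import torch

    def step_and_update(optimizer: torch.optim.Optimizer,
                        scaler: torch.cuda.amp.GradScaler) -> None:
        scaler.step(optimizer)  # unscales gradients and skips the step on inf/nan
        scaler.update()         # adjusts the loss-scale factor for the next iteration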

pre_optimizer_step(pl_module, optimizer, optimizer_idx, lambda_closure, **kwargs)[source]

Always called before the optimizer step. Checks that the optimizer is not LBFGS, since LBFGS is not supported by native AMP (see the sketch below).

Return type

bool
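
A rough sketch of the documented check, using a hypothetical helper name; LBFGS relies on a closure that re-evaluates the model, which torch.cuda.amp's GradScaler.step() does not support.

    from torch.optim import LBFGS, Optimizer

    def ensure_amp_compatible(optimizer: Optimizer) -> bool:
        # Hypothetical helper mirroring the check described above.
        if isinstance(optimizer, LBFGS):
            raise ValueError("Native AMP does not support the LBFGS optimizer.")
        return True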

predict_step_context()[source]

Enable autocast context

Return type

Generator[None, None, None]

test_step_context()[source]

Enable autocast context

Return type

Generator[None, None, None]

train_step_context()[source]

Enable autocast context

Return type

Generator[None, None, None]

val_step_context()[source]

Enable autocast context

Return type

Generator[None, None, None]
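
All four *_step_context hooks expose the same idea: running the corresponding step inside torch.cuda.amp.autocast. A minimal, self-contained sketch of such a generator-based context (names assumed) is:

    from contextlib import contextmanager
    from typing import Generator

    import torch

    @contextmanager
    def autocast_context() -> Generator[None, None, None]:
        # Forward computation inside this block runs in mixed precision.
        with torch.cuda.amp.autocast():
            yield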
