ApexMixedPrecisionPlugin

class pytorch_lightning.plugins.precision.ApexMixedPrecisionPlugin(amp_level='O2')[source]

Bases: pytorch_lightning.plugins.precision.mixed.MixedPrecisionPlugin

Mixed-precision plugin based on NVIDIA/apex (https://github.com/NVIDIA/apex).
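
A minimal usage sketch follows. It assumes Apex is installed and a single GPU is available; the Trainer arguments shown are the common route in Lightning 1.x, and the commented alternative of passing the plugin instance explicitly may vary by version.

from pytorch_lightning import Trainer

# The usual route: request 16-bit precision with the Apex backend.
# Internally this constructs an ApexMixedPrecisionPlugin(amp_level="O2").
trainer = Trainer(gpus=1, precision=16, amp_backend="apex", amp_level="O2")

# Depending on the Lightning version, the plugin can also be passed explicitly:
# from pytorch_lightning.plugins.precision import ApexMixedPrecisionPlugin
# trainer = Trainer(gpus=1, precision=16, plugins=[ApexMixedPrecisionPlugin(amp_level="O2")])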

backward(model, closure_loss, optimizer, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

  • optimizer (Optional[Optimizer]) – current optimizer being used. None if using manual optimization

Return type

None
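
For orientation, the underlying Apex idiom is to scale the loss through amp.scale_loss before calling backward() on it. A minimal sketch of that pattern, assuming Apex and a CUDA device are available; it illustrates the idiom the plugin wraps, not its exact internals.

import torch
from apex import amp  # NVIDIA Apex

model = torch.nn.Linear(4, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model, optimizer = amp.initialize(model, optimizer, opt_level="O2")

loss = model(torch.randn(8, 4, device="cuda")).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()  # gradients are computed from the scaled loss
optimizer.step()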

dispatch(trainer)[source]

Hook to do something when the trainer's run_stage starts.

Return type

None

master_params(optimizer)[source]

The master params of the model, as seen by the optimizer. The base precision plugin returns the plain model parameters; the Apex plugin returns the parameters reported by amp.master_params(optimizer), which are the FP32 master copies when the chosen opt level keeps them.

Return type

Iterator[Parameter]
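
In practice, master_params matters for operations such as gradient clipping, which with Apex should act on the master (FP32) parameters rather than the model's half-precision copies. A hedged sketch, where plugin and optimizer are placeholders for an ApexMixedPrecisionPlugin instance and an amp-initialized optimizer:

import torch

max_norm = 1.0
# `plugin` and `optimizer` are assumed to exist already (see the class sketch above).
torch.nn.utils.clip_grad_norm_(plugin.master_params(optimizer), max_norm)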

on_load_checkpoint(checkpoint)[source]

Called by Lightning to restore your model. If you saved something with on_save_checkpoint(), this is your chance to restore it.

Parameters

checkpoint (Dict[str, Any]) – Loaded checkpoint

Example:

def on_load_checkpoint(self, checkpoint):
    # 99% of the time you don't need to implement this method
    self.something_cool_i_want_to_save = checkpoint['something_cool_i_want_to_save']

Note

Lightning auto-restores the global step, epoch, and training state, including amp scaling. There is no need for you to restore anything regarding training.

Return type

None

on_save_checkpoint(checkpoint)[source]

Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save.

Parameters

checkpoint (Dict[str, Any]) – The full checkpoint dictionary before it gets dumped to a file. Implementations of this hook can insert additional data into this dictionary.

Example:

def on_save_checkpoint(self, checkpoint):
    # in 99% of use cases you don't need to implement this method
    checkpoint['something_cool_i_want_to_save'] = my_cool_pickable_object

Note

Lightning saves all aspects of training (epoch, global step, etc…) including amp scaling. There is no need for you to store anything about training.

Return type

None

pre_optimizer_step(model, optimizer, optimizer_idx, lambda_closure, **kwargs)[source]

Hook to do something before each optimizer step.

Return type

bool
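
The return type above is how a subclass signals whether the training loop should still perform the optimizer step itself; this matters for Apex, which does not support optimizer closures, so the plugin is expected to run the closure and the step on its own. The subclass below is purely illustrative, and the interpretation of the return value is an assumption rather than documented behaviour.

from pytorch_lightning.plugins.precision import ApexMixedPrecisionPlugin


class LoggingApexPlugin(ApexMixedPrecisionPlugin):
    # Hypothetical subclass that logs before delegating to the Apex plugin.

    def pre_optimizer_step(self, model, optimizer, optimizer_idx, lambda_closure, **kwargs):
        print(f"about to step optimizer {optimizer_idx}")
        # Delegate to the parent implementation and pass its bool through,
        # so the training loop sees the same decision it would otherwise.
        return super().pre_optimizer_step(model, optimizer, optimizer_idx, lambda_closure, **kwargs)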

static reinit_scheduler_properties(optimizers, schedulers)[source]

Reinitializes the schedulers with the correct properties.

Return type

None
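
The hook's purpose, as far as can be inferred here, is that Apex's amp.initialize patches the optimizers, so any LR schedulers attached to them may need to be re-initialized against the patched objects while keeping their state. A rough, simplified sketch of that idea (rebind_schedulers is a made-up helper, not the actual implementation):

from torch.optim.lr_scheduler import _LRScheduler


def rebind_schedulers(optimizers, schedulers):
    # Hypothetical helper mirroring the idea behind reinit_scheduler_properties.
    for scheduler in schedulers:
        for optimizer in optimizers:
            if scheduler.optimizer is optimizer:
                state = scheduler.state_dict()               # keep the current schedule state
                _LRScheduler.__init__(scheduler, optimizer)  # re-run the base init against the patched optimizer
                scheduler.load_state_dict(state)             # restore the schedule state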