DDPSpawnShardedPlugin

class pytorch_lightning.plugins.training_type.DDPSpawnShardedPlugin(parallel_devices=None, num_nodes=None, cluster_environment=None, checkpoint_io=None, sync_batchnorm=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, **kwargs)

Bases: pytorch_lightning.plugins.training_type.ddp_spawn.DDPSpawnPlugin

Optimizer sharded training provided by FairScale.
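A minimal usage sketch, assuming fairscale is installed and a pytorch_lightning release that still ships this plugin; the model variable and the exact Trainer arguments are illustrative and may differ between versions.

    import pytorch_lightning as pl
    from pytorch_lightning.plugins import DDPSpawnShardedPlugin

    # Sketch only: `model` stands for your own LightningModule subclass instance.
    trainer = pl.Trainer(
        gpus=2,                             # spawn one process per GPU
        plugins=[DDPSpawnShardedPlugin()],  # shard optimizer state across processes
    )
    trainer.fit(model)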

block_backward_sync()

Blocks gradient synchronization behaviour on the backwards pass.

This is useful for skipping synchronization when accumulating gradients, reducing communication overhead.

Returns

a context manager with gradient synchronization disabled

Return type

Generator
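A hedged sketch of how a custom loop could use this context manager while accumulating gradients; plugin, loss, optimizer, and accumulate_grad_batches are illustrative names, not part of this API.

    # Sketch only: `plugin` is a configured DDPSpawnShardedPlugin driving the model.
    accumulate_grad_batches = 4

    def accumulation_step(step, loss, optimizer, plugin):
        if (step + 1) % accumulate_grad_batches != 0:
            # accumulation step: suppress the inter-process gradient all-reduce
            with plugin.block_backward_sync():
                loss.backward()
        else:
            loss.backward()      # final step in the window: gradients do sync
            optimizer.step()
            optimizer.zero_grad()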

pre_backward(closure_loss)

Run before the precision plugin executes backward.

Return type

None
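A hedged sketch of overriding this hook in a subclass; the subclass name and the print call are purely illustrative.

    from pytorch_lightning.plugins import DDPSpawnShardedPlugin

    class VerboseShardedPlugin(DDPSpawnShardedPlugin):
        def pre_backward(self, closure_loss):
            # Inspect the loss just before the precision plugin runs backward.
            print(f"loss before backward: {closure_loss.detach().item():.4f}")
            super().pre_backward(closure_loss)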

property lightning_module: pytorch_lightning.core.lightning.LightningModule

Returns the pure LightningModule without potential wrappers.

Return type

LightningModule
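A short sketch contrasting the wrapped model with the unwrapped module returned by this property; plugin is assumed to be a DDPSpawnShardedPlugin already set up by a Trainer, and reading plugin.model for the wrapped module is an assumption about internals.

    import pytorch_lightning as pl

    wrapped = plugin.model            # may be a FairScale-wrapped module (assumption)
    pure = plugin.lightning_module    # the original, unwrapped LightningModule
    assert isinstance(pure, pl.LightningModule)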