DDPShardedPlugin

class pytorch_lightning.plugins.training_type.DDPShardedPlugin(parallel_devices=None, num_nodes=1, cluster_environment=None, sync_batchnorm=False, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, **kwargs)[source]

Bases: pytorch_lightning.plugins.training_type.ddp.DDPPlugin

Optimizer-state and gradient sharded training provided by FairScale.
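
A minimal usage sketch, assuming a PyTorch Lightning 1.x release where this plugin can be passed to the Trainer (and where the shorthand string "ddp_sharded" is registered), FairScale installed, and two GPUs available. The model below is a hypothetical placeholder; any standard LightningModule works.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl
from pytorch_lightning.plugins import DDPShardedPlugin


class BoringModel(pl.LightningModule):
    """Placeholder model for illustration only."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
    train_loader = DataLoader(dataset, batch_size=8)

    # Pass the plugin instance directly (assumes 2 GPUs are available) ...
    trainer = pl.Trainer(gpus=2, max_epochs=1, plugins=[DDPShardedPlugin()])
    # ... or, equivalently on 1.x releases that register the shorthand:
    # trainer = pl.Trainer(gpus=2, max_epochs=1, plugins="ddp_sharded")

    trainer.fit(BoringModel(), train_loader)
```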

pre_backward(closure_loss, should_accumulate, optimizer, opt_idx)[source]

Run before the precision plugin executes backward.

property lightning_module

Returns the pure LightningModule, without any distributed wrappers.

Return type

LightningModule
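
Continuing the sketch above, the property can be used after training to recover the unwrapped model. The attribute names here (training_type_plugin, model) are assumptions based on the 1.x Trainer API, not guaranteed by this page.

```python
# After trainer.fit(...), the active plugin is available on the Trainer.
plugin = trainer.training_type_plugin       # the DDPShardedPlugin instance

# `lightning_module` unwraps the FairScale sharded wrapper and returns
# the original LightningModule, so user code can access it directly.
pure_model = plugin.lightning_module
assert isinstance(pure_model, pl.LightningModule)
```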