DDPShardedPlugin

class pytorch_lightning.plugins.training_type.DDPShardedPlugin(parallel_devices=None, num_nodes=None, cluster_environment=None, sync_batchnorm=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, **kwargs)

Bases: pytorch_lightning.plugins.training_type.ddp.DDPPlugin

Optimizer and gradient sharded training provided by FairScale.
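
A minimal usage sketch (an illustration, not taken from this page): sharded training is enabled by passing the plugin to the Trainer. It assumes FairScale is installed (pip install fairscale) and MyLightningModule stands in for your own module.

    import pytorch_lightning as pl
    from pytorch_lightning.plugins import DDPShardedPlugin

    model = MyLightningModule()  # placeholder for your own LightningModule
    trainer = pl.Trainer(
        gpus=2,
        accelerator="ddp",
        plugins=[DDPShardedPlugin()],  # shard optimizer state and gradients across ranks
    )
    trainer.fit(model)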

pre_backward(closure_loss, should_accumulate, optimizer, opt_idx)

Run before the precision plugin executes backward.
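
A hedged sketch of extending this hook (the subclass and the logging are hypothetical, not part of the library); the signature matches the one documented above:

    from pytorch_lightning.plugins import DDPShardedPlugin

    class VerboseDDPShardedPlugin(DDPShardedPlugin):
        # Hypothetical subclass: log the loss before backward runs.
        def pre_backward(self, closure_loss, should_accumulate, optimizer, opt_idx):
            print(f"pre_backward: loss={closure_loss.item():.4f}, opt_idx={opt_idx}")
            super().pre_backward(closure_loss, should_accumulate, optimizer, opt_idx)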

property lightning_module

Returns the pure LightningModule, stripped of any distributed wrappers.

Return type: LightningModule
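
A short access sketch (assumes a Trainer already configured with this plugin; trainer.training_type_plugin is the accessor in this version of Lightning):

    # Sketch: get the unwrapped module back from the plugin.
    plugin = trainer.training_type_plugin   # the DDPShardedPlugin instance
    pure_model = plugin.lightning_module    # the LightningModule itself, not the sharded DDP wrapper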
