
DDP2Plugin

class pytorch_lightning.plugins.training_type.DDP2Plugin(parallel_devices=None, num_nodes=None, cluster_environment=None, sync_batchnorm=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, **kwargs)

Bases: pytorch_lightning.plugins.training_type.ddp.DDPPlugin

DDP2 behaves like DP in one node, but synchronization across nodes behaves like in DDP.
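For orientation, a minimal usage sketch (not taken from this page): the plugin can be selected by name through the Trainer, or a configured instance can be passed via the plugins argument. The device and node counts below are illustrative.

   import pytorch_lightning as pl
   from pytorch_lightning.plugins.training_type import DDP2Plugin

   # Select DDP2 by name: DP within each node, DDP across nodes ...
   trainer = pl.Trainer(accelerator="ddp2", gpus=8, num_nodes=2)

   # ... or pass a configured instance explicitly (keyword values are illustrative).
   trainer = pl.Trainer(
       gpus=8,
       num_nodes=2,
       plugins=[DDP2Plugin(sync_batchnorm=True)],
   )

   trainer.fit(model)  # model is any LightningModule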

model_to_device()

Moves the model to the correct device.

reduce(tensor, *args, **kwargs)

Reduces a tensor from all processes to one aggregated tensor. In DDP2, the reduction here is only across local devices within the node.

Parameters
  • tensor – the tensor to sync and reduce

  • *args – ignored for DDP2

  • **kwargs – ignored for DDP2

Returns

Reduced value, except when the input was not a tensor, in which case the output is returned unchanged.
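As an illustration of the reduction scope, a hedged sketch of calling reduce on the active plugin from inside a LightningModule hook. compute_loss is a hypothetical helper, and reaching the plugin through trainer.training_type_plugin reflects the 1.3-era Trainer API.

   # Sketch only: averaging a per-process value with the active plugin's reduce().
   # Under DDP2 this aggregates across the local node's devices, not across nodes,
   # and any extra positional/keyword arguments are ignored.
   def training_step(self, batch, batch_idx):
       loss = self.compute_loss(batch)  # hypothetical loss helper
       node_loss = self.trainer.training_type_plugin.reduce(loss)
       self.log("node_loss", node_loss)
       return loss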

setup(model)

Called by the accelerator to finish setup.

property root_device

Returns the root device.
