DDP2Plugin

class pytorch_lightning.plugins.training_type.DDP2Plugin(parallel_devices=None, num_nodes=None, cluster_environment=None, checkpoint_io=None, sync_batchnorm=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, **kwargs)[source]

Bases: pytorch_lightning.plugins.training_type.ddp.DDPPlugin

DDP2 behaves like DP within a single node, while synchronization across nodes behaves like DDP.
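Example (a minimal sketch, not taken from the official docs: it assumes a PyTorch Lightning 1.5-style Trainer, where the strategy flag accepts "ddp2", and uses a toy module defined only for illustration):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl


    class ToyModel(pl.LightningModule):
        """Toy module used only to illustrate selecting the DDP2 plugin."""

        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.cross_entropy(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)


    train_loader = DataLoader(
        TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))), batch_size=16
    )

    # "ddp2" selects DDP2Plugin: DP across each node's 4 GPUs,
    # DDP-style gradient synchronization across the 2 nodes.
    trainer = pl.Trainer(strategy="ddp2", gpus=4, num_nodes=2, max_epochs=1)
    trainer.fit(ToyModel(), train_loader)

An explicit instance, e.g. strategy=DDP2Plugin(...), should also be accepted when the plugin's constructor arguments need to be customized.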

model_to_device()[source]

Moves the model to the correct device.

reduce(collection, *args, **kwargs)[source]

Reduces a collection of tensors from all processes; it can also be applied to a single tensor. In DDP2, the reduction is only across the local devices within the node.

Parameters

collection – The collection of tensors to sync and reduce.
*args – ignored for DDP2.
**kwargs – ignored for DDP2.

Return type

Union[Metric, Tensor, int, float, Mapping[str, Union[Metric, Tensor, int, float]]]

Returns

Reduced tensor values, or the input unchanged if it was not a tensor and did not contain one.
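As an illustration of the reduction semantics only (a standalone sketch, not tied to a running multi-node job: it constructs the plugin directly and assumes the default constructor arguments are acceptable):

    import torch
    from pytorch_lightning.plugins.training_type import DDP2Plugin

    plugin = DDP2Plugin()

    # Tensor input: averaged, which corresponds to reducing the per-device
    # outputs gathered DP-style within a single node.
    per_device_losses = torch.tensor([0.9, 1.1, 1.0, 1.2])  # one value per local GPU
    print(plugin.reduce(per_device_losses))  # tensor(1.0500)

    # Non-tensor input: returned unchanged.
    print(plugin.reduce({"epoch": 3}))  # {'epoch': 3}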

setup()[source]

Called by the accelerator to finish setup.

Return type

None

property root_device

Return the root device.
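For example, continuing the Trainer sketch above, the active plugin and its root device are reachable from the trainer (attribute name as in the 1.5 API; shown only as an illustration):

    # The device this node's process treats as its primary device.
    plugin = trainer.training_type_plugin   # DDP2Plugin when strategy="ddp2"
    print(plugin.root_device)               # e.g. device(type='cuda', index=0)

    # Extra buffers can be created directly on that device.
    buffer = torch.zeros(8, device=plugin.root_device)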
