DDPPlugin

class pytorch_lightning.plugins.training_type.DDPPlugin(parallel_devices=None, num_nodes=None, cluster_environment=None, checkpoint_io=None, sync_batchnorm=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, **kwargs)[source]

Bases: pytorch_lightning.plugins.training_type.parallel.ParallelPlugin

Plugin for multi-process single-device training on one or multiple nodes.

The master process in each node spawns N-1 child processes via subprocess.Popen(), where N is the number of devices (e.g. GPU) per node. It is very similar to how torch.distributed.launch launches processes.
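
A minimal usage sketch, assuming PyTorch Lightning 1.5 and a single machine with two GPUs; MyModel is a hypothetical LightningModule defined elsewhere:

import pytorch_lightning as pl
from pytorch_lightning.plugins import DDPPlugin

# Extra keyword arguments passed to DDPPlugin are forwarded to
# torch.nn.parallel.DistributedDataParallel when the model is wrapped.
trainer = pl.Trainer(
    gpus=2,  # one process per GPU on this node
    strategy=DDPPlugin(find_unused_parameters=False),
)
# trainer.fit(MyModel())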

barrier(*args, **kwargs)[source]

Synchronizes all processes, blocking them until the whole group enters this function.

Parameters

name – an optional name to pass into barrier.

Return type

None
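
A hedged sketch of calling barrier() from user code, assuming access to the running Trainer; prepare_artifacts is a hypothetical helper that only rank 0 should execute:

import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def on_fit_start(self):
        if self.trainer.is_global_zero:
            prepare_artifacts()  # hypothetical one-time setup on rank 0
        # every rank blocks here until all ranks (including rank 0) arrive
        self.trainer.training_type_plugin.barrier()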

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters
  • obj (object) – the object to broadcast

  • src (int) – source rank

Return type

object
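
A hedged sketch of sharing a value computed on rank 0 with every process; plugin stands for the active DDPPlugin instance (for example trainer.training_type_plugin):

import time

# before the call, only rank 0 holds the real value
run_name = time.strftime("run-%Y%m%d-%H%M%S") if plugin.is_global_zero else None
run_name = plugin.broadcast(run_name, src=0)
# after the call, every rank holds rank 0's string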

model_to_device()[source]

Moves the model to the correct device.

post_dispatch(trainer)[source]

Hook to do something after the training/evaluation/prediction finishes.

Return type

None

pre_backward(closure_loss)[source]

Run before the precision plugin executes backward.

Return type

None

pre_dispatch()[source]

Hook to do something before the training/evaluation/prediction starts.
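
Hooks such as pre_dispatch() and post_dispatch() can be extended by subclassing. A hedged sketch, with purely illustrative logging:

from pytorch_lightning.plugins import DDPPlugin

class VerboseDDPPlugin(DDPPlugin):
    def pre_dispatch(self):
        # runs on every process just before training/evaluation/prediction starts
        print(f"[rank {self.global_rank}] starting run")
        super().pre_dispatch()

    def post_dispatch(self, trainer):
        super().post_dispatch(trainer)
        # runs on every process after the run finishes
        print(f"[rank {self.global_rank}] run finished")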

reconciliate_processes(trace)[source]

Function to reconcile processes on failure.

Return type

None

reduce(tensor, group=None, reduce_op='mean')[source]

Reduces a tensor from several distributed processes to one aggregated tensor.

Parameters
  • tensor – the tensor to sync and reduce

  • group (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)

  • reduce_op (Union[ReduceOp, str]) – the reduction operation. Defaults to 'mean'/'avg'. Can also be a string 'sum' to calculate the sum during reduction.

Return type

Tensor

Returns

The reduced value, except when the input was not a tensor, in which case the output remains unchanged.
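
A hedged sketch of aggregating a per-rank scalar; plugin again stands for the active DDPPlugin instance:

import torch

# hypothetical per-rank statistic, placed on this process's device
local_correct = torch.tensor(412.0, device=plugin.root_device)
mean_correct = plugin.reduce(local_correct)                    # default reduce_op='mean'
total_correct = plugin.reduce(local_correct, reduce_op="sum")  # sum across all ranks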

setup_environment()[source]

Set up any processes or distributed connections.

This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.

Return type

None

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None

property root_device: torch.device

Return the root device.

Return type

device
