DataParallelPlugin

class pytorch_lightning.plugins.training_type.DataParallelPlugin(parallel_devices)[source]

Bases: pytorch_lightning.plugins.training_type.parallel.ParallelPlugin

Implements data-parallel training in a single process, i.e., the model is replicated to each device and each replica receives a split of the data.
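
Usage is typically indirect: the Trainer instantiates this plugin when data-parallel training is requested. Below is a minimal sketch, assuming a 1.3-era Trainer API and two available CUDA devices; the direct-construction route shown second is an assumption based on the Trainer's plugins argument:

    import torch
    from pytorch_lightning import Trainer
    from pytorch_lightning.plugins.training_type import DataParallelPlugin

    # Typical route: let the Trainer pick the plugin from the accelerator flag.
    trainer = Trainer(gpus=2, accelerator="dp")

    # Assumed equivalent direct construction: one process, model replicated
    # to each listed device, each replica receiving a slice of every batch.
    plugin = DataParallelPlugin(
        parallel_devices=[torch.device("cuda:0"), torch.device("cuda:1")]
    )
    trainer = Trainer(gpus=2, plugins=[plugin])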

barrier(*args, **kwargs)[source]

Forces all possibly joined processes to wait for each other.

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Return type

object

model_to_device()[source]

Moves the model to the correct device.

reduce(tensor, *args, **kwargs)[source]

Reduces a tensor from all parallel processes to one aggregated tensor.

Parameters
  • tensor – The tensor to sync and reduce.

  • *args – Ignored for DP.

  • **kwargs – Ignored for DP.

Returns

Reduced value. If the input was not a tensor, it is returned unchanged.
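
As an illustration of this behavior, here is a hedged sketch; the aggregation is assumed to be a mean over the per-replica values, in line with DP semantics, and non-tensor inputs pass through untouched:

    import torch
    from pytorch_lightning.plugins.training_type import DataParallelPlugin

    plugin = DataParallelPlugin(
        parallel_devices=[torch.device("cuda:0"), torch.device("cuda:1")]
    )

    # A tensor of per-replica values collapses to one aggregated value
    # (assumed here to be the mean, matching DP's usual reduction).
    per_device_loss = torch.tensor([0.5, 0.7])
    reduced = plugin.reduce(per_device_loss)   # tensor(0.6000)

    # Anything that is not a tensor comes back unchanged.
    metrics = plugin.reduce({"acc": 0.9})      # {'acc': 0.9}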

reduce_boolean_decision(decision)[source]

Reduces the early stopping decision across all processes.

Return type

bool

setup(model)[source]

Called by the accelerator to finish setup.

property root_device

Returns the root device.
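
For instance, in this small sketch the root device is assumed to be the first entry of parallel_devices, which is where model_to_device() places the model before replication:

    import torch
    from pytorch_lightning.plugins.training_type import DataParallelPlugin

    plugin = DataParallelPlugin(
        parallel_devices=[torch.device("cuda:0"), torch.device("cuda:1")]
    )
    print(plugin.root_device)  # device(type='cuda', index=0)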
