DataParallelPlugin

class pytorch_lightning.plugins.training_type.DataParallelPlugin(parallel_devices)[source]

Bases: pytorch_lightning.plugins.training_type.parallel.ParallelPlugin

Implements data-parallel training in a single process: the model is replicated across each of the given devices, and each replica processes a slice of every batch.
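
A minimal way to enable this plugin in Lightning 1.4 is through the Trainer flags (the two-GPU setting below is assumed for illustration):

    from pytorch_lightning import Trainer

    # Selects DataParallelPlugin under the hood: one process, the model
    # replicated on each of the two GPUs, each replica seeing a slice
    # of every batch.
    trainer = Trainer(gpus=2, accelerator="dp")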

barrier(*args, **kwargs)[source]

Forces all possibly joined processes to wait for each other. Since DP runs in a single process, this is effectively a no-op; see the sketch after broadcast() below.

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Return type

object
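
Because DP training runs in a single process, both barrier() and broadcast() have trivial behavior. A minimal sketch, constructing the plugin directly on CPU purely for illustration:

    import torch
    from pytorch_lightning.plugins import DataParallelPlugin

    plugin = DataParallelPlugin(parallel_devices=[torch.device("cpu")])

    plugin.barrier()  # nothing to synchronize in a single process
    obj = {"best_score": 0.92}
    assert plugin.broadcast(obj) is obj  # the object is returned unchanged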

model_to_device()[source]

Moves the model to the correct device.

reduce(collection, *args, **kwargs)[source]

Reduces a collection of tensors from all processes. It can be applied to just a single tensor.

Parameters

collection – The collection of tensors to sync and reduce.

*args, **kwargs – ignored for DP.

Return type

Union[Metric, Tensor, Number, Mapping[str, Union[Metric, Tensor, Number]]]

Returns

Reduced tensor values or the same value if it was not or did not contain a tensor.
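
In DP the reduction happens within the single process: each tensor in the collection is averaged, while non-tensor values pass through untouched. A minimal sketch (CPU device assumed for illustration):

    import torch
    from pytorch_lightning.plugins import DataParallelPlugin

    plugin = DataParallelPlugin(parallel_devices=[torch.device("cpu")])

    out = plugin.reduce({"loss": torch.tensor([0.5, 0.7]), "step": 3})
    print(out["loss"])  # tensor(0.6000) -- tensor values are averaged
    print(out["step"])  # 3 -- non-tensor values are returned as-is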

reduce_boolean_decision(decision)[source]

Reduces the early stopping decision across all processes.

Return type

bool

setup(model)[source]

Called by the accelerator to finish setup.
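
Conceptually, this step amounts to moving the model to the root device and wrapping it in torch.nn.DataParallel. A rough sketch of that idea, not the plugin's exact internals, assuming two CUDA devices are available:

    import torch
    from torch import nn

    model = nn.Linear(32, 4).to(torch.device("cuda", 0))
    model = nn.DataParallel(model, device_ids=[0, 1])
    out = model(torch.randn(8, 32, device="cuda:0"))  # batch split across the GPUs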

property root_device

Returns the root device (the first of the given parallel_devices).
