ParallelPlugin

class pytorch_lightning.plugins.training_type.ParallelPlugin(parallel_devices=None, cluster_environment=None)[source]

Bases: pytorch_lightning.plugins.training_type.training_type_plugin.TrainingTypePlugin, abc.ABC

Plugin for training with multiple processes in parallel.

all_gather(tensor, group=None, sync_grads=False)[source]

Performs an all_gather on all processes.

Return type

Tensor
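
Example: a minimal sketch of how this is typically reached from user code, via LightningModule.all_gather rather than by calling the plugin directly. The module below is illustrative only.

    import torch
    from pytorch_lightning import LightningModule

    class MyModule(LightningModule):  # illustrative, not part of this API
        def validation_step(self, batch, batch_idx):
            # A per-process scalar placed on this process's device.
            local_score = torch.tensor(0.5, device=self.device)
            # Gather the tensor from every process; the result gains a leading
            # dimension of size world_size. Pass sync_grads=True to keep the
            # operation differentiable.
            all_scores = self.all_gather(local_score, sync_grads=False)
            return all_scores.mean()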

block_backward_sync()[source]

Blocks DDP gradient synchronization during the backward pass. This is useful for skipping the sync while accumulating gradients, since it reduces communication overhead.

Returns

A context manager with gradient synchronization disabled.
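
Example: a minimal sketch of the gradient-accumulation pattern this enables. The Trainer drives this loop itself when accumulate_grad_batches is set; plugin, model, optimizer and batches are assumed to exist here and are not part of this API.

    accumulate_grad_batches = 4
    for i, batch in enumerate(batches):
        loss = model(batch)  # assumed to return a scalar loss
        if (i + 1) % accumulate_grad_batches != 0:
            # Skip the DDP all-reduce on intermediate backward passes.
            with plugin.block_backward_sync():
                loss.backward()
        else:
            # Synchronize gradients only on the final accumulation step.
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()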

static configure_sync_batchnorm(model)[source]

Adds synchronized (global) batchnorm for a model spread across multiple GPUs and nodes.

Override this hook to synchronize batchnorm between specific process groups instead of the whole world, or to use a different sync_bn implementation such as Apex's.

Parameters

model (LightningModule) – pointer to current LightningModule.

Return type

LightningModule

Returns

LightningModule with batchnorm layers synchronized between process groups
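
Example: a minimal sketch of overriding this hook to restrict synchronization to a specific process group instead of the whole world. MyDDPPlugin and the chosen ranks are illustrative only, and torch.distributed is assumed to be initialized.

    import torch
    from pytorch_lightning.plugins.training_type import DDPPlugin

    class MyDDPPlugin(DDPPlugin):  # illustrative subclass
        @staticmethod
        def configure_sync_batchnorm(model):
            # Sync batchnorm statistics only within this group of ranks
            # rather than across every process in the job.
            group = torch.distributed.new_group(ranks=[0, 1])
            return torch.nn.SyncBatchNorm.convert_sync_batchnorm(
                model, process_group=group
            )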

reduce_boolean_decision(decision)[source]

Reduces the early stopping decision across all processes so that every process agrees on whether to stop.

Return type

bool
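
Example: a minimal sketch of why this reduction matters. plugin is assumed to be the active ParallelPlugin, and val_loss / best_loss are per-rank values that may differ between processes.

    # Each rank may reach a different local decision, for example when the
    # validation metric is computed on a sharded dataset.
    local_should_stop = bool(val_loss > best_loss)
    # Reduce to a single decision that every process applies, so all ranks
    # stop (or continue) together instead of getting out of sync.
    should_stop = plugin.reduce_boolean_decision(local_should_stop)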

property is_global_zero

Whether the current process is the rank-zero process, not only on the local node but across all nodes.

Return type

bool
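
Example: a minimal sketch of the usual guard for work that should run once per job rather than once per node; plugin is assumed to be the active ParallelPlugin.

    if plugin.is_global_zero:
        # Only the single rank-zero process of the whole job runs this,
        # e.g. uploading the final checkpoint or writing a summary file.
        print("finished training")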

property lightning_module

Returns the pure LightningModule, without any wrappers such as DistributedDataParallel.

property on_gpu

Returns whether the current process runs on a GPU.

abstract property root_device

Returns the root device (the primary device for the current process).
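
Example: a minimal sketch of a subclass supplying the abstract property. MyPlugin is illustrative only, the remaining abstract members are assumed to be implemented elsewhere, and self.local_rank is assumed to come from the configured cluster environment.

    import torch
    from pytorch_lightning.plugins.training_type import ParallelPlugin

    class MyPlugin(ParallelPlugin):  # illustrative; other members omitted
        @property
        def root_device(self):
            # The primary device this process computes on: one GPU per rank.
            return torch.device("cuda", self.local_rank)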
