TPUSpawnPlugin

class pytorch_lightning.plugins.training_type.TPUSpawnPlugin(parallel_devices=None, checkpoint_io=None, debug=False, **_)[source]

Bases: pytorch_lightning.plugins.training_type.ddp_spawn.DDPSpawnPlugin

Plugin for training on multiple TPU devices using the torch.multiprocessing.spawn() method.

all_gather(tensor, group=None, sync_grads=False)[source]

Function to gather a tensor from several distributed processes.

Parameters
  • tensor (Tensor) – tensor of shape (batch, …)

  • group (Optional[Any]) – not available with TPUs

  • sync_grads (bool) – not available with TPUs

Return type

Tensor

Returns

A tensor of shape (world_size, batch, …)
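The shape contract above — each process contributes a (batch, …) tensor and every process receives a stacked (world_size, batch, …) tensor — can be illustrated with a single-process mock. This is a hypothetical sketch using nested lists in place of tensors; `mock_all_gather` and `shape` are not part of the pytorch_lightning API.

```python
def shape(t):
    """Shape of a nested-list 'tensor', e.g. [[1, 2], [3, 4]] -> (2, 2)."""
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0]
    return tuple(s)

def mock_all_gather(per_process_tensors):
    """Stack one (batch, ...) tensor per process into (world_size, batch, ...)."""
    first = shape(per_process_tensors[0])
    # All processes must contribute tensors of the same shape.
    assert all(shape(t) == first for t in per_process_tensors)
    return list(per_process_tensors)

# Two processes, each holding a (batch=2, features=3) tensor:
rank0 = [[1, 1, 1], [2, 2, 2]]
rank1 = [[3, 3, 3], [4, 4, 4]]
gathered = mock_all_gather([rank0, rank1])
# shape(gathered) is (world_size=2, batch=2, features=3)
```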

barrier(name=None)[source]

Synchronizes all processes, blocking each process until the whole group enters this function.

Parameters

name (Optional[str]) – an optional name to pass into barrier.

Return type

None

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters
  • obj (object) – the object to broadcast

  • src (int) – source rank

Return type

object
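The broadcast contract is that after the call, every process holds a copy of the object from the source rank. A single-process mock of that semantics (hypothetical names; the real implementation runs across TPU processes):

```python
import copy

def mock_broadcast(objs_by_rank, src=0):
    """Give every rank a copy of the object held by rank `src`."""
    source_obj = objs_by_rank[src]
    return [copy.deepcopy(source_obj) for _ in objs_by_rank]

# Rank 0 holds the state; ranks 1 and 2 have nothing yet:
after = mock_broadcast([{"epoch": 3}, None, None], src=0)
# All three ranks now hold {"epoch": 3}
```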

connect(model)[source]

Called by the accelerator to connect the accelerator and the model with this plugin.

Return type

None

model_to_device()[source]

Moves the model to the correct device.

Return type

None

pre_dispatch()[source]

Hook to do something before the training/evaluation/prediction starts.

process_dataloader(dataloader)[source]

Wraps the dataloader if necessary.

Parameters

dataloader (DataLoader) – an iterable, ideally a torch.utils.data.DataLoader

Return type

None

reduce(output, group=None, reduce_op=None)[source]

Reduces a tensor from several distributed processes to one aggregated tensor.

Parameters
  • output – the tensor to sync and reduce

  • group (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)

  • reduce_op (Union[ReduceOp, str, None]) – the reduction operation. Defaults to ‘mean’/’avg’. Can also be a string ‘sum’ to calculate the sum during reduction.

Returns

the reduced value, except when the input was not a tensor, in which case the output remains unchanged
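The reduction semantics — default 'mean'/'avg', optionally 'sum' — can be sketched with a single-process mock that aggregates one value per process. `mock_reduce` is a hypothetical name, not part of the pytorch_lightning API:

```python
def mock_reduce(values, reduce_op="mean"):
    """Aggregate one value per process into a single reduced value.

    Defaults to the mean, mirroring the documented default; 'sum'
    returns the total instead.
    """
    total = sum(values)
    if reduce_op in (None, "mean", "avg"):
        return total / len(values)
    if reduce_op == "sum":
        return total
    raise ValueError(f"unsupported reduce_op: {reduce_op!r}")

# Three processes report a loss of 1.0, 2.0, 3.0:
# mock_reduce([1.0, 2.0, 3.0]) -> 2.0 (mean)
# mock_reduce([1.0, 2.0, 3.0], reduce_op="sum") -> 6.0
```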

reduce_boolean_decision(decision)[source]

Reduce the early stopping decision across all processes.

Return type

bool

save_checkpoint(checkpoint, filepath)[source]

Save model/training states as a checkpoint file through state-dump and file-write.

Parameters
  • checkpoint (Dict[str, Any]) – dict containing model and trainer state

  • filepath (Union[str, Path]) – write-target file’s path

Return type

None
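The "state-dump and file-write" behavior can be sketched as follows. This is a simplified stand-in using pickle (the real plugin serializes through torch/XLA-aware machinery); `save_checkpoint_sketch` is a hypothetical helper, not the library's implementation:

```python
import os
import pickle
import tempfile

def save_checkpoint_sketch(checkpoint, filepath):
    """Dump the checkpoint dict's state and write it to `filepath`."""
    with open(filepath, "wb") as f:
        pickle.dump(checkpoint, f)

# Round-trip a minimal checkpoint dict:
ckpt = {"epoch": 3, "state_dict": {"weight": [0.1, 0.2]}}
path = os.path.join(tempfile.mkdtemp(), "model.ckpt")
save_checkpoint_sketch(ckpt, path)
with open(path, "rb") as f:
    restored = pickle.load(f)
# restored is equal to ckpt
```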

setup()[source]

Called by the accelerator to finish setup.

Return type

None

spawn(function, *args, return_result=True, **kwargs)[source]

Spawn processes that run the given function.

Parameters
  • function (Callable) – The function to spawn processes from.

  • *args – Optional positional arguments that will be passed to the function in addition to the process index. These arguments must be pickleable.

  • return_result (bool) – If True, copies the output of the function from process 0 to the main process and returns it.

  • **kwargs – Optional named arguments that will be passed to the function in addition to the process index. These arguments must be pickleable.

Return type

Optional[Any]

Returns

The output of the function from process 0.
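The contract above — call the function once per process with the process index prepended to the arguments, and hand back process 0's output when return_result is True — can be mimicked sequentially in one process. `mock_spawn` and its `nprocs` parameter are hypothetical (the real plugin derives the process count from the configured devices and runs them in parallel):

```python
def mock_spawn(function, *args, nprocs=2, return_result=True, **kwargs):
    """Run function(process_idx, *args, **kwargs) for each process index
    sequentially, then return the output of process 0 (or None)."""
    results = [function(i, *args, **kwargs) for i in range(nprocs)]
    return results[0] if return_result else None

# Each "process" adds its index to the argument; process 0 computes 0 + 10:
# mock_spawn(lambda idx, x: idx + x, 10) -> 10
```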

teardown()[source]

This method is called to teardown the training process.

It is the right place to release memory and free other resources.

Return type

None

property root_device: torch.device

Return the root device.

Return type

device

property should_rank_save_checkpoint: bool

Returns whether the checkpoint should be saved, based on the process rank.

Return type

bool
