HPUParallelStrategy

class lightning.pytorch.strategies.HPUParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, process_group_backend='hccl', **kwargs)[source]

Bases: lightning.pytorch.strategies.ddp.DDPStrategy

Strategy for distributed training on multiple HPU devices.

Warning

This is an experimental feature.
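
A minimal usage sketch, assuming an HPU-equipped machine with the Habana integration installed; the device count of 8 and the model class MyLightningModule are illustrative placeholders, not part of the API:

    import lightning.pytorch as pl
    from lightning.pytorch.strategies import HPUParallelStrategy

    # Illustrative: train an existing LightningModule across 8 HPU devices.
    trainer = pl.Trainer(
        accelerator="hpu",
        devices=8,
        strategy=HPUParallelStrategy(),
    )
    trainer.fit(MyLightningModule())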

broadcast(obj, src=0)[source]

Broadcasts an object from the source rank to all processes.

Parameters
  • obj (object) – the object to broadcast

  • src (int) – source rank

Return type

object
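
A hedged sketch of how this is typically invoked through the trainer's strategy from inside a LightningModule hook; the payload contents are illustrative:

    import lightning.pytorch as pl

    class MyModel(pl.LightningModule):
        def on_train_start(self):
            # Build a payload on rank 0 only, then share it with every process.
            payload = {"vocab_size": 32000} if self.trainer.is_global_zero else None
            payload = self.trainer.strategy.broadcast(payload, src=0)
            # All ranks now hold the object created on rank 0.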

optimizer_step(optimizer, closure, model=None, **kwargs)[source]

Performs the actual optimizer step.

Parameters
  • optimizer (Optimizer) – the optimizer performing the step

  • closure (Callable[[], Any]) – closure calculating the loss value

  • model (Union[LightningModule, Module, None]) – reference to the model, optionally defining optimizer step related hooks

  • **kwargs – any extra arguments passed to optimizer.step()

Return type

Any
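
This hook is normally driven by Lightning's training loop rather than by user code; the sketch below only illustrates the calling convention, and the names opt and loss_closure are placeholders:

    def loss_closure():
        # Compute the loss, run backward, and return the loss value.
        ...

    # The loop passes the optimizer, a loss-computing closure, and the model;
    # extra keyword arguments are forwarded to optimizer.step().
    output = trainer.strategy.optimizer_step(opt, closure=loss_closure, model=model)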

setup_environment()[source]

Sets up any processes or distributed connections.

This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.

Return type

None
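
Because this hook runs before the setup hooks, distributed attributes are already valid there; a small sketch, assuming a user-defined DataModule:

    import lightning.pytorch as pl

    class MyDataModule(pl.LightningDataModule):
        def setup(self, stage):
            # setup_environment() has already run, so rank and world-size
            # information is safe to read in this hook.
            rank = self.trainer.global_rank
            world_size = self.trainer.world_size
            # e.g. shard inputs across `world_size` processes here (illustrative)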

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None
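
A hedged sketch of extending the teardown behaviour in a subclass; the subclass name and logging are illustrative, and the parent implementation should always be delegated to:

    from lightning.pytorch.strategies import HPUParallelStrategy

    class VerboseHPUParallelStrategy(HPUParallelStrategy):
        # Hypothetical subclass: report when resources are released.
        def teardown(self):
            super().teardown()  # releases memory and shuts down distributed state
            print(f"[rank {self.global_rank}] strategy teardown complete")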