pytorch_lightning.utilities.distributed

Functions

  • all_gather_ddp_if_available (return type: Any)

  • distributed_available (return type: Any)

  • gather_all_tensors (return type: Any)

  • get_default_process_group_backend_for_device (return type: Any)

  • init_dist_connection (return type: Any)

  • register_ddp_comm_hook: Registers a communication hook for a DDP model. See https://pytorch.org/docs/master/ddp_comm_hooks.html.

  • sync_ddp (return type: Any)

  • sync_ddp_if_available (return type: Any)

  • tpu_distributed (return type: bool)

Utilities that can be used with distributed training.
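Most of these helpers are thin wrappers around torch.distributed, and the *_if_available variants return their input unchanged when no process group is initialised. The following is a minimal sketch (not taken from this page) of how two of them might be used to aggregate results across ranks, assuming the Trainer has already set up distributed training:

import torch
from pytorch_lightning.utilities.distributed import (
    distributed_available,
    gather_all_tensors,
    sync_ddp_if_available,
)

def aggregate_metric(local_metric: torch.Tensor) -> torch.Tensor:
    # Sum-reduce a metric tensor across all ranks; return it unchanged when
    # torch.distributed is not available or not initialised.
    if not distributed_available():
        return local_metric
    return sync_ddp_if_available(local_metric, reduce_op="sum")

def collect_predictions(local_preds: torch.Tensor) -> torch.Tensor:
    # gather_all_tensors returns a list with one tensor per process;
    # concatenate them into a single tensor on every rank.
    if not distributed_available():
        return local_preds
    return torch.cat(gather_all_tensors(local_preds), dim=0)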

pytorch_lightning.utilities.distributed.register_ddp_comm_hook(model, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None)[source]

Registers a communication hook for a DDP model. See https://pytorch.org/docs/master/ddp_comm_hooks.html.

Parameters
  • model (DistributedDataParallel) – DDP model

  • ddp_comm_state (Optional[object]) – state passed to the hook; it can be used to maintain and update any state information needed as part of the training process. Examples: error feedback in gradient compression, the peers to communicate with next in GossipGrad, etc.

  • ddp_comm_hook (Optional[Callable]) –

    hook(state: object, bucket: dist._GradBucket) -> torch.futures.Future

    This callable is invoked once the bucket is ready. The hook can perform whatever processing is needed and return a Future indicating completion of any async work (for example, an allreduce). If the hook performs no communication, it can simply return a completed Future. The Future should hold the new values of the grad bucket's tensors. Once a bucket is ready, the c10d reducer calls this hook, takes the tensors returned by the Future, and copies the gradients back to the individual parameters. A minimal sketch of a hook matching this contract follows this parameter list.

  • ddp_comm_wrapper (Optional[Callable]) – a communication hook wrapper (for example, FP16 compression) that can be combined with ddp_comm_hook
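For reference, a hook matching the contract above might look like the following. This is a minimal sketch that mirrors the stock all-reduce behaviour, not part of the documented API, and it assumes a PyTorch version where GradBucket.buffer() and Work.get_future() are available (roughly 1.9 or newer):

import torch.distributed as dist

def example_allreduce_hook(state, bucket):
    # ``state`` is whatever was passed as ``ddp_comm_state`` (here an optional
    # process group); ``bucket`` wraps a flattened tensor of gradients.
    group = state if state is not None else dist.group.WORLD
    # Pre-divide so that summing across ranks yields the averaged gradient.
    tensor = bucket.buffer().div_(dist.get_world_size(group))
    fut = dist.all_reduce(tensor, group=group, async_op=True).get_future()
    # The returned Future must resolve to the new value of the bucket's tensor.
    return fut.then(lambda f: f.value()[0])

Such a hook would then be registered with register_ddp_comm_hook(model=ddp_model, ddp_comm_hook=example_allreduce_hook), analogous to the built-in hooks in the examples below.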

Examples

>>> import torch.distributed
>>> from pytorch_lightning.utilities.distributed import register_ddp_comm_hook
>>> from torch.distributed.algorithms.ddp_comm_hooks import (
...     default_hooks as default,
...     powerSGD_hook as powerSGD,
...     post_localSGD_hook as post_localSGD,
... )
>>>
>>> # fp16_compress_hook to compress gradients
>>> ddp_model = ...
>>> register_ddp_comm_hook( 
...     model=ddp_model,
...     ddp_comm_hook=default.fp16_compress_hook,
... )
>>>
>>> # powerSGD_hook
>>> ddp_model = ...
>>> register_ddp_comm_hook( 
...     model=ddp_model,
...     ddp_comm_state=powerSGD.PowerSGDState(
...         process_group=None,
...         matrix_approximation_rank=1,
...         start_powerSGD_iter=5000,
...     ),
...     ddp_comm_hook=powerSGD.powerSGD_hook,
... )
>>>
>>> # post_localSGD_hook
>>> subgroup, _ = torch.distributed.new_subgroups() 
>>> ddp_model = ...
>>> register_ddp_comm_hook( 
...     model=ddp_model,
...     ddp_comm_state=post_localSGD.PostLocalSGDState(
...         process_group=None,
...         subgroup=subgroup,
...         start_localSGD_iter=1_000,
...     ),
...     ddp_comm_hook=post_localSGD.post_localSGD_hook,
... )
>>>
>>> # fp16_compress_wrapper combined with other communication hook
>>> ddp_model = ...
>>> register_ddp_comm_hook( 
...     model=ddp_model,
...     ddp_comm_state=powerSGD.PowerSGDState(
...         process_group=None,
...         matrix_approximation_rank=1,
...         start_powerSGD_iter=5000,
...     ),
...     ddp_comm_hook=powerSGD.powerSGD_hook,
...     ddp_comm_wrapper=default.fp16_compress_wrapper,
... )
Return type

None
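When training with Lightning, this function is usually not called by hand; the same hook arguments are typically passed through the DDP strategy, which registers them on the wrapped DistributedDataParallel module during setup. A minimal sketch, assuming a Lightning version whose DDPStrategy accepts the ddp_comm_state, ddp_comm_hook and ddp_comm_wrapper arguments:

import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default

# The strategy calls register_ddp_comm_hook on the wrapped DDP module for us.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DDPStrategy(ddp_comm_hook=default.fp16_compress_hook),
)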
