HorovodPlugin

class pytorch_lightning.plugins.training_type.HorovodPlugin(parallel_devices=None)

    Bases: pytorch_lightning.plugins.training_type.parallel.ParallelPlugin

    Plugin for Horovod distributed training integration.
all_gather(result, group=None, sync_grads=False)

    Performs an all_gather on all processes.
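As a conceptual illustration (not the Horovod implementation), all_gather can be thought of as every process contributing its local result and every process then receiving the same rank-ordered collection of all results. The ranks, values, and helper name below are invented for this sketch:

```python
# Hypothetical sketch of all_gather semantics using plain Python,
# with a dict standing in for "one value held on each rank".

def all_gather_sim(per_rank_results):
    """Simulate all_gather: every rank receives the results of all
    ranks, ordered by rank."""
    gathered = [per_rank_results[rank] for rank in sorted(per_rank_results)]
    # Every rank ends up holding an identical copy of the gathered list.
    return {rank: list(gathered) for rank in per_rank_results}

# Four simulated ranks, each holding one local value.
local = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
view = all_gather_sim(local)
# Each rank now holds [1.0, 2.0, 3.0, 4.0].
```

In real use this collective runs once per process, and Horovod delivers the stacked result to every worker; the simulation only models the end state.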
post_backward(closure_loss, should_accumulate, optimizer, opt_idx)

    Runs after the precision plugin executes backward.
reduce(tensor, group=None, reduce_op='mean')

    Reduces a tensor from several distributed processes to one aggregated tensor.

    Parameters
        tensor – the tensor to sync and reduce
        group – the process group to reduce across (defaults to all processes)
        reduce_op – the reduction operation (defaults to 'mean')

    Returns
        The reduced value; if the input was not a tensor, the output is returned unchanged.
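As a conceptual illustration (again not the Horovod implementation), reduce with reduce_op='mean' averages one value across all processes, while an op such as 'sum' adds them. The ranks, values, and helper name below are invented for this sketch:

```python
# Hypothetical sketch of reduce semantics over scalar "tensors",
# one per simulated rank.

def reduce_sim(per_rank_tensors, reduce_op="mean"):
    """Simulate an allreduce: combine one scalar per rank into a
    single aggregated value."""
    total = sum(per_rank_tensors.values())
    if reduce_op == "mean":
        return total / len(per_rank_tensors)
    if reduce_op == "sum":
        return total
    raise ValueError(f"unsupported reduce_op: {reduce_op}")

# Four simulated ranks, e.g. each holding a local gradient norm.
grads = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
reduce_sim(grads)           # mean -> 2.5
reduce_sim(grads, "sum")    # sum  -> 10.0
```

Averaging is the common default for gradient synchronization in data-parallel training, since it keeps the effective learning rate independent of the number of workers.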
property root_device

    Returns the root device.