
HivemindStrategy

class pytorch_lightning.strategies.HivemindStrategy(target_batch_size, run_id='lightning_run', batch_size=None, delay_state_averaging=False, delay_optimizer_step=None, delay_grad_averaging=False, offload_optimizer=None, reuse_grad_buffers=False, scheduler_fn=None, matchmaking_time=5.0, averaging_timeout=30.0, verbose=False, averager_opts=None, host_maddrs=None, initial_peers=None, **optimizer_kwargs)[source]

Bases: pytorch_lightning.strategies.strategy.Strategy

Provides capabilities to train using the Hivemind library, training collaboratively across the internet with unreliable machines. For more information, refer to the Hivemind documentation.

Warning

HivemindStrategy is experimental and subject to change.

Parameters
  • target_batch_size (int) – When training, the batch size to accumulate to before running a step. The larger this batch size, the more work can be done asynchronously without communication.

  • run_id (str) – A unique identifier of this training run, used as a common prefix for all DHT keys. See https://learning-at-home.readthedocs.io/en/latest/user/dht.html.

  • batch_size (Optional[int]) – The local batch size per process. If not provided, it is inferred lazily from the first batch of training data. Note that this should not change throughout training.

  • delay_state_averaging (bool) – If enabled, average parameters and extra tensors in a background thread; if set to False, average parameters synchronously within the corresponding hivemind.Optimizer.step() call.

  • delay_optimizer_step (Optional[bool]) – Run the optimizer in the background and apply the results in a future step. Requires offload_optimizer.

  • delay_grad_averaging (bool) – Average gradients in background; requires offload_optimizer and delay_optimizer_step.

  • offload_optimizer (Optional[bool]) – Offload the optimizer to host memory, saving GPU memory for parameters and gradients.

  • reuse_grad_buffers (bool) – Use the model’s gradient buffers (params.grad) for gradient accumulation, which is more memory-efficient. Lightning will automatically disable zero_grad in the LightningModule.

  • scheduler_fn (Optional[Callable]) – A callable(optimizer) -> PyTorch LRScheduler, or a pre-initialized PyTorch scheduler. When using offload_optimizer, delay_optimizer_step, or delay_state_averaging, scheduler_fn must be passed to the HivemindStrategy, because the optimizer is re-created and the scheduler needs to be re-created as well.

  • matchmaking_time (float) – When looking for a group, wait for peers to join for up to this many seconds. Increase this if you see “averaged gradients with N peers” where N is below 0.9x the actual number of peers on 25% or more of epochs. On a low-latency network, decreasing matchmaking_time allows training with smaller batch sizes.

  • averaging_timeout (float) – If an averaging step hangs for this long, it will be cancelled automatically. Increase averaging_timeout if you see “Proceeding with local gradients” at least 25% of the time. Do not set this timeout too high, as it may cause your optimizer to hang after some types of network errors.

  • verbose (bool) – Report internal Hivemind events such as accumulating gradients and running background tasks.

  • averager_opts (Optional[Dict]) – Additional keyword arguments forwarded to both GradientAverager and TrainingStateAverager.

  • host_maddrs (Optional[List]) – List of multiaddresses used to create visible peers for other processes. See https://learning-at-home.readthedocs.io/en/latest/user/dht.html#running-across-the-internet.

  • initial_peers (Union[str, List, None]) – If connecting to a running process, a list of initial peers needs to be passed in. This can also be set via the environment variable INITIAL_PEERS.

  • **optimizer_kwargs – Additional keyword arguments passed to the hivemind.Optimizer class.
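A minimal usage sketch, assuming hivemind is installed and a single GPU is available; the target_batch_size value is illustrative:

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import HivemindStrategy

# Accumulate gradients collaboratively across all peers until 8192
# samples have been processed, then run one optimizer step.
trainer = pl.Trainer(
    strategy=HivemindStrategy(target_batch_size=8192),
    accelerator="gpu",
    devices=1,
)
```

To join a run already in progress from another machine, pass the first peer's visible addresses via initial_peers, or set the INITIAL_PEERS environment variable.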

all_gather(tensor, group=None, sync_grads=False)[source]

Perform an all_gather on all processes.

Parameters
  • tensor (Tensor) – the tensor to all_gather

  • group (Optional[Any]) – the process group to gather results from

  • sync_grads (bool) – flag that allows users to synchronize gradients for the all_gather operation

Return type

Tensor
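As a toy illustration of the all_gather semantics (not Hivemind's implementation): every process contributes one tensor, and every process receives the tensors of all processes, stacked in rank order. Sketched here with plain Python lists standing in for tensors and processes:

```python
def all_gather_sim(per_rank_values):
    """Simulate all_gather: per_rank_values[i] is the value held by rank i.

    Returns one result per rank; each rank receives every rank's value,
    stacked in rank order.
    """
    gathered = list(per_rank_values)
    # Every rank ends up with its own copy of the full stack.
    return [gathered[:] for _ in per_rank_values]

# Rank 0 holds 1.0, rank 1 holds 2.0, rank 2 holds 3.0.
results = all_gather_sim([1.0, 2.0, 3.0])
# After the collective, every rank sees [1.0, 2.0, 3.0].
```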

barrier(*args, **kwargs)[source]

Synchronizes all processes, blocking each one until the whole group enters this function.

Parameters

name – an optional name to pass into barrier.

Return type

None

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters
  • obj (~TBroadcast) – the object to broadcast

  • src (int) – source rank

Return type

~TBroadcast

model_to_device()[source]

Moves the model to the correct device.

Return type

None

on_train_batch_start(batch, batch_idx, dataloader_idx=0)[source]

Called in the training loop before anything happens for that batch.

Return type

None

reduce(tensor, *args, **kwargs)[source]

Reduces the given tensor (e.g. across GPUs/processes).

Parameters
  • tensor (Union[Any, Tensor]) – the tensor to sync and reduce

  • group – the process group to reduce

  • reduce_op – the reduction operation. Defaults to ‘mean’; can also be the string ‘sum’ or a ReduceOp.

Return type

Union[Any, Tensor]
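The ‘mean’ and ‘sum’ reduction semantics can be sketched with plain Python numbers standing in for per-process tensors (a toy model, not the actual distributed implementation):

```python
def reduce_sim(per_rank_values, reduce_op="mean"):
    """Simulate reduce: combine one value per rank into a single result."""
    total = sum(per_rank_values)
    if reduce_op == "mean":
        return total / len(per_rank_values)
    if reduce_op == "sum":
        return total
    raise ValueError(f"unsupported reduce_op: {reduce_op}")

# Three ranks hold 2.0, 4.0 and 6.0.
mean_result = reduce_sim([2.0, 4.0, 6.0])          # 4.0
sum_result = reduce_sim([2.0, 4.0, 6.0], "sum")    # 12.0
```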

setup(trainer)[source]

Sets up plugins for the trainer fit and creates the optimizers.

Parameters

trainer (Trainer) – the trainer instance

Return type

None

teardown()[source]

Called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None

property is_global_zero: bool

Whether the current process is the rank zero process not only on the local node, but for all nodes.

Return type

bool

property root_device: torch.device

Returns the root device.

Return type

device
