DeepSpeedPlugin

class pytorch_lightning.plugins.training_type.DeepSpeedPlugin(zero_optimization=True, stage=2, cpu_offload=False, cpu_offload_params=False, cpu_offload_use_pin_memory=False, contiguous_gradients=True, overlap_comm=True, allgather_partitions=True, reduce_scatter=True, allgather_bucket_size=200000000.0, reduce_bucket_size=200000000.0, zero_allow_untested_optimizer=True, logging_batch_size_per_gpu='auto', config=None, logging_level=30, num_nodes=1, parallel_devices=None, cluster_environment=None, loss_scale=0, initial_scale_power=16, loss_scale_window=1000, hysteresis=2, min_loss_scale=1, partition_activations=False, cpu_checkpointing=False, contiguous_memory_optimization=False, synchronize_checkpoint_boundary=False, save_full_weights=True)

Bases: pytorch_lightning.plugins.training_type.ddp.DDPPlugin
Provides capabilities to run training using the DeepSpeed library, with training optimizations for large, billion-parameter models. For more information: https://www.deepspeed.ai/.

Warning

DeepSpeedPlugin is in beta and subject to change.

Defaults have been set to enable ZeRO-Offload, and some have been taken from the link below. These defaults are general-purpose and may require tuning for optimum performance based on your model size. For more information: https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training.
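A minimal usage sketch follows. It assumes a machine with at least 4 GPUs and DeepSpeed installed; ToyModule is a hypothetical stand-in for a real (much larger) model, and the hyperparameters are illustrative only. The plugin is passed to the Trainer together with precision=16, which ZeRO optimization requires.

import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.plugins import DeepSpeedPlugin


class ToyModule(pl.LightningModule):
    """Tiny stand-in for a real, much larger model (hypothetical example)."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        # Adam is the optimizer DeepSpeed currently supports with ZeRO.
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        return DataLoader(dataset, batch_size=8)


trainer = pl.Trainer(
    gpus=4,                                           # shard optimizer/gradient state across 4 GPUs
    precision=16,                                     # ZeRO optimization requires FP16
    plugins=DeepSpeedPlugin(stage=2, cpu_offload=True),
)
trainer.fit(ToyModule())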
Parameters

- zero_optimization (bool) – Enable ZeRO optimization. This is only compatible with precision=16. (default: True)
- stage (int) – Different stages of the ZeRO Optimizer. 0 is disabled, 1 is optimizer state partitioning, 2 is optimizer+gradient state partitioning. (default: 2)
- cpu_offload (bool) – Enable offloading optimizer memory and computation to CPU.
- cpu_offload_params (bool) – When using ZeRO stage 3, offload parameters to CPU.
- cpu_offload_use_pin_memory (bool) – When using ZeRO stage 3, pin memory on CPU.
- contiguous_gradients (bool) – Copies gradients to a contiguous buffer as they are produced. Avoids memory fragmentation during backwards. Useful when training large models. (default: True)
- overlap_comm (bool) – Overlap the reduction (synchronization) of gradients with the backwards computation. This is a speed optimization when training across multiple GPUs/machines. (default: True)
- allgather_partitions (bool) – All-gather updated parameters at the end of the training step, instead of using a series of broadcast collectives. (default: True)
- reduce_scatter (bool) – Use reduce/scatter instead of allreduce to average gradients. (default: True)
- allgather_bucket_size (int) – Number of elements to all-gather at once. Used to limit the memory required for larger model sizes, at a tradeoff with speed. (default: 2e8)
- reduce_bucket_size (int) – Number of elements to reduce at once. Used to limit the memory required for larger model sizes, at a tradeoff with speed. (default: 2e8)
- zero_allow_untested_optimizer (bool) – Allow untested optimizers to be used with ZeRO. Currently only Adam is a DeepSpeed-supported optimizer when using ZeRO. (default: True)
- logging_batch_size_per_gpu (Union[str, int]) – Config used by DeepSpeed to calculate verbose timing for logging on a per-sample-per-second basis (only displayed if logging=logging.INFO). If set to "auto", the plugin tries to infer this from the train DataLoader's BatchSampler, else defaults to 1. To obtain accurate logs when using datasets that do not support batch samplers, set this to the actual per-GPU batch size (trainer.batch_size).
- config (Union[Path, str, dict, None]) – Pass in a DeepSpeed-formatted config dict, or a path to a DeepSpeed config: https://www.deepspeed.ai/docs/config-json. All defaults will be ignored if a config is passed in (see the sketch after this parameter list). (default: None)
- logging_level (int) – Set logging level for DeepSpeed. (default: logging.WARN)
- loss_scale (float) – Loss scaling value for FP16 training. 0.0 results in dynamic loss scaling, otherwise static. (default: 0)
- initial_scale_power (int) – Power of the initial dynamic loss scale value. The loss scale is computed as 2^initial_scale_power. (default: 16)
- loss_scale_window (int) – Window in which to raise/lower the dynamic FP16 loss scaling value. (default: 1000)
- hysteresis (int) – FP16 delay shift in dynamic loss scaling. (default: 2)
- min_loss_scale (int) – The minimum FP16 dynamic loss scaling value. (default: 1)
- partition_activations (bool) – Enables partition activation when used with ZeRO stage 3. Still requires you to wrap your forward functions in deepspeed.checkpointing.checkpoint. See the DeepSpeed tutorial.
- cpu_checkpointing (bool) – Offloads partitioned activations to CPU if partition_activations is enabled.
- contiguous_memory_optimization (bool) – Copies partitioned activations so that they are contiguous in memory. Not supported by all models.
- synchronize_checkpoint_boundary (bool) – Insert torch.cuda.synchronize() at each checkpoint boundary.
- save_full_weights (bool) – Gathers weights across all processes before saving to disk when using ZeRO Stage 3. This allows a single weight file to contain the entire model, rather than individual sharded weight files. Disable to save sharded states individually. (default: True)
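If the constructor arguments are not enough, a full DeepSpeed config can be supplied via the config argument; all constructor defaults are then ignored. The sketch below is a hypothetical config dict that roughly mirrors the ZeRO stage 2 defaults above, using the DeepSpeed JSON schema linked in the parameter description; the exact keys and file path are assumptions, not values taken from this page.

from pytorch_lightning.plugins import DeepSpeedPlugin

# Hypothetical config mirroring the ZeRO stage 2 defaults; when a config is
# passed in, only these values apply and the constructor defaults are ignored.
deepspeed_config = {
    "zero_allow_untested_optimizer": True,
    "zero_optimization": {
        "stage": 2,
        "cpu_offload": True,
        "contiguous_gradients": True,
        "overlap_comm": True,
        "allgather_partitions": True,
        "reduce_scatter": True,
        "allgather_bucket_size": 2e8,
        "reduce_bucket_size": 2e8,
    },
}

plugin = DeepSpeedPlugin(config=deepspeed_config)

# Alternatively, point to a JSON file with the same structure (path is illustrative):
# plugin = DeepSpeedPlugin(config="/path/to/deepspeed_config.json")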
model_sharded_context()

Provides a hook to create modules in a distributed-aware context. This is useful when we want to shard the model immediately, which saves memory and initialization time for extremely large models.

Returns: Model parallel context.
restore_model_state_from_ckpt_path(ckpt_path, map_location=<function DeepSpeedPlugin.<lambda>>)

This function is used to load and restore the model state.

Returns:
- checkpoint: the loaded checkpoint
- bool: whether to load optimizer / lr_scheduler states from the checkpoint
save_checkpoint(checkpoint, filepath)

Save model/training states as a checkpoint file through state-dump and file-write.
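From user code, checkpointing still goes through the Trainer, which routes the call to this plugin method. A hedged sketch, reusing the hypothetical ToyModule from the usage example above and assuming a Lightning/DeepSpeed version where ZeRO stage 3 is available:

# ToyModule is the hypothetical module defined in the first sketch above.
trainer = pl.Trainer(
    gpus=4,
    precision=16,
    plugins=DeepSpeedPlugin(stage=3, save_full_weights=True),
)
trainer.fit(ToyModule())
# With save_full_weights=True, weights are gathered from all processes so a
# single consolidated checkpoint file is written instead of sharded files.
trainer.save_checkpoint("model.ckpt")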
update_global_step(total_batch_idx, current_global_step)

Provide a hook to count optimizer step calls.

Returns: the new count of optimizer step calls
Return type: int
property lightning_module

Returns the pure LightningModule without potential wrappers.