- class pytorch_lightning.plugins.training_type.DeepSpeedPlugin(zero_optimization=True, stage=2, remote_device='cpu', offload_optimizer=False, offload_parameters=False, offload_params_device='cpu', nvme_path='/local_nvme', params_buffer_count=5, params_buffer_size=100000000.0, max_in_cpu=1000000000.0, offload_optimizer_device='cpu', optimizer_buffer_count=4, block_size=1048576, queue_depth=8, single_submit=False, overlap_events=True, thread_count=1, pin_memory=False, sub_group_size=1000000000000.0, contiguous_gradients=True, overlap_comm=True, allgather_partitions=True, reduce_scatter=True, allgather_bucket_size=200000000.0, reduce_bucket_size=200000000.0, zero_allow_untested_optimizer=True, logging_batch_size_per_gpu='auto', config=None, logging_level=30, num_nodes=None, parallel_devices=None, cluster_environment=None, loss_scale=0, initial_scale_power=16, loss_scale_window=1000, hysteresis=2, min_loss_scale=1, partition_activations=False, cpu_checkpointing=False, contiguous_memory_optimization=False, synchronize_checkpoint_boundary=False, load_full_weights=False, partition_module=True)¶
Provides capabilities to run training using the DeepSpeed library, with training optimizations for large billion-parameter models. For more information: https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html#deepspeed.
DeepSpeedPlugin is in beta and subject to change.
Defaults have been set to enable ZeRO-Offload, and some have been taken from the link below. These defaults are general-purpose and may require tuning for optimum performance based on your model size. For more information: https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training. A usage sketch follows the parameter descriptions below.
stage (int) – Different stages of the ZeRO Optimizer. 0 is disabled, 1 is optimizer state partitioning, 2 is optimizer+gradient state partitioning, and 3 is optimizer+gradient+parameter partitioning using the infinity engine.
optimizer_buffer_count (int) – Number of buffers in the buffer pool for optimizer state offloading when offload_optimizer_device is set to nvme. This should be at least the number of states maintained per parameter by the optimizer. For example, the Adam optimizer has 4 states (parameter, gradient, momentum, and variance).
logging_batch_size_per_gpu (Union[str, int]) – Config used in DeepSpeed to calculate verbose timing for logging on a per-sample-per-second basis (only displayed if logging=logging.INFO). If set to “auto”, the plugin tries to infer this from the train DataLoader’s BatchSampler, else it defaults to 1. To obtain accurate logs when using datasets that do not support batch samplers, set this to the actual per-GPU batch size (trainer.batch_size).
config (Union[Path, str, dict, None]) – Pass in a DeepSpeed-formatted config dict, or a path to a DeepSpeed config: https://www.deepspeed.ai/docs/config-json. All defaults will be ignored if a config is passed in.
partition_activations (bool) – Enables partition activations when used with ZeRO Stage 3 and model parallelism. Still requires you to wrap your forward functions in deepspeed.checkpointing.checkpoint. See the DeepSpeed tutorial.
load_full_weights (bool) – True when loading a single checkpoint file containing the model state dict when using ZeRO Stage 3. This differs from the DeepSpeed checkpoint, which contains shards per worker.
partition_module (bool) – When True, partitions the LightningModule across devices when using ZeRO Stage 3. This is the default behaviour to ensure that the entire module is appropriately initialized for DeepSpeed. When False, we do not explicitly convert the model, which is fine if NO layers or ALL layers are defined in configure_sharded_model. This is useful for layers such as torch.nn.RNN which do internal logic when moving to device.
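A minimal usage sketch, assuming a hypothetical MyLitModel LightningModule and four GPUs; the stage and offload settings are purely illustrative and map onto the parameters documented above:

```python
import pytorch_lightning as pl
from pytorch_lightning.plugins import DeepSpeedPlugin

model = MyLitModel()  # hypothetical LightningModule defined elsewhere

trainer = pl.Trainer(
    gpus=4,
    precision=16,
    plugins=DeepSpeedPlugin(
        stage=3,                  # ZeRO Stage 3: partition optimizer, gradient and parameter state
        offload_optimizer=True,   # offload optimizer state to CPU (ZeRO-Offload)
        offload_parameters=True,  # offload parameters to CPU as well
    ),
    # Alternatively, pass a full DeepSpeed config instead of keyword arguments;
    # remember that all plugin defaults are then ignored:
    # plugins=DeepSpeedPlugin(config="/path/to/deepspeed_config.json"),
)
trainer.fit(model)
```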
Provide a hook to create modules in a distributed-aware context. This is useful when we’d like to shard the model instantly, which can save memory and initialization time for extremely large models.
Returns: Model parallel context.
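For example, a module that defines its layers in configure_sharded_model (see partition_module above) has them created inside this context, so with ZeRO Stage 3 the weights can be materialized in a sharded fashion rather than fully on every device. A hedged sketch with placeholder layer sizes:

```python
import torch
import pytorch_lightning as pl


class ShardedExample(pl.LightningModule):
    def configure_sharded_model(self):
        # Runs inside the plugin's model-parallel context, so these layers are
        # instantiated in a distributed-aware way instead of fully on each device.
        self.block = torch.nn.Sequential(
            torch.nn.Linear(32, 32),
            torch.nn.ReLU(),
            torch.nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.block(x)
```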
Hook to do something before the training/evaluation/prediction starts.
- save_checkpoint(checkpoint, filepath)¶
Save model/training states as a checkpoint file through state-dump and file-write.
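As a usage sketch (assuming a trainer configured as above), checkpoints are written through the standard Trainer API, which routes to this method; with ZeRO Stage 3 the result is sharded per worker unless a consolidated state dict is later restored via load_full_weights.

```python
# Writes model/training state via the plugin's save_checkpoint.
# With ZeRO Stage 3 this produces per-worker shards rather than one flat file.
trainer.save_checkpoint("example.ckpt")
```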
- property handles_gradient_accumulation: bool¶
Whether the plugin handles gradient accumulation internally.
- Return type
bool
- property lightning_module¶
Returns the pure LightningModule without potential wrappers.
- property lightning_restore_optimizer_and_schedulers: bool¶
Override to disable Lightning restoring optimizers/schedulers.
This is useful for plugins which manage restoring optimizers/schedulers.
- Return type
bool