Lightning implements various tricks to help during training.
Accumulate gradients¶
Accumulated gradients run K small batches of size N before doing a backward pass. The effect is a large effective batch size of K×N.
# DEFAULT (ie: no accumulated grads)
trainer = Trainer(accumulate_grad_batches=1)
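For example, to accumulate gradients over 4 batches (for an effective batch size of 4×N):

# Accumulate gradients over 4 batches before each optimizer step
trainer = Trainer(accumulate_grad_batches=4)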
Gradient Clipping¶
Gradient clipping may be enabled to avoid exploding gradients. Specifically, this will clip the gradient norm computed over all model parameters together.
# DEFAULT (ie: don't clip)
trainer = Trainer(gradient_clip_val=0)

# clip gradients with norm above 0.5
trainer = Trainer(gradient_clip_val=0.5)
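In plain PyTorch terms, clipping at 0.5 corresponds roughly to the following call between the backward pass and the optimizer step (a sketch of the behavior, not Lightning's exact internals):

# Rescale all gradients together so their global norm is at most 0.5
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)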
Stochastic Weight Averaging¶
Stochastic Weight Averaging (SWA) can make your models generalize better at virtually no additional cost. This can be used with both non-trained and trained models. The SWA procedure smooths the loss landscape, thus making it harder to end up in a local minimum during optimization.
For a more detailed explanation of SWA and how it works, read this post by the PyTorch team.
# Enable Stochastic Weight Averaging
trainer = Trainer(stochastic_weight_avg=True)
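For intuition, here is a minimal sketch of what SWA amounts to in plain PyTorch (this is what Lightning automates for you); it assumes a model, train_loader, and loss_fn are already defined, and the epoch counts are arbitrary:

# Plain-PyTorch sketch of SWA; assumes `model`, `train_loader`, `loss_fn` exist
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
swa_model = AveragedModel(model)           # maintains the running weight average
swa_scheduler = SWALR(optimizer, swa_lr=0.05)
swa_start = 75                             # epoch at which averaging begins

for epoch in range(100):
    for batch, target in train_loader:
        loss = loss_fn(model(batch), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)  # fold current weights into the average
        swa_scheduler.step()

update_bn(train_loader, swa_model)         # recompute BatchNorm running statistics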
Auto scaling of batch size¶
Auto scaling of batch size may be enabled to find the largest batch size that fits into memory. A larger batch size often yields better gradient estimates, but may also result in longer training time. Inspired by https://github.com/BlackHC/toma.
# DEFAULT (ie: don't scale batch size automatically)
trainer = Trainer(auto_scale_batch_size=None)

# Autoscale batch size: set to 'power' or 'binsearch'
trainer = Trainer(auto_scale_batch_size='power')

# find the batch size
trainer.tune(model)
Currently, this feature supports two modes: 'power' scaling and 'binsearch' scaling. In 'power' scaling, the algorithm starts from a batch size of 1 and keeps doubling it until an out-of-memory (OOM) error is encountered. Setting the argument to 'binsearch' will also initially double the batch size until it encounters an OOM, after which it performs a binary search to fine-tune the batch size (e.g., if 32 succeeds but 64 fails, it probes sizes between 32 and 64). Additionally, note that the batch size scaler cannot search for batch sizes larger than the size of the training dataset.
This feature expects a batch_size field to exist either as a model attribute (i.e. model.batch_size) or as a field in your hparams (i.e. model.hparams.batch_size). The field must exist and will be overridden by the result of this algorithm. Additionally, your train_dataloader() method should depend on this field for the feature to work, e.g.:
def train_dataloader(self):
    # read the batch size from self.batch_size or self.hparams.batch_size,
    # whichever of the two your model defines
    return DataLoader(train_dataset, batch_size=self.hparams.batch_size)
Due to these constraints, this feature does NOT work when passing dataloaders directly to .fit().
The scaling algorithm has a number of parameters that the user can control by invoking the scale_batch_size method directly (see description below).
# Use default in trainer construction
trainer = Trainer()
tuner = Tuner(trainer)

# Invoke method
new_batch_size = tuner.scale_batch_size(model, *extra_parameters_here)

# Override old batch size
model.hparams.batch_size = new_batch_size

# Fit as normal
trainer.fit(model)
In short, the algorithm works by:
1. Dumping the current state of the model and trainer.
2. Iterating until convergence or until the maximum number of tries, max_trials (default 25), has been reached:
   - Call the fit() method of the trainer. This evaluates steps_per_trial (default 3) training steps. Each training step can trigger an OOM error if the tensors (training batch, weights, gradients, etc.) allocated during the steps have too large a memory footprint.
   - If an OOM error is encountered, decrease the batch size; else increase it. How much the batch size is increased/decreased is determined by the chosen strategy (see the sketch after this list).
3. Saving the found batch size to either model.batch_size or model.hparams.batch_size.
4. Restoring the initial state of the model and trainer.
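As a rough illustration of the 'power' strategy, the loop amounts to something like the following (a sketch of the idea, not Lightning's actual implementation; the helper names are hypothetical):

# Hypothetical sketch of 'power' scaling: double until OOM, then back off
import torch

def find_batch_size_power(run_trial_steps, init_val=2, max_trials=25):
    # run_trial_steps(batch_size) runs steps_per_trial training steps and
    # raises a CUDA OOM RuntimeError if the batch does not fit in memory
    batch_size = init_val
    for _ in range(max_trials):
        try:
            run_trial_steps(batch_size)
            batch_size *= 2            # fits: double and try again
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise
            torch.cuda.empty_cache()   # free the failed allocation
            batch_size //= 2           # fall back to the last size that fit
            break
    return batch_size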
scale_batch_size(model, mode='power', steps_per_trial=3, init_val=2, max_trials=25, batch_arg_name='batch_size', **fit_kwargs)
Will iteratively try to find the largest batch size for a given model that does not give an out-of-memory (OOM) error.
model – Model to fit.
mode (str) – string setting the search mode. Either 'power' or 'binsearch'. If mode is 'power', we keep multiplying the batch size by 2 until we get an OOM error. If mode is 'binsearch', we will initially also keep multiplying by 2 and, after encountering an OOM error, do a binary search between the last successful batch size and the batch size that failed.
batch_arg_name (str) – name of the attribute that stores the batch size. It is looked up on the model, model.hparams, or trainer.datamodule (the datamodule passed to the tune method).
**fit_kwargs – remaining arguments to be passed to .fit(), e.g., dataloader or datamodule.
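For example, a call with non-default arguments might look like the following (a sketch reusing the Tuner setup from above; the argument values are arbitrary):

# Search with binary-search refinement, 5 steps per trial, starting at 8
new_batch_size = tuner.scale_batch_size(
    model,
    mode='binsearch',
    steps_per_trial=5,
    init_val=8,
    max_trials=10,
    batch_arg_name='batch_size',
)
model.hparams.batch_size = new_batch_size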
The batch size finder is not supported for DDP yet; it is coming soon.
Sequential Model Parallelism with Checkpointing¶
PyTorch Lightning integrates Sequential Model Parallelism using FairScale. Sequential Model Parallelism splits a sequential module onto multiple GPUs, reducing peak GPU memory requirements substantially.
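To illustrate the underlying idea, here is a minimal FairScale-level sketch (independent of Lightning's integration) of splitting an nn.Sequential across two GPUs with activation checkpointing; the layer sizes and balance are arbitrary:

# Split a sequential module across 2 GPUs with FairScale's Pipe
import torch.nn as nn
from fairscale.nn import Pipe

model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
)
# balance=[2, 1]: first two layers on the first GPU, the last on the second;
# checkpoint='always' trades recomputation for lower peak activation memory
model = Pipe(model, balance=[2, 1], checkpoint='always')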
For more information, refer to Sequential Model Parallelism with Checkpointing.