Trainer

Once you’ve organized your PyTorch code into a LightningModule, the Trainer automates everything else.

The Trainer achieves the following:

  1. You maintain control over all aspects via PyTorch code in your LightningModule.

  2. The trainer uses best practices embedded by contributors and users from top AI labs such as Facebook AI Research, NYU, MIT, and Stanford.

  3. The trainer allows disabling any key part that you don’t want automated.



Basic use

This is the basic use of the trainer:

from lightning.pytorch import Trainer

model = MyLightningModule()

trainer = Trainer()
trainer.fit(model, train_dataloader, val_dataloader)
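For context, a MyLightningModule like the one above can be as small as the following sketch (a hypothetical toy module; any LightningModule defining training_step and configure_optimizers will work):

import torch
from torch import nn
from lightning.pytorch import LightningModule


class MyLightningModule(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)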

Under the hood

The Lightning Trainer does much more than just “training”. Under the hood, it handles all loop details for you. Some examples include:

  • Automatically enabling/disabling grads

  • Running the training, validation and test dataloaders

  • Calling the Callbacks at the appropriate times

  • Putting batches and computations on the correct devices

Here’s the pseudocode for what the trainer does under the hood (showing the train loop only):

# put model in train mode
model.train()
torch.set_grad_enabled(True)

losses = []
for batch in train_dataloader:
    # calls hooks like this one
    on_train_batch_start()

    # train step
    loss = training_step(batch)

    # clear gradients
    optimizer.zero_grad()

    # backward
    loss.backward()

    # update parameters
    optimizer.step()

    losses.append(loss)
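For completeness, the validation portion follows the same pattern. A simplified sketch (omitting hooks, metric aggregation, and precision contexts):

# put model in eval mode and disable grads
model.eval()
torch.set_grad_enabled(False)

for batch in val_dataloader:
    # compute the validation metrics/loss for this batch
    validation_step(batch)

# restore training mode for the next training epoch
model.train()
torch.set_grad_enabled(True)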

Trainer in Python scripts

In Python scripts, it’s recommended you use a main function to call the Trainer.

from argparse import ArgumentParser

from lightning.pytorch import LightningModule, Trainer


def main(hparams):
    model = LightningModule()
    trainer = Trainer(accelerator=hparams.accelerator, devices=hparams.devices)
    trainer.fit(model)


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--accelerator", default=None)
    parser.add_argument("--devices", default=None)
    args = parser.parse_args()

    main(args)

Then you can run it like this:

python main.py --accelerator 'gpu' --devices 2

Note

Pro-tip: You don’t need to define all flags manually. You can let the LightningCLI create the Trainer and model with arguments supplied from the CLI.

If you want to stop a training run early, you can press “Ctrl + C” on your keyboard. The trainer will catch the KeyboardInterrupt and attempt a graceful shutdown. The trainer object will also set an attribute interrupted to True in such cases. If you have a callback which shuts down compute resources, for example, you can conditionally run the shutdown logic for only uninterrupted runs by overriding lightning.pytorch.Callback.on_exception().
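For instance, a minimal sketch of such a callback (shutdown_compute_resources() is a hypothetical placeholder for your own teardown logic):

from lightning.pytorch.callbacks import Callback


class TeardownCallback(Callback):
    def on_exception(self, trainer, pl_module, exception):
        # skip the teardown when the run was interrupted via Ctrl + C
        if not trainer.interrupted:
            shutdown_compute_resources()  # hypothetical helper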


Validation

You can perform an evaluation epoch over the validation set, outside of the training loop, using validate(). This might be useful if you want to collect new metrics from a model right at its initialization or after it has already been trained.

trainer.validate(model=model, dataloaders=val_dataloaders)

Testing

Once you’re done training, feel free to run the test set! (Only right before publishing your paper or pushing to production)

trainer.test(dataloaders=test_dataloaders)

Reproducibility

To ensure full reproducibility from run to run, you need to set seeds for pseudo-random generators and set the deterministic flag in the Trainer.

Example:

from lightning.pytorch import Trainer, seed_everything

seed_everything(42, workers=True)
# sets seeds for numpy, torch and python.random.
model = Model()
trainer = Trainer(deterministic=True)

By setting workers=True in seed_everything(), Lightning derives unique seeds across all dataloader workers and processes for torch, numpy and stdlib random number generators. When turned on, it ensures that e.g. data augmentations are not repeated across workers.


Trainer flags

accelerator

Supports passing different accelerator types ("cpu", "gpu", "tpu", "ipu", "auto") as well as custom accelerator instances.

# CPU accelerator
trainer = Trainer(accelerator="cpu")

# Training with GPU Accelerator using 2 GPUs
trainer = Trainer(devices=2, accelerator="gpu")

# Training with TPU Accelerator using 8 tpu cores
trainer = Trainer(devices=8, accelerator="tpu")

# Training with GPU Accelerator using the DistributedDataParallel strategy
trainer = Trainer(devices=4, accelerator="gpu", strategy="ddp")

Note

The "auto" option recognizes the machine you are on, and selects the appropriate Accelerator.

# If your machine has GPUs, it will use the GPU Accelerator for training
trainer = Trainer(devices=2, accelerator="auto")

You can also modify hardware behavior by subclassing an existing accelerator to adjust for your needs.

Example:

class MyOwnAcc(CPUAccelerator):
    ...

Trainer(accelerator=MyOwnAcc())


accumulate_grad_batches

Accumulates gradients over k batches before stepping the optimizer.

# default used by the Trainer (no accumulation)
trainer = Trainer(accumulate_grad_batches=1)

Example:

# accumulate every 4 batches (effective batch size is batch*4)
trainer = Trainer(accumulate_grad_batches=4)

See also: Gradient Accumulation to enable more fine-grained accumulation schedules.
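For example, the GradientAccumulationScheduler callback from lightning.pytorch.callbacks can change the accumulation factor as training progresses. A sketch:

from lightning.pytorch.callbacks import GradientAccumulationScheduler

# accumulate 8 batches starting from epoch 0, 4 from epoch 4, and none from epoch 8
accumulator = GradientAccumulationScheduler(scheduling={0: 8, 4: 4, 8: 1})
trainer = Trainer(callbacks=[accumulator])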

benchmark


The value (True or False) to set torch.backends.cudnn.benchmark to. If left as None, the value of torch.backends.cudnn.benchmark already set in the current session is used (False if not manually set). If deterministic is set to True, this will default to False. You can read more about the interaction of torch.backends.cudnn.benchmark and torch.backends.cudnn.deterministic in the PyTorch documentation.

Setting this flag to True can increase the speed of your system if your input sizes don’t change. However, if they do, it might make your system slower: the cuDNN auto-tuner will try to find the best algorithm for the hardware each time a new input size is encountered, which can also increase memory usage. Read more in the PyTorch documentation.

Example:

# Will use whatever the current value of torch.backends.cudnn.benchmark is (normally False)
trainer = Trainer(benchmark=None)  # default

# you can overwrite the value
trainer = Trainer(benchmark=True)

deterministic


This flag sets the torch.backends.cudnn.deterministic flag. Might make your system slower, but ensures reproducibility.

For more info check PyTorch docs.

Example:

# default used by the Trainer
trainer = Trainer(deterministic=False)

callbacks

This argument can be used to add a Callback or a list of them. Callbacks run sequentially in the order defined here with the exception of ModelCheckpoint callbacks which run after all others to ensure all states are saved to the checkpoints.

# single callback
trainer = Trainer(callbacks=PrintCallback())

# a list of callbacks
trainer = Trainer(callbacks=[PrintCallback()])

Example:

from lightning.pytorch.callbacks import Callback


class PrintCallback(Callback):
    def on_train_start(self, trainer, pl_module):
        print("Training is started!")

    def on_train_end(self, trainer, pl_module):
        print("Training is done.")

Model-specific callbacks can also be added inside the LightningModule through configure_callbacks(). Callbacks returned in this hook will extend the list initially given to the Trainer argument, and replace the trainer callbacks should there be two or more of the same type. ModelCheckpoint callbacks always run last.
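A minimal sketch of configure_callbacks(), assuming the model logs a "val_loss" metric that an EarlyStopping callback can monitor:

from lightning.pytorch import LightningModule
from lightning.pytorch.callbacks import EarlyStopping


class LitModel(LightningModule):
    def configure_callbacks(self):
        return [EarlyStopping(monitor="val_loss", mode="min")]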

check_val_every_n_epoch


Check val every n train epochs.

Example:

# default used by the Trainer
trainer = Trainer(check_val_every_n_epoch=1)

# run val loop every 10 training epochs
trainer = Trainer(check_val_every_n_epoch=10)

default_root_dir


Default path for logs and weights when no logger or lightning.pytorch.callbacks.ModelCheckpoint callback is passed. On certain clusters you might want to separate where logs and checkpoints are stored. If you don’t need that separation, this argument is a convenient single setting. Paths can be local paths or remote paths such as s3://bucket/path or hdfs://path/. Credentials will need to be set up to use remote filepaths.

import os

# default used by the Trainer
trainer = Trainer(default_root_dir=os.getcwd())

devices

Number of devices to train on (int), which devices to train on (list or str), or "auto".

# Training with CPU Accelerator using 2 processes
trainer = Trainer(devices=2, accelerator="cpu")

# Training with GPU Accelerator using GPUs 1 and 3
trainer = Trainer(devices=[1, 3], accelerator="gpu")

# Training with TPU Accelerator using 8 tpu cores
trainer = Trainer(devices=8, accelerator="tpu")

Tip

The "auto" option recognizes the devices to train on, depending on the Accelerator being used.

# Use whatever hardware your machine has available
trainer = Trainer(devices="auto", accelerator="auto")

# Training with CPU Accelerator using 1 process
trainer = Trainer(devices="auto", accelerator="cpu")

# Training with TPU Accelerator using 8 tpu cores
trainer = Trainer(devices="auto", accelerator="tpu")

# Training with IPU Accelerator using 4 ipus
trainer = Trainer(devices="auto", accelerator="ipu")

Note

If the devices flag is not defined, it will assume devices to be "auto" and fetch the auto_device_count from the accelerator.

# This is part of the built-in `CUDAAccelerator`
class CUDAAccelerator(Accelerator):
    """Accelerator for GPU devices."""

    @staticmethod
    def auto_device_count() -> int:
        """Get the devices when set to auto."""
        return torch.cuda.device_count()


# Training with GPU Accelerator using total number of gpus available on the system
Trainer(accelerator="gpu")

enable_checkpointing

By default Lightning saves a checkpoint for you in your current working directory, with the state of your last training epoch. Checkpoints capture the exact value of all parameters used by a model. To disable automatic checkpointing, set this to False.

# default used by Trainer, saves the most recent model to a single checkpoint after each epoch
trainer = Trainer(enable_checkpointing=True)

# turn off automatic checkpointing
trainer = Trainer(enable_checkpointing=False)

You can override the default behavior by initializing the ModelCheckpoint callback, and adding it to the callbacks list. See Saving and Loading Checkpoints for how to customize checkpointing.

from lightning.pytorch.callbacks import ModelCheckpoint

# Init ModelCheckpoint callback, monitoring 'val_loss'
checkpoint_callback = ModelCheckpoint(monitor="val_loss")

# Add your callback to the callbacks list
trainer = Trainer(callbacks=[checkpoint_callback])
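For instance, a sketch that keeps only the three best checkpoints ranked by validation loss (save_top_k and mode are documented ModelCheckpoint arguments):

from lightning.pytorch.callbacks import ModelCheckpoint

# keep the 3 checkpoints with the lowest 'val_loss'
checkpoint_callback = ModelCheckpoint(monitor="val_loss", save_top_k=3, mode="min")
trainer = Trainer(callbacks=[checkpoint_callback])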

fast_dev_run


Runs n batches if set to n (int), or 1 batch if set to True, to ensure your code will execute without errors. This applies to fitting, validating, testing, and predicting. This flag is only recommended for debugging purposes and should not be used to limit the number of batches to run.

# default used by the Trainer
trainer = Trainer(fast_dev_run=False)

# runs only 1 training and 1 validation batch and the program ends
trainer = Trainer(fast_dev_run=True)
trainer.fit(...)

# runs 7 predict batches and program ends
trainer = Trainer(fast_dev_run=7)
trainer.predict(...)

This argument is different from limit_{train,val,test,predict}_batches because side effects are avoided to reduce the impact on subsequent runs. These are the changes enabled:

  • Sets Trainer(max_epochs=1).

  • Sets Trainer(max_steps=...) to 1 or the number passed.

  • Sets Trainer(num_sanity_val_steps=0).

  • Sets Trainer(val_check_interval=1.0).

  • Sets Trainer(check_val_every_n_epoch=1).

  • Disables all loggers.

  • Disables passing logged metrics to loggers.

  • The ModelCheckpoint callbacks will not trigger.

  • The EarlyStopping callbacks will not trigger.

  • Sets limit_{train,val,test,predict}_batches to 1 or the number passed.

  • Disables the tuning callbacks (BatchSizeFinder, LearningRateFinder).

  • If using the CLI, the configuration file is not saved.

gradient_clip_val


The value at which to clip gradients. The default, None, disables gradient clipping.

# default used by the Trainer
trainer = Trainer(gradient_clip_val=None)
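To enable clipping, pass a positive value. Combined with the gradient_clip_algorithm argument (see the class API below), you can clip by norm (the default) or by value:

# clip gradients' global norm to <= 0.5
trainer = Trainer(gradient_clip_val=0.5)

# clip gradients' maximum magnitude to <= 0.5
trainer = Trainer(gradient_clip_val=0.5, gradient_clip_algorithm="value")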

limit_train_batches


How much of training dataset to check. Useful when debugging or testing something that happens at the end of an epoch.

# default used by the Trainer
trainer = Trainer(limit_train_batches=1.0)

Example:

# default used by the Trainer
trainer = Trainer(limit_train_batches=1.0)

# run through only 25% of the training set each epoch
trainer = Trainer(limit_train_batches=0.25)

# run through only 10 batches of the training set each epoch
trainer = Trainer(limit_train_batches=10)

limit_test_batches


How much of test dataset to check.

# default used by the Trainer
trainer = Trainer(limit_test_batches=1.0)

# run through only 25% of the test set each epoch
trainer = Trainer(limit_test_batches=0.25)

# run for only 10 batches
trainer = Trainer(limit_test_batches=10)

In the case of multiple test dataloaders, the limit applies to each dataloader individually.

limit_val_batches


How much of validation dataset to check. Useful when debugging or testing something that happens at the end of an epoch.

# default used by the Trainer
trainer = Trainer(limit_val_batches=1.0)

# run through only 25% of the validation set each epoch
trainer = Trainer(limit_val_batches=0.25)

# run for only 10 batches
trainer = Trainer(limit_val_batches=10)

# disable validation
trainer = Trainer(limit_val_batches=0)

In the case of multiple validation dataloaders, the limit applies to each dataloader individually.

log_every_n_steps


How often to add logging rows (does not write to disk)

# default used by the Trainer
trainer = Trainer(log_every_n_steps=50)

logger

Logger (or iterable collection of loggers) for experiment tracking. A True value uses the default TensorBoardLogger shown below. False will disable logging.

import os

from lightning.pytorch.loggers import TensorBoardLogger

# default logger used by trainer (if tensorboard is installed)
logger = TensorBoardLogger(save_dir=os.getcwd(), version=1, name="lightning_logs")
Trainer(logger=logger)
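You can also disable logging entirely, or pass several loggers at once; per the class API below, logger accepts False or an iterable of loggers. A sketch using CSVLogger from lightning.pytorch.loggers:

from lightning.pytorch.loggers import CSVLogger

# disable logging
trainer = Trainer(logger=False)

# use multiple loggers at once
trainer = Trainer(logger=[logger, CSVLogger(save_dir="logs/")])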

max_epochs


Stop training once this number of epochs is reached.

# default used by the Trainer
trainer = Trainer(max_epochs=1000)

If both max_epochs and max_steps aren’t specified, max_epochs will default to 1000. To enable infinite training, set max_epochs = -1.

min_epochs


Force training for at least this many epochs.

# default used by the Trainer
trainer = Trainer(min_epochs=1)

max_steps


Stop training after this number of global steps. Training stops when either max_steps or max_epochs is reached, whichever comes first.

# Default (disabled)
trainer = Trainer(max_steps=-1)

# Stop after 100 steps
trainer = Trainer(max_steps=100)

If max_steps is not specified, max_epochs will be used instead (and max_epochs itself defaults to 1000 when neither is specified). To enable infinite training, set max_epochs = -1.

min_steps


Force training for at least this number of global steps. The Trainer will train the model for at least min_steps or min_epochs, whichever is reached last.

# Default (disabled)
trainer = Trainer(min_steps=None)

# Run at least for 100 steps (disable min_epochs)
trainer = Trainer(min_steps=100, min_epochs=0)

max_time

Set the maximum amount of time for training. Training will be interrupted mid-epoch when the limit is reached. For customizable options, use the Timer callback.

# Default (disabled)
trainer = Trainer(max_time=None)

# Stop after 12 hours of training or when reaching 10 epochs (string)
trainer = Trainer(max_time="00:12:00:00", max_epochs=10)

# Stop after 1 day and 5 hours (dict)
trainer = Trainer(max_time={"days": 1, "hours": 5})
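Per the class API below, max_time also accepts a datetime.timedelta:

from datetime import timedelta

# Stop after 12 hours of training (timedelta)
trainer = Trainer(max_time=timedelta(hours=12))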

In case max_time is used together with min_steps or min_epochs, the min_* requirement always has precedence.

num_nodes


Number of GPU nodes for distributed training.

# default used by the Trainer
trainer = Trainer(num_nodes=1)

# to train on 8 nodes
trainer = Trainer(num_nodes=8)

num_sanity_val_steps


Sanity check runs n batches of val before starting the training routine. This catches any bugs in your validation without having to wait for the first validation check. The Trainer uses 2 steps by default. Turn it off or modify it here.

# default used by the Trainer
trainer = Trainer(num_sanity_val_steps=2)

# turn it off
trainer = Trainer(num_sanity_val_steps=0)

# check all validation data
trainer = Trainer(num_sanity_val_steps=-1)

This option will reset the validation dataloader unless num_sanity_val_steps=0.

overfit_batches


Uses this fraction (float) or number of batches (int) of the training and validation sets. If the training and validation dataloaders have shuffle=True, Lightning will automatically disable it.

Useful for quickly debugging or trying to overfit on purpose.

# default used by the Trainer
trainer = Trainer(overfit_batches=0.0)

# use only 1% of the train & val set
trainer = Trainer(overfit_batches=0.01)

# overfit on 10 of the same batches
trainer = Trainer(overfit_batches=10)

plugins

Plugins allow you to connect arbitrary backends, precision libraries, cluster environments, and more.

To define your own behavior, subclass the relevant class and pass it in. Here’s an example linking up your own ClusterEnvironment.

from lightning.pytorch.plugins.environments import ClusterEnvironment


class MyCluster(ClusterEnvironment):
    def main_address(self):
        return your_main_address

    def main_port(self):
        return your_main_port

    def world_size(self):
        return the_world_size


trainer = Trainer(plugins=[MyCluster()], ...)

precision

Lightning supports either double (64), float (32), bfloat16 (bf16), or half (16) precision training.

Half precision, or mixed precision, is the combined use of 32- and 16-bit floating points to reduce the memory footprint during model training. This can result in improved performance, achieving upwards of 3x speedups on modern GPUs.

# default used by the Trainer
trainer = Trainer(precision=32)

# 16-bit precision
trainer = Trainer(precision="16-mixed", accelerator="gpu", devices=1)  # works only on CUDA

# bfloat16 precision
trainer = Trainer(precision="bf16-mixed")

# 64-bit precision
trainer = Trainer(precision=64)

Note

When running on TPUs, torch.bfloat16 will be used but tensor printing will still show torch.float32.

profiler


To profile individual steps during training and assist in identifying bottlenecks.

See the profiler documentation for more details.

from lightning.pytorch.profilers import SimpleProfiler, AdvancedProfiler

# default used by the Trainer
trainer = Trainer(profiler=None)

# to profile standard training events, equivalent to `profiler=SimpleProfiler()`
trainer = Trainer(profiler="simple")

# advanced profiler for function-level stats, equivalent to `profiler=AdvancedProfiler()`
trainer = Trainer(profiler="advanced")

enable_progress_bar

Whether to enable or disable the progress bar. Defaults to True.

# default used by the Trainer
trainer = Trainer(enable_progress_bar=True)

# disable progress bar
trainer = Trainer(enable_progress_bar=False)

reload_dataloaders_every_n_epochs


Set to a positive integer to reload dataloaders every n epochs from your currently used data source. The data source can be a LightningModule or a LightningDataModule.
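For example:

# reload the train/val dataloaders from the data source every 10 epochs
trainer = Trainer(reload_dataloaders_every_n_epochs=10)

In pseudocode, the behavior is: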

# if 0 (default)
train_loader = model.train_dataloader()
# or if using data module: datamodule.train_dataloader()
for epoch in epochs:
    for batch in train_loader:
        ...

# if a positive integer
for epoch in epochs:
    if not epoch % reload_dataloaders_every_n_epochs:
        train_loader = model.train_dataloader()
        # or if using data module: datamodule.train_dataloader()
    for batch in train_loader:
        ...

The same pseudocode applies to the val_dataloader.

use_distributed_sampler

See the use_distributed_sampler parameter in the Trainer class API below.

# default used by the Trainer
trainer = Trainer(use_distributed_sampler=True)

By setting to False, you have to add your own distributed sampler:

from torch.utils.data import DataLoader, DistributedSampler


# in your LightningModule or LightningDataModule
def train_dataloader(self):
    dataset = ...
    # default used by the Trainer
    sampler = DistributedSampler(dataset, shuffle=True)
    dataloader = DataLoader(dataset, batch_size=32, sampler=sampler)
    return dataloader

strategy

Supports passing different training strategies with aliases ("ddp", "fsdp", etc.) as well as configured strategy instances.

# Data-parallel training with the DDP strategy on 4 GPUs
trainer = Trainer(strategy="ddp", accelerator="gpu", devices=4)

# Model-parallel training with the FSDP strategy on 4 GPUs
trainer = Trainer(strategy="fsdp", accelerator="gpu", devices=4)

Additionally, you can pass a strategy object.

from lightning.pytorch.strategies import DDPStrategy

trainer = Trainer(strategy=DDPStrategy(static_graph=True), accelerator="gpu", devices=2)

sync_batchnorm


Enable synchronization between batchnorm layers across all GPUs.

trainer = Trainer(sync_batchnorm=True)

val_check_interval


How often within one training epoch to check the validation set. Can specify as float or int.

  • pass a float in the range [0.0, 1.0] to check after a fraction of the training epoch.

  • pass an int to check after a fixed number of training batches. An int value can only be higher than the number of training batches when check_val_every_n_epoch=None, which validates after every N training batches across epochs or during iteration-based training.

# default used by the Trainer
trainer = Trainer(val_check_interval=1.0)

# check validation set 4 times during a training epoch
trainer = Trainer(val_check_interval=0.25)

# check validation set every 1000 training batches in the current epoch
trainer = Trainer(val_check_interval=1000)

# check validation set every 1000 training batches across complete epochs or during iteration-based training
# use this when using iterableDataset and your dataset has no length
# (ie: production cases with streaming data)
trainer = Trainer(val_check_interval=1000, check_val_every_n_epoch=None)

# Here is the computation to estimate the total number of batches seen within an epoch.

# Find the total number of train batches
total_train_batches = total_train_samples // (train_batch_size * world_size)

# Compute how many times we will call validation during the training loop
val_check_batch = max(1, int(total_train_batches * val_check_interval))
val_checks_per_epoch = total_train_batches / val_check_batch

# Find the total number of validation batches
total_val_batches = total_val_samples // (val_batch_size * world_size)

# Total number of batches run
total_fit_batches = total_train_batches + total_val_batches

enable_model_summary

Whether to enable or disable the model summarization. Defaults to True.

# default used by the Trainer
trainer = Trainer(enable_model_summary=True)

# disable summarization
trainer = Trainer(enable_model_summary=False)

# enable custom summarization
from lightning.pytorch.callbacks import ModelSummary

trainer = Trainer(enable_model_summary=True, callbacks=[ModelSummary(max_depth=-1)])

inference_mode

Whether to use torch.inference_mode() or torch.no_grad() during evaluation (validate/test/predict).

# default used by the Trainer
trainer = Trainer(inference_mode=True)

# Use `torch.no_grad` instead
trainer = Trainer(inference_mode=False)

With torch.inference_mode() disabled, you can enable gradients for your model layers if required.

class LitModel(LightningModule):
    def validation_step(self, batch, batch_idx):
        preds = self.layer1(batch)
        with torch.enable_grad():
            grad_preds = preds.requires_grad_()
            preds2 = self.layer2(grad_preds)


model = LitModel()
trainer = Trainer(inference_mode=False)
trainer.validate(model)

Trainer class API

Methods

init

Trainer.__init__(*, accelerator='auto', strategy='auto', devices='auto', num_nodes=1, precision='32-true', logger=None, callbacks=None, fast_dev_run=False, max_epochs=None, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, overfit_batches=0.0, val_check_interval=None, check_val_every_n_epoch=1, num_sanity_val_steps=None, log_every_n_steps=None, enable_checkpointing=None, enable_progress_bar=None, enable_model_summary=None, accumulate_grad_batches=1, gradient_clip_val=None, gradient_clip_algorithm=None, deterministic=None, benchmark=None, inference_mode=True, use_distributed_sampler=True, profiler=None, detect_anomaly=False, barebones=False, plugins=None, sync_batchnorm=False, reload_dataloaders_every_n_epochs=0, default_root_dir=None)[source]

Customize every aspect of training via flags.

Parameters
  • accelerator (Union[str, Accelerator]) – Supports passing different accelerator types (“cpu”, “gpu”, “tpu”, “ipu”, “hpu”, “mps”, “auto”) as well as custom accelerator instances.

  • strategy (Union[str, Strategy]) – Supports different training strategies with aliases as well as custom strategies. Default: "auto".

  • devices (Union[List[int], str, int]) – The devices to use. Can be set to a positive number (int or str), a sequence of device indices (list or str), the value -1 to indicate all available devices should be used, or "auto" for automatic selection based on the chosen accelerator. Default: "auto".

  • num_nodes (int) – Number of GPU nodes for distributed training. Default: 1.

  • precision (Union[Literal[64, 32, 16], Literal[‘16-mixed’, ‘bf16-mixed’, ‘32-true’, ‘64-true’], Literal[‘64’, ‘32’, ‘16’, ‘bf16’]]) – Double precision (64, ‘64’ or ‘64-true’), full precision (32, ‘32’ or ‘32-true’), 16bit mixed precision (16, ‘16’, ‘16-mixed’) or bfloat16 mixed precision (‘bf16’, ‘bf16-mixed’). Can be used on CPU, GPU, TPUs, HPUs or IPUs. Default: '32-true'.

  • logger (Union[Logger, Iterable[Logger], bool, None]) – Logger (or iterable collection of loggers) for experiment tracking. A True value uses the default TensorBoardLogger if it is installed, otherwise CSVLogger. False will disable logging. If multiple loggers are provided, local files (checkpoints, profiler traces, etc.) are saved in the log_dir of the first logger. Default: True.

  • callbacks (Union[List[Callback], Callback, None]) – Add a callback or list of callbacks. Default: None.

  • fast_dev_run (Union[int, bool]) – Runs n if set to n (int) else 1 if set to True batch(es) of train, val and test to find any bugs (ie: a sort of unit test). Default: False.

  • max_epochs (Optional[int]) – Stop training once this number of epochs is reached. Disabled by default (None). If both max_epochs and max_steps are not specified, defaults to max_epochs = 1000. To enable infinite training, set max_epochs = -1.

  • min_epochs (Optional[int]) – Force training for at least these many epochs. Disabled by default (None).

  • max_steps (int) – Stop training after this number of steps. Disabled by default (-1). If max_steps = -1 and max_epochs = None, will default to max_epochs = 1000. To enable infinite training, set max_epochs to -1.

  • min_steps (Optional[int]) – Force training for at least these number of steps. Disabled by default (None).

  • max_time (Union[str, timedelta, Dict[str, int], None]) – Stop training after this amount of time has passed. Disabled by default (None). The time duration can be specified in the format DD:HH:MM:SS (days, hours, minutes, seconds), as a datetime.timedelta, or a dictionary with keys that will be passed to datetime.timedelta.

  • limit_train_batches (Union[int, float, None]) – How much of training dataset to check (float = fraction, int = num_batches). Default: 1.0.

  • limit_val_batches (Union[int, float, None]) – How much of validation dataset to check (float = fraction, int = num_batches). Default: 1.0.

  • limit_test_batches (Union[int, float, None]) – How much of test dataset to check (float = fraction, int = num_batches). Default: 1.0.

  • limit_predict_batches (Union[int, float, None]) – How much of prediction dataset to check (float = fraction, int = num_batches). Default: 1.0.

  • overfit_batches (Union[int, float]) – Overfit a fraction of training/validation data (float) or a set number of batches (int). Default: 0.0.

  • val_check_interval (Union[int, float, None]) – How often to check the validation set. Pass a float in the range [0.0, 1.0] to check after a fraction of the training epoch. Pass an int to check after a fixed number of training batches. An int value can only be higher than the number of training batches when check_val_every_n_epoch=None, which validates after every N training batches across epochs or during iteration-based training. Default: 1.0.

  • check_val_every_n_epoch (Optional[int]) – Perform a validation loop after every N training epochs. If None, validation will be done solely based on the number of training batches, requiring val_check_interval to be an integer value. Default: 1.

  • num_sanity_val_steps (Optional[int]) – Sanity check runs n validation batches before starting the training routine. Set it to -1 to run all batches in all validation dataloaders. Default: 2.

  • log_every_n_steps (Optional[int]) – How often to log within steps. Default: 50.

  • enable_checkpointing (Optional[bool]) – If True, enable checkpointing. It will configure a default ModelCheckpoint callback if there is no user-defined ModelCheckpoint in callbacks. Default: True.

  • enable_progress_bar (Optional[bool]) – Whether to enable the progress bar by default. Default: True.

  • enable_model_summary (Optional[bool]) – Whether to enable model summarization by default. Default: True.

  • accumulate_grad_batches (int) – Accumulates gradients over k batches before stepping the optimizer. Default: 1.

  • gradient_clip_val (Union[int, float, None]) – The value at which to clip gradients. Passing gradient_clip_val=None disables gradient clipping. If using Automatic Mixed Precision (AMP), the gradients will be unscaled before clipping. Default: None.

  • gradient_clip_algorithm (Optional[str]) – The gradient clipping algorithm to use. Pass gradient_clip_algorithm="value" to clip by value, and gradient_clip_algorithm="norm" to clip by norm. By default it will be set to "norm".

  • deterministic (Union[bool, Literal[‘warn’], None]) – If True, sets whether PyTorch operations must use deterministic algorithms. Set to "warn" to use deterministic algorithms whenever possible, throwing warnings on operations that don’t support deterministic mode (requires PyTorch 1.11+). If not set, defaults to False. Default: None.

  • benchmark (Optional[bool]) – The value (True or False) to set torch.backends.cudnn.benchmark to. The value for torch.backends.cudnn.benchmark set in the current session will be used (False if not manually set). If deterministic is set to True, this will default to False. Override to manually set a different value. Default: None.

  • inference_mode (bool) – Whether to use torch.inference_mode() or torch.no_grad() during evaluation (validate/test/predict).

  • use_distributed_sampler (bool) – Whether to wrap the DataLoader’s sampler with torch.utils.data.DistributedSampler. If not specified this is toggled automatically for strategies that require it. By default, it will add shuffle=True for the train sampler and shuffle=False for validation/test/predict samplers. If you want to disable this logic, you can pass False and add your own distributed sampler in the dataloader hooks. If True and a distributed sampler was already added, Lightning will not replace the existing one. For iterable-style datasets, we don’t do this automatically.

  • profiler (Union[Profiler, str, None]) – To profile individual steps during training and assist in identifying bottlenecks. Default: None.

  • detect_anomaly (bool) – Enable anomaly detection for the autograd engine. Default: False.

  • barebones (bool) – Whether to run in “barebones mode”, where all features that may impact raw speed are disabled. This is meant for analyzing the Trainer overhead and is discouraged during regular training runs. The following features are deactivated: enable_checkpointing, logger, enable_progress_bar, log_every_n_steps, enable_model_summary, num_sanity_val_steps, fast_dev_run, detect_anomaly, profiler, log(), log_dict().

  • plugins (Union[PrecisionPlugin, ClusterEnvironment, CheckpointIO, LayerSync, str, List[Union[PrecisionPlugin, ClusterEnvironment, CheckpointIO, LayerSync, str]], None]) – Plugins allow modification of core behavior like ddp and amp, and enable custom lightning plugins. Default: None.

  • sync_batchnorm (bool) – Synchronize batch norm layers between process groups/whole world. Default: False.

  • reload_dataloaders_every_n_epochs (int) – Set to a non-negative integer to reload dataloaders every n epochs. Default: 0.

  • default_root_dir (Union[str, Path, None]) – Default path for logs and weights when no logger/ckpt_callback is passed. Default: os.getcwd(). Can be remote file paths such as s3://mybucket/path or hdfs://path/.

fit

Trainer.fit(model, train_dataloaders=None, val_dataloaders=None, datamodule=None, ckpt_path=None)[source]

Runs the full optimization routine.

Parameters
  • model (LightningModule) – Model to fit.

  • train_dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying training samples. Alternatively, a LightningDataModule that defines the train_dataloader hook.

  • val_dataloaders (Optional[Any]) – An iterable or collection of iterables specifying validation samples.

  • ckpt_path (Optional[str]) – Path/URL of the checkpoint from which training is resumed. Could also be one of two special keywords "last" and "hpc". If there is no checkpoint file at the path, an exception is raised. If resuming from mid-epoch checkpoint, training will start from the beginning of the next epoch.

  • datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the train_dataloader hook.

For more information about multiple dataloaders, see this section.

Return type

None

validate

Trainer.validate(model=None, dataloaders=None, ckpt_path=None, verbose=True, datamodule=None)[source]

Perform one evaluation epoch over the validation set.

Parameters
  • model (Optional[LightningModule]) – The model to validate.

  • dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying validation samples. Alternatively, a LightningDataModule that defines the val_dataloader hook.

  • ckpt_path (Optional[str]) – Either "best", "last", "hpc" or path to the checkpoint you wish to validate. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.

  • verbose (bool) – If True, prints the validation results.

  • datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the val_dataloader hook.

For more information about multiple dataloaders, see this section.

Return type

List[Dict[str, float]]

Returns

List of dictionaries with metrics logged during the validation phase, e.g., in model- or callback hooks like validation_step() etc. The length of the list corresponds to the number of validation dataloaders used.

test

Trainer.test(model=None, dataloaders=None, ckpt_path=None, verbose=True, datamodule=None)[source]

Perform one evaluation epoch over the test set. It’s separated from fit to make sure you never run on your test set until you want to.

Parameters
  • model (Optional[LightningModule]) – The model to test.

  • dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying test samples. Alternatively, a LightningDataModule that defines the test_dataloader hook.

  • ckpt_path (Optional[str]) – Either "best", "last", "hpc" or path to the checkpoint you wish to test. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.

  • verbose (bool) – If True, prints the test results.

  • datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the test_dataloader hook.

For more information about multiple dataloaders, see this section.

Return type

List[Dict[str, float]]

Returns

List of dictionaries with metrics logged during the test phase, e.g., in model- or callback hooks like test_step() etc. The length of the list corresponds to the number of test dataloaders used.

predict

Trainer.predict(model=None, dataloaders=None, datamodule=None, return_predictions=None, ckpt_path=None)[source]

Run inference on your data. This will call the model forward function to compute predictions. Useful to perform distributed and batched predictions. Logging is disabled in the predict hooks.

Parameters
  • model (Optional[LightningModule]) – The model to predict with.

  • dataloaders (Union[Any, LightningDataModule, None]) – An iterable or collection of iterables specifying predict samples. Alternatively, a LightningDataModule that defines the predict_dataloader hook.

  • datamodule (Optional[LightningDataModule]) – A LightningDataModule that defines the predict_dataloader hook.

  • return_predictions (Optional[bool]) – Whether to return predictions. True by default except when an accelerator that spawns processes is used (not supported).

  • ckpt_path (Optional[str]) – Either "best", "last", "hpc" or path to the checkpoint you wish to predict. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.

For more information about multiple dataloaders, see this section.

Return type

Union[List[Any], List[List[Any]], None]

Returns

Returns a list of dictionaries, one for each provided dataloader containing their respective predictions.

See Lightning inference section for more.

Properties

callback_metrics

The metrics available to callbacks. These are automatically set when you log via self.log.

def training_step(self, batch, batch_idx):
    self.log("a_val", 2)


callback_metrics = trainer.callback_metrics
assert callback_metrics["a_val"] == 2

current_epoch

The number of epochs run.

if trainer.current_epoch >= 10:
    ...

datamodule

The current datamodule, which is used by the trainer.

used_datamodule = trainer.datamodule

is_last_batch

Whether the trainer is executing the last batch in the current epoch.

if trainer.is_last_batch:
    ...

global_step

The number of optimizer steps taken (does not reset each epoch). This includes multiple optimizers (if enabled).

if trainer.global_step >= 100:
    ...

logger

The current logger being used. Here’s an example using TensorBoard:

logger = trainer.logger
tensorboard = logger.experiment

loggers

The list of loggers currently being used by the Trainer.

# List of Logger objects
loggers = trainer.loggers
for logger in loggers:
    logger.log_metrics({"foo": 1.0})

logged_metrics

The metrics sent to the logger (visualizer).

def training_step(self, batch, batch_idx):
    self.log("a_val", 2, logger=True)


logged_metrics = trainer.logged_metrics
assert logged_metrics["a_val"] == 2

log_dir

The directory for the current experiment. Use this to save images to, etc.

def training_step(self, batch, batch_idx):
    img = ...
    save_img(img, self.trainer.log_dir)

is_global_zero

Whether this process is the global zero in multi-node training.

def training_step(self, batch, batch_idx):
    if self.trainer.is_global_zero:
        print("in node 0, accelerator 0")

progress_bar_metrics

The metrics sent to the progress bar.

def training_step(self, batch, batch_idx):
    self.log("a_val", 2, prog_bar=True)


progress_bar_metrics = trainer.progress_bar_metrics
assert progress_bar_metrics["a_val"] == 2

predict_dataloaders

The current predict dataloaders of the trainer. Note that this property returns a list of predict dataloaders.

used_predict_dataloaders = trainer.predict_dataloaders

estimated_stepping_batches

Check out estimated_stepping_batches().

state

The current state of the Trainer, including the current function that is running, the stage of execution within that function, and the status of the Trainer.

# fn in ("fit", "validate", "test", "predict")
trainer.state.fn
# status in ("initializing", "running", "finished", "interrupted")
trainer.state.status
# stage in ("train", "sanity_check", "validate", "test", "predict")
trainer.state.stage

should_stop

If you want to terminate training during .fit, you can set trainer.should_stop=True to terminate it as soon as possible. Note that it will respect the arguments min_steps and min_epochs when checking whether to stop. If these arguments are set and the current_epoch or global_step don’t meet the minimum conditions, training will continue until both conditions are met. If either argument is not set, it won’t be considered in the final decision.

# setting `trainer.should_stop` at any point of training will terminate it
class LitModel(LightningModule):
    def training_step(self, *args, **kwargs):
        self.trainer.should_stop = True


trainer = Trainer()
model = LitModel()
trainer.fit(model)

# setting `trainer.should_stop` will stop training only after at least 5 epochs have run
class LitModel(LightningModule):
    def training_step(self, *args, **kwargs):
        if self.current_epoch == 2:
            self.trainer.should_stop = True


trainer = Trainer(min_epochs=5, max_epochs=100)
model = LitModel()
trainer.fit(model)

# setting `trainer.should_stop` will stop training only after at least 5 steps have run
class LitModel(LightningModule):
    def training_step(self, *args, **kwargs):
        if self.global_step == 2:
            self.trainer.should_stop = True


trainer = Trainer(min_steps=5, max_epochs=100)
model = LitModel()
trainer.fit(model)

# `trainer.should_stop` will stop training only once both min_steps and min_epochs are satisfied
class LitModel(LightningModule):
    def training_step(self, *args, **kwargs):
        if self.global_step == 7:
            self.trainer.should_stop = True


trainer = Trainer(min_steps=5, min_epochs=5, max_epochs=100)
model = LitModel()
trainer.fit(model)

train_dataloader

The current train dataloader of the trainer.

used_train_dataloader = trainer.train_dataloader

test_dataloaders

The current test dataloaders of the trainer. Note that this property returns a list of test dataloaders.

used_test_dataloaders = trainer.test_dataloaders

val_dataloaders

The current val dataloaders of the trainer. Note that this property returns a list of val dataloaders.

used_val_dataloaders = trainer.val_dataloaders