pytorch_lightning.lite.LightningLite

class pytorch_lightning.lite.LightningLite(accelerator=None, strategy=None, devices=None, num_nodes=1, precision=32, plugins=None, gpus=None, tpu_cores=None)[source]

Bases: lightning_lite.lite.LightningLite, abc.ABC

Lite accelerates your PyTorch training or inference code with minimal changes required.

  • Automatic placement of models and data onto the device.

  • Automatic support for mixed and double precision (smaller memory footprint).

  • Seamless switching between hardware (CPU, GPU, TPU) and distributed training strategies (data-parallel training, sharded training, etc.).

  • Automated spawning of processes, no launch utilities required.

  • Multi-node support.

Parameters
  • accelerator (Union[str, Accelerator, None]) – The hardware to run on. Possible choices are: "cpu", "cuda", "mps", "gpu", "tpu", "auto".

  • strategy (Union[str, Strategy, None]) – Strategy for how to run across multiple devices. Possible choices are: "dp", "ddp", "ddp_spawn", "deepspeed", "ddp_sharded".

  • devices (Union[List[int], str, int, None]) – Number of devices to train on (int), which GPUs to train on (list or str), or "auto". The value applies per node.

  • num_nodes (int) – Number of GPU nodes for distributed training.

  • precision (Union[int, str]) – Double precision (64), full precision (32), half precision (16), or bfloat16 precision ("bf16").

  • plugins (Union[PrecisionPlugin, ClusterEnvironment, CheckpointIO, str, List[Union[PrecisionPlugin, ClusterEnvironment, CheckpointIO, str]], None]) – One or several custom plugins.

  • gpus (Union[List[int], str, int, None]) –

    Provides the same function as the devices argument but implies accelerator="gpu".

    Deprecated since version v1.8.0: gpus has been deprecated in v1.8.0 and will be removed in v1.10.0. Please use accelerator='gpu' and devices=x instead.

  • tpu_cores (Union[List[int], str, int, None]) –

    Provides the same function as the devices argument but implies accelerator="tpu".

    Deprecated since version v1.8.0: tpu_cores has been deprecated in v1.8.0 and will be removed in v1.10.0. Please use accelerator='tpu' and devices=x instead.

__init__(accelerator=None, strategy=None, devices=None, num_nodes=1, precision=32, plugins=None, gpus=None, tpu_cores=None)[source]

Methods

__init__([accelerator, strategy, devices, ...])

all_gather(data[, group, sync_grads])

Gather tensors or collections of tensors from multiple processes.

autocast()

A context manager to automatically convert operations for the chosen precision.

backward(tensor, *args[, model])

Replaces loss.backward() in your training loop.

barrier([name])

Wait for all processes to enter this call.

broadcast(obj[, src])

Broadcast an object from the process with rank src to all other processes. Return type: TBroadcast.

load(filepath)

Load a checkpoint from a file.

print(*args, **kwargs)

Print something only on the first process.

run(*args, **kwargs)

All the code inside this run method gets accelerated by Lite.

save(content, filepath)

Save checkpoint contents to a file.

seed_everything([seed, workers])

Helper function to seed everything without explicitly importing Lightning.

setup(model, *optimizers[, move_to_device])

Set up a model and its optimizers for accelerated training.

setup_dataloaders(*dataloaders[, ...])

Set up one or multiple dataloaders for accelerated training.

to_device(obj)

Move a torch.nn.Module or a collection of tensors to the current device, if it is not already on that device.

Attributes

device

The current device this process runs on.

global_rank

The global index of the current process across all devices and nodes.

is_global_zero

Whether this rank is rank zero.

local_rank

The index of the current process among the processes running on the local node.

node_rank

The index of the current node.

world_size

The total number of processes running across all devices and nodes.
