PrecisionPlugin

class pytorch_lightning.plugins.precision.PrecisionPlugin[source]

Bases: lightning_lite.plugins.precision.precision.Precision, pytorch_lightning.core.hooks.CheckpointHooks

Base class for all plugins handling the precision-specific parts of the training.

The class attribute precision must be overridden in child classes. The default value reflects fp32 training.
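A minimal sketch of the override pattern described above, using a dependency-free stand-in class (the real base lives in lightning_lite.plugins.precision and is not imported here):

```python
# Simplified stand-in for the Precision base class. Child classes override
# the `precision` class attribute to advertise the numeric precision they
# implement; the base default corresponds to fp32 training.
class PrecisionSketch:
    precision = "32"  # default reflects fp32 training


class HalfPrecisionSketch(PrecisionSketch):
    precision = "16"  # child class overriding the attribute


print(PrecisionSketch.precision)      # 32
print(HalfPrecisionSketch.precision)  # 16
```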

backward(tensor, model, optimizer, optimizer_idx, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters
  • tensor (Tensor) – the loss value obtained from the closure

  • model (LightningModule) – the model to be optimized

  • optimizer (Optional[Steppable]) – current optimizer being used. None if using manual optimization

  • optimizer_idx (Optional[int]) – the index of the current optimizer. None if using manual optimization

  • *args – Positional arguments intended for the actual function that performs the backward, like backward().

  • **kwargs – Keyword arguments for the same purpose as *args.

Return type

None
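One plausible minimal shape of this hook, shown with hypothetical stand-in classes (`ToyTensor`, `ToyPrecisionPlugin`) so the snippet runs without torch installed; the real base class delegates through the model's hooks and may add precision-specific steps such as loss scaling:

```python
# Hedged sketch: backward() ultimately forwards *args/**kwargs to the loss
# tensor's own backward(). ToyTensor is a hypothetical stand-in for a torch
# Tensor that merely records that backpropagation was triggered.
class ToyTensor:
    def __init__(self):
        self.grad_computed = False

    def backward(self, *args, **kwargs):
        self.grad_computed = True


class ToyPrecisionPlugin:
    def backward(self, tensor, model, optimizer, optimizer_idx, *args, **kwargs):
        # A mixed-precision subclass might scale the loss before this call;
        # the plain sketch just triggers backpropagation on the tensor.
        tensor.backward(*args, **kwargs)


loss = ToyTensor()
ToyPrecisionPlugin().backward(loss, model=None, optimizer=None, optimizer_idx=None)
print(loss.grad_computed)  # True
```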

clip_grad_by_norm(optimizer, clip_val)[source]

Clip gradients by norm.

Return type

None

clip_grad_by_value(optimizer, clip_val)[source]

Clip gradients by value.

Return type

None

clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)[source]

Clips the gradients.

Return type

None
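The dispatch between the two clipping algorithms can be sketched as follows, with plain lists of floats standing in for parameter gradients (a simplified illustration, not the library's implementation):

```python
import math

# Clip each gradient element-wise into [-clip_val, clip_val].
def clip_by_value(grads, clip_val):
    return [max(-clip_val, min(clip_val, g)) for g in grads]

# Rescale all gradients so their total L2 norm is at most clip_val.
def clip_by_norm(grads, clip_val):
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > clip_val:
        scale = clip_val / total_norm
        return [g * scale for g in grads]
    return list(grads)

# Dispatch on the chosen algorithm; a clip value of 0 disables clipping,
# matching the documented default clip_val=0.0.
def clip_gradients(grads, clip_val=0.0, algorithm="norm"):
    if clip_val <= 0:
        return list(grads)
    if algorithm == "value":
        return clip_by_value(grads, clip_val)
    return clip_by_norm(grads, clip_val)


print([round(g, 6) for g in clip_gradients([3.0, 4.0], clip_val=1.0)])  # [0.6, 0.8]
print(clip_gradients([3.0, -4.0], clip_val=1.0, algorithm="value"))     # [1.0, -1.0]
```

Note that norm clipping preserves the gradient direction (here [3, 4] has norm 5, so every element is scaled by 1/5), while value clipping changes it.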

connect(model, optimizers, lr_schedulers)[source]

Connects this plugin to the accelerator and the training process.

Return type

Tuple[Module, List[Optimizer], List[Any]]

dispatch(trainer)[source]

Hook that runs when Strategy.dispatch() is called.

Return type

None

optimizer_step(optimizer, model, optimizer_idx, closure, **kwargs)[source]

Hook to run the optimizer step.

Return type

Any
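A hedged sketch of the closure-based step this hook performs, using hypothetical stand-ins so it runs without torch: the plugin receives a closure that computes the loss (forward plus backward) and hands it to the optimizer, which may invoke it one or more times (as LBFGS-style optimizers do):

```python
# ToyOptimizer is a hypothetical stand-in for a torch optimizer that
# accepts a closure in step(), calls it to obtain the loss, and returns it.
class ToyOptimizer:
    def step(self, closure):
        loss = closure()  # the closure computes (and returns) the loss
        return loss


class ToyPrecisionPlugin:
    def optimizer_step(self, optimizer, model, optimizer_idx, closure, **kwargs):
        # The base pattern delegates to optimizer.step(closure); a
        # mixed-precision subclass could unscale gradients around this call.
        return optimizer.step(closure, **kwargs)


result = ToyPrecisionPlugin().optimizer_step(
    ToyOptimizer(), model=None, optimizer_idx=0, closure=lambda: 0.25
)
print(result)  # 0.25
```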

post_backward(tensor, module)[source]

Runs after precision plugin executes backward.

Parameters
  • tensor (Tensor) – The tensor that will be used for backpropagation

  • module (LightningModule) – The module that was involved in producing the tensor and whose parameters need the gradients

Return type

Tensor

pre_backward(tensor, module)[source]

Runs before precision plugin executes backward.

Parameters
  • tensor (Tensor) – The tensor that will be used for backpropagation

  • module (LightningModule) – The module that was involved in producing the tensor and whose parameters need the gradients

Return type

Tensor

predict_step_context()[source]

A contextmanager for the predict step.

Return type

Generator[None, None, None]

test_step_context()[source]

A contextmanager for the test step.

Return type

Generator[None, None, None]

train_step_context()[source]

A contextmanager for the training step.

Return type

Generator[None, None, None]

val_step_context()[source]

A contextmanager for the validation step.

Return type

Generator[None, None, None]
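The Generator[None, None, None] return type of these four hooks corresponds to a generator-based context manager that the trainer enters around the matching step. A minimal sketch of the pattern (the base class yields with no extra behavior; a mixed-precision subclass would set up something like an autocast context around the yield):

```python
from contextlib import contextmanager

# Hedged illustration of a *_step_context() hook: a generator that yields
# exactly once, so @contextmanager turns it into a context manager of type
# Generator[None, None, None].
@contextmanager
def train_step_context():
    # set up precision-specific state here (no-op in the base class)
    yield
    # tear down precision-specific state here


with train_step_context():
    result = 1 + 1  # the training step body runs inside the context

print(result)  # 2
```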
