PrecisionPlugin

class pytorch_lightning.plugins.precision.PrecisionPlugin[source]

Bases: pytorch_lightning.core.hooks.CheckpointHooks

Base class for all plugins handling the precision-specific parts of the training.

The class attribute precision must be overwritten in child classes. The default value reflects fp32 training.
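
For illustration, a minimal sketch of a child class, assuming PyTorch >= 1.10 for torch.autocast; the class name and the chosen precision value are illustrative, not part of the library:

    import contextlib

    import torch
    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class BFloat16PrecisionPlugin(PrecisionPlugin):
        """Hypothetical plugin that runs forward passes under bfloat16 autocast."""

        # Child classes must overwrite this class attribute; "bf16" is illustrative.
        precision = "bf16"

        @contextlib.contextmanager
        def forward_context(self):
            # Wrap forward / training_step / evaluation steps / predict_step
            # in an autocast region (the device type is an assumption here).
            with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
                yield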

backward(model, closure_loss, optimizer, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

  • optimizer (Optional[Optimizer]) – current optimizer being used. None if using manual optimization

Return type

None
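
A minimal sketch of an override, not the library's exact implementation (a mixed-precision plugin would typically scale the loss before calling backward):

    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class PlainBackwardPlugin(PrecisionPlugin):
        """Hypothetical plugin whose backward simply calls Tensor.backward."""

        def backward(self, model, closure_loss, optimizer, *args, **kwargs):
            # Propagate gradients directly from the loss tensor; any extra
            # positional/keyword arguments are forwarded to Tensor.backward.
            closure_loss.backward(*args, **kwargs)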

clip_grad_by_norm(optimizer, clip_val)[source]

Clip gradients by norm.

Return type

None

clip_grad_by_value(optimizer, clip_val)[source]

Clip gradients by value.

Return type

None

clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)[source]

Clips the gradients.

Return type

None
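
clip_gradients() selects clip_grad_by_value() or clip_grad_by_norm() based on gradient_clip_algorithm. A rough sketch of an equivalent override (the enum import path is an assumption):

    from pytorch_lightning.plugins.precision import PrecisionPlugin
    from pytorch_lightning.utilities.enums import GradClipAlgorithmType


    class ExplicitClippingPlugin(PrecisionPlugin):
        """Hypothetical subclass spelling out the clipping dispatch."""

        def clip_gradients(self, optimizer, clip_val=0.0,
                           gradient_clip_algorithm=GradClipAlgorithmType.NORM):
            if clip_val <= 0:
                return  # nothing to clip
            if gradient_clip_algorithm == GradClipAlgorithmType.VALUE:
                self.clip_grad_by_value(optimizer, clip_val)
            elif gradient_clip_algorithm == GradClipAlgorithmType.NORM:
                self.clip_grad_by_norm(optimizer, clip_val)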

connect(model, optimizers, lr_schedulers)[source]

Connects this plugin to the accelerator and the training process.

Return type

Tuple[Module, List[Optimizer], List[Any]]
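
connect() is normally called by the Trainer during setup; user code usually supplies a custom plugin through the Trainer's plugins argument rather than calling it directly. For example:

    from pytorch_lightning import Trainer

    # BFloat16PrecisionPlugin is the hypothetical subclass sketched near the
    # top of this page; any PrecisionPlugin instance can be passed the same way.
    trainer = Trainer(plugins=[BFloat16PrecisionPlugin()])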

dispatch(trainer)[source]

Hook that runs when Accelerator.dispatch() is called.

Return type

None

forward_context()[source]

A context manager that wraps the model forward, training_step, evaluation step, and predict_step calls.

Return type

Generator[None, None, None]

main_params(optimizer)[source]

The main params of the model.

Returns the plain model parameters here; other precision plugins may return a different set (for example, full-precision master copies).

Return type

Iterator[Parameter]
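
For example, the iterator can be used to inspect whatever parameters the plugin reports as the main ones; the helper below is a hypothetical sketch:

    import torch


    def total_grad_norm(plugin, optimizer):
        # Collect the L2 norms of all available gradients reported by the plugin
        # and reduce them to a single total norm.
        norms = [p.grad.norm(2) for p in plugin.main_params(optimizer) if p.grad is not None]
        return torch.stack(norms).norm(2) if norms else torch.tensor(0.0)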

master_params(optimizer)[source]

The main params of the model.

Deprecated since version v1.5: This method will be removed in v1.6. Use main_params() instead.

Return type

Iterator[Parameter]

optimizer_step(model, optimizer, optimizer_idx, closure, **kwargs)[source]

Hook to run the optimizer step.

Return type

None
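
A rough, hypothetical sketch of an override; a native-AMP plugin would instead route the step through its gradient scaler:

    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class PassThroughStepPlugin(PrecisionPlugin):
        """Hypothetical plugin with a pass-through optimizer step."""

        def optimizer_step(self, model, optimizer, optimizer_idx, closure, **kwargs):
            # The closure runs the forward and backward passes; step afterwards.
            closure()
            optimizer.step(**kwargs)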

post_backward(model, closure_loss)[source]

Run after the precision plugin executes backward.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

Return type

Tensor

post_dispatch()[source]

Hook that runs after training/evaluation/prediction finishes.

Return type

None

pre_backward(model, closure_loss)[source]

Run before the precision plugin executes backward.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

Return type

Tensor
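
Because both pre_backward() and post_backward() return a Tensor, a subclass can transform the loss on its way into and out of backpropagation. A hypothetical static loss-scaling sketch; the scale factor and the assumption that the returned tensor is what the subsequent backward consumes are illustrative:

    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class StaticLossScalingPlugin(PrecisionPlugin):
        """Hypothetical plugin that scales the loss around backward."""

        loss_scale = 128.0  # purely illustrative constant

        def pre_backward(self, model, closure_loss):
            # Scale the loss before gradients are computed.
            return closure_loss * self.loss_scale

        def post_backward(self, model, closure_loss):
            # Undo the scaling so downstream code sees the unscaled loss.
            return closure_loss / self.loss_scale

A real loss-scaling plugin would also unscale the gradients before the optimizer step; the sketch above only illustrates the hook signatures.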

pre_dispatch()[source]

Hook that runs before training/evaluation/prediction starts.

Return type

None

predict_step_context()[source]

A contextmanager for the predict step.

Return type

Generator[None, None, None]

test_step_context()[source]

A contextmanager for the test step.

Return type

Generator[None, None, None]

train_step_context()[source]

A contextmanager for the training step.

Return type

Generator[None, None, None]

val_step_context()[source]

A contextmanager for the validation step.

Return type

Generator[None, None, None]
