PrecisionPlugin

class pytorch_lightning.plugins.precision.PrecisionPlugin[source]

Bases: pytorch_lightning.core.hooks.CheckpointHooks

Base class for all plugins handling the precision-specific parts of the training.

The class attribute precision must be overridden in child classes. The default value reflects fp32 training.
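
Example – a minimal sketch of a custom subclass. The plugin keeps the default fp32 behaviour and only sets the precision attribute; passing it through the Trainer's plugins argument is an assumption about how custom precision plugins are registered in this release:

    import pytorch_lightning as pl
    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class MyPrecisionPlugin(PrecisionPlugin):
        # must be overridden in child classes; kept at 32 (fp32) since this
        # sketch does not change numerics
        precision = 32


    trainer = pl.Trainer(plugins=[MyPrecisionPlugin()])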

backward(model, closure_loss, optimizer, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

  • optimizer (Optional[Optimizer]) – current optimizer being used. None if using manual optimization

Return type

None
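
Example – a hedged sketch of overriding backward() to scale the loss before delegating to the default backpropagation; the 0.5 factor is purely illustrative:

    from typing import Optional

    import pytorch_lightning as pl
    from torch import Tensor
    from torch.optim import Optimizer
    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class ScaledBackwardPlugin(PrecisionPlugin):
        def backward(
            self,
            model: "pl.LightningModule",
            closure_loss: Tensor,
            optimizer: Optional[Optimizer],
            *args,
            **kwargs,
        ) -> None:
            # illustrative only: scale the loss, then let the base class backpropagate
            super().backward(model, closure_loss * 0.5, optimizer, *args, **kwargs)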

clip_grad_by_norm(optimizer, clip_val)[source]

Clip gradients by norm.

Return type

None

clip_grad_by_value(optimizer, clip_val)[source]

Clip gradients by value.

Return type

None

clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)[source]

Clips the gradients.

Return type

None
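
In normal use these clipping methods are not called directly; the Trainer arguments below end up driving clip_gradients(), which dispatches to clip_grad_by_norm() or clip_grad_by_value() depending on gradient_clip_algorithm (the exact call path is an implementation detail and may differ between releases):

    import pytorch_lightning as pl

    # clip gradient norms to 0.5; use gradient_clip_algorithm="value" for value clipping
    trainer = pl.Trainer(gradient_clip_val=0.5, gradient_clip_algorithm="norm")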

connect(model, optimizers, lr_schedulers)[source]

Connects this plugin to the accelerator and the training process.

Return type

Tuple[Module, List[Optimizer], List[Any]]
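
Example – a hedged sketch of overriding connect(): the hook receives the module, optimizers, and schedulers, and must return the (possibly wrapped) triple:

    from typing import Any, List, Tuple

    from torch.nn import Module
    from torch.optim import Optimizer
    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class LoggingConnectPlugin(PrecisionPlugin):
        def connect(
            self, model: Module, optimizers: List[Optimizer], lr_schedulers: List[Any]
        ) -> Tuple[Module, List[Optimizer], List[Any]]:
            # illustrative only: observe the model before handing everything back
            print(f"connecting precision plugin to {type(model).__name__}")
            return super().connect(model, optimizers, lr_schedulers)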

dispatch(trainer)[source]

Hook that runs when Strategy.dispatch() is called.

Return type

None

forward_context()[source]

A contextmanager for the model forward, training_step, evaluation step, and predict_step calls.

Return type

Generator[None, None, None]
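
Example – a hedged sketch of a custom forward_context() that wraps the forward/step calls in torch.autocast. Using autocast here is purely illustrative (the built-in mixed-precision plugins already provide such a context); it assumes a CUDA device and torch>=1.10:

    from contextlib import contextmanager
    from typing import Generator

    import torch
    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class AutocastForwardPlugin(PrecisionPlugin):
        @contextmanager
        def forward_context(self) -> Generator[None, None, None]:
            # run forward/training_step/evaluation/predict steps under fp16 autocast
            with torch.autocast(device_type="cuda", dtype=torch.float16):
                yield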

load_state_dict(state_dict)[source]

Called when loading a checkpoint. Implement to reload precision-plugin state from the given precision-plugin state_dict.

Parameters

state_dict (Dict[str, Any]) – the precision plugin state returned by state_dict.

Return type

None

main_params(optimizer)[source]

The main params of the model.

Returns the plain model parameters here. This may differ in other precision plugins.

Return type

Iterator[Parameter]
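
Roughly what the default behaviour amounts to (an assumption about the implementation, shown as a standalone sketch): yield the plain parameters held by the optimizer's param_groups. Mixed-precision plugins may instead return master fp32 copies:

    from typing import Iterator

    from torch.nn import Parameter
    from torch.optim import Optimizer


    def main_params_sketch(optimizer: Optimizer) -> Iterator[Parameter]:
        # iterate over every parameter registered with the optimizer
        for group in optimizer.param_groups:
            yield from group["params"]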

on_load_checkpoint(checkpoint)[source]

PrecisionPlugin.on_load_checkpoint was deprecated in v1.6 and will be removed in v1.8.

Use load_state_dict instead.

Return type

None

on_save_checkpoint(checkpoint)[source]

PrecisionPlugin.on_save_checkpoint was deprecated in v1.6 and will be removed in v1.8.

Use state_dict instead.

Return type

None

optimizer_step(model, optimizer, optimizer_idx, closure, **kwargs)[source]

Hook to run the optimizer step.

Return type

Any
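
Example – a hedged sketch of overriding optimizer_step() to add logging around the default behaviour (which is expected to run the closure and the optimizer step); only the documented signature is assumed:

    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class VerboseStepPlugin(PrecisionPlugin):
        def optimizer_step(self, model, optimizer, optimizer_idx, closure, **kwargs):
            # illustrative only: log which optimizer is stepping, then defer to the default
            print(f"stepping optimizer {optimizer_idx}")
            return super().optimizer_step(model, optimizer, optimizer_idx, closure, **kwargs)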

post_backward(model, closure_loss)[source]

Runs after the precision plugin executes backward.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

Return type

Tensor

pre_backward(model, closure_loss)[source]

Runs before the precision plugin executes backward.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

Return type

Tensor
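
Example – a hedged sketch of using the pre/post backward hooks together: both receive the closure loss and return a tensor, so a plugin can transform the loss on the way in and report the untransformed value on the way out. A real loss-scaling plugin would also unscale the gradients (e.g. around the optimizer step); the static scale below is hypothetical:

    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class LossScalingPlugin(PrecisionPlugin):
        scale = 128.0  # hypothetical static loss scale, for illustration only

        def pre_backward(self, model, closure_loss):
            # scale the loss before backward runs
            return super().pre_backward(model, closure_loss * self.scale)

        def post_backward(self, model, closure_loss):
            # return the unscaled loss after backward has run
            return super().post_backward(model, closure_loss) / self.scale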

predict_step_context()[source]

A contextmanager for the predict step.

Return type

Generator[None, None, None]

state_dict()[source]

Called when saving a checkpoint. Implement to generate the precision plugin's state_dict.

Return type

Dict[str, Any]

Returns

A dictionary containing precision plugin state.
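
Example – a hedged sketch of pairing state_dict() with load_state_dict() (documented above) so that plugin-internal state survives checkpointing; the loss_scale field is hypothetical:

    from typing import Any, Dict

    from pytorch_lightning.plugins.precision import PrecisionPlugin


    class StatefulPrecisionPlugin(PrecisionPlugin):
        def __init__(self) -> None:
            super().__init__()
            self.loss_scale = 128.0  # hypothetical plugin state

        def state_dict(self) -> Dict[str, Any]:
            return {"loss_scale": self.loss_scale}

        def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
            self.loss_scale = state_dict["loss_scale"]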

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None

test_step_context()[source]

A contextmanager for the test step.

Return type

Generator[None, None, None]

train_step_context()[source]

A contextmanager for the training step.

Return type

Generator[None, None, None]

val_step_context()[source]

A contextmanager for the validation step.

Return type

Generator[None, None, None]
