
TPUAccelerator

class pytorch_lightning.accelerators.TPUAccelerator(precision_plugin, training_type_plugin)[source]

Bases: pytorch_lightning.accelerators.accelerator.Accelerator

Accelerator for TPU devices.

Parameters
  • precision_plugin (PrecisionPlugin) – the plugin to handle precision-specific parts

  • training_type_plugin (TrainingTypePlugin) – the plugin to handle different training routines
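
Accelerators are normally assembled by the Trainer rather than constructed by hand. The sketch below shows both routes; the plugin class names and import paths are assumptions that vary across Lightning versions, so check the version you have installed.

    from pytorch_lightning import Trainer
    from pytorch_lightning.accelerators import TPUAccelerator
    from pytorch_lightning.plugins import PrecisionPlugin, TPUSpawnPlugin

    # Manual construction mirroring the signature above (illustrative only).
    accelerator = TPUAccelerator(
        precision_plugin=PrecisionPlugin(),      # handles precision-specific parts
        training_type_plugin=TPUSpawnPlugin(),   # handles the TPU spawn training routine
    )

    # The usual route: let the Trainer build the accelerator for you.
    trainer = Trainer(tpu_cores=8)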

clip_gradients(optimizer, clip_val, gradient_clip_algorithm=<GradClipAlgorithmType.NORM: 'norm'>)[source]

Clips the gradients of all the optimizer's parameters to the given value.

Return type

None
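
In day-to-day use this hook is driven by the Trainer rather than called directly. A minimal sketch, assuming the gradient_clip_val and gradient_clip_algorithm Trainer flags are available in your Lightning version:

    from pytorch_lightning import Trainer

    # The Trainer forwards these values to the accelerator's clip_gradients()
    # on every optimizer step.
    trainer = Trainer(
        tpu_cores=8,
        gradient_clip_val=0.5,            # becomes clip_val
        gradient_clip_algorithm="norm",   # matches the GradClipAlgorithmType.NORM default
    )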

setup(trainer, model)[source]

Raises

MisconfigurationException – If AMP is used with TPU, or if the training type plugin is neither single TPU core nor TPU spawn training.

Return type

None
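
As an illustrative sketch of the constraints setup enforces (not the library's exact source), the check amounts to rejecting 16-bit AMP and any training type plugin other than single TPU core or TPU spawn:

    from pytorch_lightning.plugins import SingleTPUPlugin, TPUSpawnPlugin
    from pytorch_lightning.utilities.exceptions import MisconfigurationException

    # Hypothetical helper mirroring the documented checks; the name and exact
    # messages are assumptions.
    def validate_tpu_setup(trainer, training_type_plugin):
        if trainer.precision == 16:
            raise MisconfigurationException(
                "amp + tpu is not supported. Only bfloat16 is supported on TPUs."
            )
        if not isinstance(training_type_plugin, (SingleTPUPlugin, TPUSpawnPlugin)):
            raise MisconfigurationException(
                "TPUs only support single TPU core or TPU spawn training."
            )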

teardown()[source]

This method is called to tear down the training process. It is the right place to release memory and free other resources.

By default we add a barrier here to synchronize processes before returning control to the caller.

Return type

None
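
A subclass that needs extra cleanup can extend this hook. A minimal sketch using a hypothetical subclass; deferring to the base class preserves the synchronization barrier described above:

    from pytorch_lightning.accelerators import TPUAccelerator

    class MyTPUAccelerator(TPUAccelerator):
        def teardown(self) -> None:
            # Release custom buffers, file handles, or other resources here.
            ...
            # Defer to the base class, which synchronizes processes before
            # returning control to the caller.
            super().teardown()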