TPUAccelerator

class pytorch_lightning.accelerators.TPUAccelerator(precision_plugin, training_type_plugin)[source]

Bases: pytorch_lightning.accelerators.accelerator.Accelerator

Accelerator for TPU devices.

Parameters
  • precision_plugin (PrecisionPlugin) – the plugin to handle precision-specific parts

  • training_type_plugin (TrainingTypePlugin) – the plugin to handle different training routines
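
A minimal construction sketch follows. It assumes the plugin classes exported by the 1.3-era releases documented here (PrecisionPlugin and TPUSpawnPlugin) and a Trainer that accepts an Accelerator instance; in everyday use the accelerator is built for you when tpu_cores is passed to the Trainer rather than instantiated by hand:

    from pytorch_lightning import Trainer
    from pytorch_lightning.accelerators import TPUAccelerator
    from pytorch_lightning.plugins import PrecisionPlugin, TPUSpawnPlugin

    # Assemble the accelerator from its two constructor arguments described above.
    accelerator = TPUAccelerator(
        precision_plugin=PrecisionPlugin(),      # handles precision-specific parts
        training_type_plugin=TPUSpawnPlugin(),   # spawn-based TPU training routine
    )

    # Hand the accelerator to a Trainer; the common shortcut is Trainer(tpu_cores=8).
    trainer = Trainer(accelerator=accelerator)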

setup(trainer, model)[source]

Raises

MisconfigurationException – If AMP is used with TPU, or if TPUs are not using a single TPU core or TPU spawn training.

Return type

None
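
As a sketch of the first misconfiguration listed above, pairing a CUDA AMP precision plugin with a TPU accelerator is the case setup() guards against; NativeMixedPrecisionPlugin and TPUSpawnPlugin are assumed plugin names from the same release line:

    from pytorch_lightning.accelerators import TPUAccelerator
    from pytorch_lightning.plugins import NativeMixedPrecisionPlugin, TPUSpawnPlugin

    # AMP is not supported on TPUs, so a trainer using this accelerator
    # raises MisconfigurationException when setup() runs during fit().
    accelerator = TPUAccelerator(
        precision_plugin=NativeMixedPrecisionPlugin(),
        training_type_plugin=TPUSpawnPlugin(),
    )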

teardown()[source]

This method is called to tear down the training process. It is the right place to release memory and free other resources.

By default we add a barrier here to synchronize processes before returning control back to the caller.

Return type

None
