
GPU Stats Monitor

Monitors and logs GPU stats during training.

class pytorch_lightning.callbacks.gpu_stats_monitor.GPUStatsMonitor(memory_utilization=True, gpu_utilization=True, intra_step_time=False, inter_step_time=False, fan_speed=False, temperature=False)[source]

Bases: pytorch_lightning.callbacks.base.Callback

Automatically monitors and logs GPU stats during the training stage. GPUStatsMonitor is a callback; to use it, you need to assign a logger in the Trainer.

Parameters
  • memory_utilization (bool) – Set to True to monitor the used memory, the free memory, and the percentage of memory utilization at the start and end of each step. Default: True.

  • gpu_utilization (bool) – Set to True to monitor percentage of GPU utilization at the start and end of each step. Default: True.

  • intra_step_time (bool) – Set to True to monitor the duration of each step. Default: False.

  • inter_step_time (bool) – Set to True to monitor the time between the end of one step and the start of the next step. Default: False.

  • fan_speed (bool) – Set to True to monitor percentage of fan speed. Default: False.

  • temperature (bool) – Set to True to monitor the memory and GPU temperature in degrees Celsius. Default: False.

Example:

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import GPUStatsMonitor
>>> gpu_stats = GPUStatsMonitor() 
>>> trainer = Trainer(callbacks=[gpu_stats]) 

GPU stats are mainly based on the nvidia-smi --query-gpu command. The queried fields are described as follows:

  • fan.speed – The fan speed value is the percent of maximum speed that the device’s fan is currently intended to run at. It ranges from 0 to 100 %. Note: The reported speed is the intended fan speed. If the fan is physically blocked and unable to spin, this output will not match the actual fan speed. Many parts do not report fan speeds because they rely on cooling via fans in the surrounding enclosure.

  • memory.used – Total memory allocated by active contexts.

  • memory.free – Total free memory.

  • utilization.gpu – Percent of time over the past sample period during which one or more kernels was executing on the GPU. The sample period may be between 1 second and 1/6 second depending on the product.

  • utilization.memory – Percent of time over the past sample period during which global (device) memory was being read or written. The sample period may be between 1 second and 1/6 second depending on the product.

  • temperature.gpu – Core GPU temperature, in degrees C.

  • temperature.memory – HBM memory temperature, in degrees C.
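Internally, the callback shells out to nvidia-smi with fields like those listed above. A minimal sketch of how such a query could be assembled and its CSV output parsed (the helper names build_query and parse_output are hypothetical illustrations, not part of the PyTorch Lightning API):

```python
import subprocess


def build_query(fields, fmt="csv,nounits,noheader"):
    # Build an nvidia-smi command line that queries the given stat fields.
    # Field names match those documented above, e.g. "utilization.gpu".
    return ["nvidia-smi", f"--query-gpu={','.join(fields)}", f"--format={fmt}"]


def parse_output(output):
    # nvidia-smi with --format=csv,nounits,noheader prints one
    # comma-separated line per GPU; convert each value to a float.
    return [
        [float(value) for value in line.split(", ")]
        for line in output.strip().splitlines()
    ]


fields = ["memory.used", "memory.free", "utilization.gpu"]
cmd = build_query(fields)
# On a machine with an NVIDIA GPU one would actually run:
#   result = subprocess.run(cmd, capture_output=True, text=True, check=True)
#   stats = parse_output(result.stdout)
# Here we parse a sample output line instead, so no GPU is required:
sample = "1024, 15360, 37\n"
print(cmd)
print(parse_output(sample))
```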

on_train_batch_end(trainer, *args, **kwargs)[source]

Called when the train batch ends.

on_train_batch_start(trainer, *args, **kwargs)[source]

Called when the train batch begins.

on_train_epoch_start(*args, **kwargs)[source]

Called when the train epoch begins.

on_train_start(trainer, *args, **kwargs)[source]

Called when the train begins.
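The batch hooks above are where the optional timing stats come from: intra_step_time is measured between on_train_batch_start and on_train_batch_end, and inter_step_time between the end of one batch and the start of the next. A standalone sketch of that bookkeeping (StepTimer is a hypothetical illustration, not the callback's actual implementation):

```python
import time


class StepTimer:
    # Tracks the two timing stats the callback can log:
    # intra-step time (duration of one batch) and inter-step time
    # (gap between the end of one batch and the start of the next).

    def __init__(self):
        self._batch_start = None
        self._batch_end = None

    def on_train_batch_start(self):
        now = time.monotonic()
        # No inter-step time exists before the first batch has ended.
        inter = None if self._batch_end is None else now - self._batch_end
        self._batch_start = now
        return inter  # seconds between previous batch end and this start

    def on_train_batch_end(self):
        now = time.monotonic()
        intra = now - self._batch_start
        self._batch_end = now
        return intra  # seconds spent inside this batch


timer = StepTimer()
timer.on_train_batch_start()          # first batch: no inter-step time yet
time.sleep(0.01)                      # stand-in for the actual training step
intra = timer.on_train_batch_end()
inter = timer.on_train_batch_start()  # gap since the previous batch ended
print(intra >= 0.01, inter >= 0.0)
```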
