memory

Functions

garbage_collection_cuda

Runs garbage collection on Torch (CUDA) memory.

get_gpu_memory_map

Deprecated since version v1.5.

get_model_size_mb

Calculates the size of a Module in megabytes.

is_cuda_out_of_memory

Checks whether an exception indicates a CUDA out-of-memory error (rtype: bool).

is_cudnn_snafu

Checks whether an exception is a cuDNN RuntimeError that can signal an out-of-memory condition (rtype: bool).

is_oom_error

Checks whether an exception is any kind of out-of-memory error, on GPU or CPU (rtype: bool).

is_out_of_cpu_memory

Checks whether an exception indicates a CPU out-of-memory error (rtype: bool).

recursive_detach

Detaches all tensors in in_dict.

Utilities related to memory.

pytorch_lightning.utilities.memory.garbage_collection_cuda()[source]

Runs garbage collection on Torch (CUDA) memory.

Return type

None
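
A minimal sketch of a common recovery pattern that pairs garbage_collection_cuda() with the is_oom_error() predicate from this module; the helper try_forward and the recovery strategy are illustrative assumptions, not part of the API:

    from pytorch_lightning.utilities.memory import (
        garbage_collection_cuda,
        is_oom_error,
    )

    def try_forward(model, batch):
        # Illustrative helper: run a forward pass and, if the failure
        # was an out-of-memory error, reclaim cached CUDA blocks before
        # re-raising so the caller can retry (e.g. with a smaller batch).
        try:
            return model(batch)
        except RuntimeError as exc:
            if is_oom_error(exc):
                garbage_collection_cuda()
            raise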

pytorch_lightning.utilities.memory.get_gpu_memory_map()[source]

Deprecated since version v1.5: This function was deprecated in v1.5 in favor of pytorch_lightning.accelerators.cuda._get_nvidia_gpu_stats and will be removed in v1.7.

Gets the current GPU memory usage.

Return type

Dict[str, float]

Returns

A dictionary mapping device ids to memory usage in MB.

Raises

FileNotFoundError – If the nvidia-smi installation is not found.
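
A short usage sketch; this function is deprecated since v1.5 and requires nvidia-smi to be installed, so it is shown for reference only:

    from pytorch_lightning.utilities.memory import get_gpu_memory_map

    memory_map = get_gpu_memory_map()  # raises FileNotFoundError without nvidia-smi
    for device, used_mb in memory_map.items():
        print(f"{device}: {used_mb} MB")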

pytorch_lightning.utilities.memory.get_model_size_mb(model)[source]

Calculates the size of a Module in megabytes.

The computation includes everything in the state_dict(), i.e., by default the parameters and buffers.

Return type

float

Returns

The size in megabytes of the input module's state_dict().
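
A minimal usage sketch with an arbitrary example module:

    import torch.nn as nn

    from pytorch_lightning.utilities.memory import get_model_size_mb

    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
    print(f"model size: {get_model_size_mb(model):.2f} MB")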

pytorch_lightning.utilities.memory.recursive_detach(in_dict, to_cpu=False)[source]

Detaches all tensors in in_dict.

May operate recursively if some of the values in in_dict are dictionaries that contain instances of Tensor. Other types in in_dict are not affected by this utility function.

Parameters
  • in_dict (Any) – Dictionary with tensors to detach.

  • to_cpu (bool) – Whether to move detached tensors to the CPU.

Returns

Dictionary with detached tensors (out_dict).

Return type

Any
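
A minimal usage sketch; the dictionary contents are illustrative:

    import torch

    from pytorch_lightning.utilities.memory import recursive_detach

    outputs = {
        "loss": torch.randn(1, requires_grad=True) * 2,
        "metrics": {"logits": torch.randn(4, 10, requires_grad=True)},
        "step": 7,  # non-tensor values pass through unchanged
    }
    detached = recursive_detach(outputs, to_cpu=True)
    assert not detached["metrics"]["logits"].requires_grad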