pytorch_lightning.core.memory module

Generates a summary of a model’s layers and dimensionality

class pytorch_lightning.core.memory.ModelSummary(model, mode='full')[source]

Bases: object

Generates summaries of model layers and dimensions.
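
Example

A minimal usage sketch. LitModel stands in for a user-defined LightningModule, and rendering the finished table via print() is an assumption rather than a documented guarantee.

from pytorch_lightning.core.memory import ModelSummary

model = LitModel()                          # hypothetical LightningModule
summary = ModelSummary(model, mode='full')  # 'full' is the default mode per the signature above
summary.summarize()                         # collect layer names, sizes and parameter counts
print(summary)                              # assumption: str() renders the summary table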

get_layer_names()[source]

Collect the names of the model's layers.

Return type

None

get_parameter_nums()[source]

Get number of parameters in each layer.

Return type

None

get_parameter_sizes()[source]

Get sizes of all parameters in model.

Return type

None
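
The per-layer bookkeeping these two methods perform can be illustrated with plain PyTorch. This is a generic sketch of the computation only; the actual methods return None and store their results on the ModelSummary instance.

import torch.nn as nn

def parameter_nums_and_sizes(model: nn.Module):
    # For each immediate child module, count its parameters and record their shapes.
    nums, sizes = [], []
    for module in model.children():
        params = list(module.parameters())
        nums.append(sum(p.numel() for p in params))      # parameter count of this layer
        sizes.append([list(p.size()) for p in params])   # shape of every parameter tensor
    return nums, sizes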

get_variable_sizes()[source]

Run sample input through each layer to get output sizes.

Return type

None
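
One generic way to obtain per-layer output sizes is to register forward hooks and run a single sample batch through the model. The sketch below illustrates that technique; it is not necessarily how this method is implemented internally.

import torch
import torch.nn as nn

def collect_output_sizes(model: nn.Module, sample_input: torch.Tensor):
    # Register a hook on every immediate child, run one forward pass,
    # record each tensor output's shape, then remove the hooks again.
    sizes, hooks = {}, []

    def make_hook(layer_name):
        def hook(_module, _inputs, output):
            if isinstance(output, torch.Tensor):
                sizes[layer_name] = list(output.size())
        return hook

    for layer_name, module in model.named_children():
        hooks.append(module.register_forward_hook(make_hook(layer_name)))
    with torch.no_grad():
        model(sample_input)
    for handle in hooks:
        handle.remove()
    return sizes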

make_summary()[source]

Makes a summary listing with:

Layer Name, Layer Type, Input Size, Output Size, Number of Parameters

Return type

None

named_modules()[source]
Return type

List[Tuple[str, Module]]

summarize()[source]
Return type

None

pytorch_lightning.core.memory._format_summary_table(*cols)[source]

Takes in a number of arrays, each specifying a column of the summary table, and combines them into one nicely formatted string that defines the whole table.

Return type

str
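
A simplified illustration of the column-joining idea. The input format assumed here, (header, values) pairs, and the exact spacing are assumptions; the real helper may differ in both.

def format_table(*cols):
    # Each col is a (header, [row values ...]) pair; all value lists have equal length.
    # Pad every cell to its column width and join the rows into one string.
    n_rows = len(cols[0][1])
    widths = [max(len(str(x)) for x in [header] + list(values)) for header, values in cols]
    lines = [' | '.join(header.ljust(width) for (header, _), width in zip(cols, widths))]
    lines.append('-' * len(lines[0]))
    for row in range(n_rows):
        lines.append(' | '.join(str(values[row]).ljust(width)
                                for (_, values), width in zip(cols, widths)))
    return '\n'.join(lines)

For example, format_table(('Name', ['conv1', 'fc']), ('Params', ['1 K', '2 M'])) yields a two-column, two-row table with aligned cells.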

pytorch_lightning.core.memory.count_mem_items()[source]
Return type

Tuple[int, int]

pytorch_lightning.core.memory.get_gpu_memory_map()[source]

Get the current GPU memory usage.

Return type

Dict[str, int]

Returns

A dictionary in which the keys are device ids as integers and values are memory usage as integers in MB.
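
One common way to obtain these numbers is to query nvidia-smi. The sketch below takes that approach; the exact key naming and query mechanism used by the library may differ.

import subprocess

def gpu_memory_map():
    # Ask nvidia-smi for the used memory of every visible GPU (in MB, one line per device).
    result = subprocess.run(
        ['nvidia-smi', '--query-gpu=memory.used', '--format=csv,nounits,noheader'],
        capture_output=True, text=True, check=True,
    )
    used_mb = [int(line) for line in result.stdout.strip().splitlines()]
    return {index: mem for index, mem in enumerate(used_mb)}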

pytorch_lightning.core.memory.get_human_readable_count(number)[source]

Abbreviates an integer number with K, M, B, T for thousands, millions, billions and trillions, respectively.

Examples

>>> get_human_readable_count(123)
'123  '
>>> get_human_readable_count(1234)  # (one thousand)
'1 K'
>>> get_human_readable_count(2e6)   # (two million)
'2 M'
>>> get_human_readable_count(3e9)   # (three billion)
'3 B'
>>> get_human_readable_count(4e12)  # (four trillion)
'4 T'
>>> get_human_readable_count(5e15)  # (more than trillion)
'5,000 T'
Parameters

number (int) – a positive integer

Return type

str

Returns

A string formatted according to the pattern described above.
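
The abbreviation logic can be reproduced roughly as follows. This sketch matches the doctest above in spirit; the real function's exact padding and rounding may differ.

def human_readable_count(number: int) -> str:
    # Repeatedly divide by 1000 and pick the matching suffix; values beyond 'T'
    # stay in trillions, which yields strings such as '5,000 T'.
    labels = ['', 'K', 'M', 'B', 'T']
    value = float(number)
    index = 0
    while value >= 1000 and index < len(labels) - 1:
        value /= 1000
        index += 1
    if index == 0:
        return f'{int(value)}'
    return f'{value:,.0f} {labels[index]}'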

pytorch_lightning.core.memory.get_memory_profile(mode)[source]

Get a profile of the current memory usage.

Parameters

mode (str) –

There are two modes:

  • 'all' means return memory for all GPUs

  • 'min_max' means return the minimum and maximum memory usage across GPUs

Return type

Union[Dict[str, int], Dict[int, int]]

Returns

A dictionary in which the keys are device ids as integers and values are memory usage as integers in MB. If mode is ‘min_max’, the dictionary will also contain two additional keys:

  • 'min_gpu_mem': the minimum memory usage in MB

  • 'max_gpu_mem': the maximum memory usage in MB
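
A sketch of the documented behaviour, built on top of get_gpu_memory_map(). Whether the per-device entries are kept alongside the min/max keys in 'min_max' mode is taken from the description above and may differ from the actual implementation.

from pytorch_lightning.core.memory import get_gpu_memory_map

def memory_profile(mode: str):
    # 'all' returns one entry per device; 'min_max' additionally reports the
    # smallest and largest usage across devices.
    memory_map = dict(get_gpu_memory_map())
    if mode == 'min_max':
        usages = list(memory_map.values())
        memory_map['min_gpu_mem'] = min(usages)
        memory_map['max_gpu_mem'] = max(usages)
    return memory_map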

pytorch_lightning.core.memory.print_mem_stack()[source]
Return type

None