pytorch_lightning.core.memory module
class pytorch_lightning.core.memory.LayerSummary(module)[source]

Bases: object

Summary class for a single layer in a LightningModule. It collects the following information:

- Type of the layer (e.g. Linear, BatchNorm1d, ...)
- Input shape
- Output shape
- Number of parameters

The input and output shapes are only known after the example input array was passed through the model.

Example:

>>> model = torch.nn.Conv2d(3, 8, 3)
>>> summary = LayerSummary(model)
>>> summary.num_parameters
224
>>> summary.layer_type
'Conv2d'
>>> output = model(torch.rand(1, 3, 5, 5))
>>> summary.in_size
[1, 3, 5, 5]
>>> summary.out_size
[1, 8, 3, 3]
_register_hook()[source]

Registers a hook on the module that computes the input and output size(s) on the first forward pass. Once the hook is called, it removes itself from the module, meaning that recursive models will only record their input and output shapes once.

Return type: RemovableHandle

Returns: A handle for the installed hook.
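The self-removing hook pattern can be sketched in plain PyTorch. This is an illustrative sketch, not the library's implementation; the names attach_size_hook and record_sizes are hypothetical:

import torch
import torch.nn as nn

def attach_size_hook(module: nn.Module):
    def record_sizes(mod, inp, out):
        # `inp` is a tuple of positional inputs; record the first tensor's shape.
        mod._in_size = list(inp[0].shape)
        mod._out_size = list(out.shape)
        # Remove the hook after the first call, so recursive models
        # record their shapes only once.
        handle.remove()

    handle = module.register_forward_hook(record_sizes)
    return handle  # a torch.utils.hooks.RemovableHandle

layer = nn.Linear(4, 2)
handle = attach_size_hook(layer)
layer(torch.rand(3, 4))
print(layer._in_size, layer._out_size)  # [3, 4] [3, 2]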
class pytorch_lightning.core.memory.ModelSummary(model, mode='top')[source]

Bases: object

Generates a summary of all layers in a LightningModule.

Parameters:
    model – The model to summarize.
    mode – Either 'top' (summarize only the top-level modules, the default) or 'full' (summarize all layers and their submodules), as shown in the example below.

The string representation of this summary prints a table with columns containing the name, type and number of parameters for each layer.

The root module may also have an attribute example_input_array as shown in the example below. If present, the root module will be called with it as input to determine the intermediate input and output shapes of all layers. Tensors and nested lists and tuples of tensors are supported. All other types of inputs will be skipped and shown as ? in the summary table. The summary will also display ? for layers not used in the forward pass.

Example:
>>> import pytorch_lightning as pl
>>> class LitModel(pl.LightningModule):
...
...     def __init__(self):
...         super().__init__()
...         self.net = nn.Sequential(nn.Linear(256, 512), nn.BatchNorm1d(512))
...         self.example_input_array = torch.zeros(10, 256)  # optional
...
...     def forward(self, x):
...         return self.net(x)
...
>>> model = LitModel()
>>> ModelSummary(model, mode='top')
  | Name | Type       | Params | In sizes  | Out sizes
------------------------------------------------------------
0 | net  | Sequential | 132 K  | [10, 256] | [10, 512]
>>> ModelSummary(model, mode='full')
  | Name  | Type        | Params | In sizes  | Out sizes
--------------------------------------------------------------
0 | net   | Sequential  | 132 K  | [10, 256] | [10, 512]
1 | net.0 | Linear      | 131 K  | [10, 256] | [10, 512]
2 | net.1 | BatchNorm1d | 1 K    | [10, 512] | [10, 512]
_forward_example_input()[source]

Run the example input through each layer to get input and output sizes.

Return type: None
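Conceptually, this forward pass looks roughly like the sketch below: the model is put into eval mode so the pass has no training side effects, and gradients are disabled. A hedged sketch under those assumptions, not the exact source:

import torch

def forward_example_input(model):
    # Assumes the root module defines `example_input_array`.
    input_ = model.example_input_array
    was_training = model.training
    model.eval()  # avoid side effects, e.g. updating BatchNorm statistics
    with torch.no_grad():
        if isinstance(input_, (list, tuple)):
            model(*input_)  # nested inputs are passed as positional arguments
        else:
            model(input_)
    if was_training:
        model.train()  # restore the original mode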
pytorch_lightning.core.memory._format_summary_table(*cols)[source]

Takes in a number of arrays, each specifying a column in the summary table, and combines them into one nicely formatted string defining the summary table.

Return type: str
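A minimal sketch of this kind of column-wise formatting, assuming each column is given as a ('header', [row values...]) pair; the real function's column format may differ:

def format_columns(*cols):
    # Pad every cell to the widest entry in its column.
    widths = [max(len(str(cell)) for cell in (header, *rows)) for header, rows in cols]
    header_line = " | ".join(str(h).ljust(w) for (h, _), w in zip(cols, widths))
    lines = [header_line, "-" * len(header_line)]
    for i in range(len(cols[0][1])):
        lines.append(" | ".join(str(rows[i]).ljust(w) for (_, rows), w in zip(cols, widths)))
    return "\n".join(lines)

print(format_columns(("Name", ["net"]), ("Type", ["Sequential"]), ("Params", ["132 K"])))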
pytorch_lightning.core.memory.get_human_readable_count(number)[source]

Abbreviates an integer number with K, M, B, T for thousands, millions, billions and trillions, respectively.

Examples

>>> get_human_readable_count(123)
'123 '
>>> get_human_readable_count(1234)  # (one thousand)
'1 K'
>>> get_human_readable_count(2e6)  # (two million)
'2 M'
>>> get_human_readable_count(3e9)  # (three billion)
'3 B'
>>> get_human_readable_count(4e12)  # (four trillion)
'4 T'
>>> get_human_readable_count(5e15)  # (more than a trillion)
'5,000 T'
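The abbreviation logic can be sketched as follows. This is an illustrative reimplementation consistent with the examples above, not necessarily the library's exact code:

import math

def human_readable_count(number):
    labels = [' ', 'K', 'M', 'B', 'T']
    num_digits = int(math.floor(math.log10(number)) + 1) if number > 0 else 1
    # One label per group of three digits, capped at trillions.
    num_groups = min(int(math.ceil(num_digits / 3)), len(labels))
    shifted = number * 10 ** (-3 * (num_groups - 1))
    return f'{int(shifted):,d} {labels[num_groups - 1]}'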
pytorch_lightning.core.memory.get_memory_profile(mode)[source]

Get a profile of the current memory usage.

Parameters:
    mode – There are two modes:
        - 'all' means return memory for all gpus
        - 'min_max' means return memory for max and min
Return type: Union[Dict[str, int], Dict[int, int]]

Returns: A dictionary in which the keys are device ids as integers and values are memory usage as integers in MB. If mode is 'min_max', the dictionary will also contain two additional keys:

    - 'min_gpu_mem': the minimum memory usage in MB
    - 'max_gpu_mem': the maximum memory usage in MB
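A hedged sketch of the returned structure, reading per-device usage through torch.cuda; the library itself may obtain the numbers differently, e.g. by querying the driver:

import torch

def memory_profile(mode):
    # MB of memory allocated by this process, per visible CUDA device id.
    usage = {
        i: torch.cuda.memory_allocated(i) // 1024 ** 2
        for i in range(torch.cuda.device_count())
    }
    if mode == 'min_max' and usage:
        min_mem, max_mem = min(usage.values()), max(usage.values())
        usage['min_gpu_mem'] = min_mem
        usage['max_gpu_mem'] = max_mem
    return usage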