PyTorchProfiler

class pytorch_lightning.profiler.PyTorchProfiler(dirpath=None, filename=None, group_by_input_shapes=False, emit_nvtx=False, export_to_chrome=True, row_limit=20, sort_by_key=None, record_functions=None, record_module_names=True, profiled_functions=None, output_filename=None, **profiler_kwargs)[source]

Bases: pytorch_lightning.profiler.base.BaseProfiler

This profiler uses PyTorch’s Autograd Profiler and lets you inspect the cost of different operators inside your model, on both the CPU and the GPU.
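The class implements the start/stop/summary contract of its BaseProfiler base class. A minimal pure-Python sketch of that contract (illustrative only; the real class delegates to PyTorch's profiling machinery and is not implemented like this):

```python
import time
from collections import defaultdict

class TimingProfilerSketch:
    """Illustrative stand-in for the BaseProfiler start/stop/summary contract."""

    def __init__(self):
        self._starts = {}
        self._totals = defaultdict(float)

    def start(self, action_name):
        # Defines how to start recording an action (e.g. "training_step").
        self._starts[action_name] = time.perf_counter()

    def stop(self, action_name):
        # Defines how to record the duration once an action is complete.
        self._totals[action_name] += time.perf_counter() - self._starts.pop(action_name)

    def summary(self):
        # Create a profiler summary in text format.
        return "\n".join(
            f"{name}: {total:.6f}s" for name, total in sorted(self._totals.items())
        )

profiler = TimingProfilerSketch()
profiler.start("training_step")
time.sleep(0.01)
profiler.stop("training_step")
print(profiler.summary())
```

The real profiler is driven the same way: the Trainer calls start/stop around each profiled action, and summary() renders the collected results as text.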

Parameters
  • dirpath (Union[str, Path, None]) – Directory path for the filename. If dirpath is None but filename is present, the trainer.log_dir (from TensorBoardLogger) will be used.

  • filename (Optional[str]) – If present, filename where the profiler results will be saved instead of printing to stdout. The .txt extension will be used automatically.

  • group_by_input_shapes (bool) – Include operator input shapes and group calls by shape.

  • emit_nvtx (bool) –

    Context manager that makes every autograd operation emit an NVTX range. Run:

    nvprof --profile-from-start off -o trace_name.prof -- <regular command here>
    

    To visualize, you can use either:

    nvvp trace_name.prof
    torch.autograd.profiler.load_nvprof(path)
    

  • export_to_chrome (bool) – Whether to export the sequence of profiled operators for Chrome tracing. This generates a .json file that can be loaded in Chrome.

  • row_limit (int) – Limit the number of rows in a table; -1 is a special value that removes the limit completely.

  • sort_by_key (Optional[str]) – Attribute used to sort entries. By default they are printed in the same order as they were registered. Valid keys include: cpu_time, cuda_time, cpu_time_total, cuda_time_total, cpu_memory_usage, cuda_memory_usage, self_cpu_memory_usage, self_cuda_memory_usage, count.

  • record_functions (Optional[Set[str]]) – Set of profiled functions for which a recording context manager will be created; any function not in the set is passed through without profiling.

  • record_module_names (bool) – Whether to add module names while recording autograd operations.

  • profiler_kwargs (Any) – Keyword arguments for the PyTorch profiler. These depend on your PyTorch version.

Raises

MisconfigurationException –
  • If arg sort_by_key is not present in AVAILABLE_SORT_KEYS.
  • If arg schedule is not a Callable.
  • If arg schedule does not return a torch.profiler.ProfilerAction.
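The sort_by_key check can be sketched as a membership test against the valid keys listed in the Parameters section above (a dependency-free sketch: the real class raises MisconfigurationException, while ValueError stands in for it here):

```python
# Valid sort keys, taken from the sort_by_key parameter description above.
AVAILABLE_SORT_KEYS = {
    "cpu_time", "cuda_time", "cpu_time_total", "cuda_time_total",
    "cpu_memory_usage", "cuda_memory_usage",
    "self_cpu_memory_usage", "self_cuda_memory_usage", "count",
}

def validate_sort_by_key(sort_by_key):
    # Sketch of the documented check; the real profiler raises
    # MisconfigurationException instead of ValueError.
    if sort_by_key is not None and sort_by_key not in AVAILABLE_SORT_KEYS:
        raise ValueError(
            f"Invalid sort_by_key: {sort_by_key!r}. "
            f"Should be one of {sorted(AVAILABLE_SORT_KEYS)}."
        )
    return sort_by_key

validate_sort_by_key("cpu_time_total")  # accepted
```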

start(action_name)[source]

Defines how to start recording an action.

Return type

None

stop(action_name)[source]

Defines how to record the duration once an action is complete.

Return type

None

summary()[source]

Create profiler summary in text format.

Return type

str
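How row_limit and sort_by_key shape the text summary can be sketched as follows (illustrative only; the real summary() formats the event table produced by PyTorch's profiler, and the row dictionaries here are hypothetical):

```python
def format_summary(rows, sort_by_key=None, row_limit=20):
    # rows: list of dicts, e.g. {"name": "aten::mm", "cpu_time_total": 1.2}.
    if sort_by_key is not None:
        # Sort entries by the chosen attribute, largest first.
        rows = sorted(rows, key=lambda r: r[sort_by_key], reverse=True)
    if row_limit != -1:  # -1 is a special value that removes the limit completely
        rows = rows[:row_limit]
    return "\n".join(f"{r['name']}\t{r['cpu_time_total']:.3f}" for r in rows)

rows = [
    {"name": "aten::mm", "cpu_time_total": 3.0},
    {"name": "aten::add", "cpu_time_total": 1.0},
    {"name": "aten::relu", "cpu_time_total": 2.0},
]
print(format_summary(rows, sort_by_key="cpu_time_total", row_limit=2))
```

With sort_by_key=None, entries stay in registration order, matching the documented default behavior.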

teardown(stage=None)[source]

Execute arbitrary post-profiling tear-down steps.

Closes the currently open file and stream.

Return type

None