PredictionEpochLoop

class pytorch_lightning.loops.epoch.PredictionEpochLoop[source]

Bases: pytorch_lightning.loops.base.Loop

Loop that performs prediction over one dataloader at a time; the parent prediction loop runs it sequentially for each prediction dataloader.
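For orientation, a minimal sketch of where this loop sits, assuming the Lightning 1.x loop API in which Trainer.predict_loop is the dataloader-level prediction loop that owns this epoch-level loop:

    from pytorch_lightning import Trainer

    trainer = Trainer()
    # The epoch-level prediction loop is driven internally by trainer.predict();
    # here it is only inspected, not run.
    epoch_loop = trainer.predict_loop.epoch_loop
    print(type(epoch_loop).__name__)  # PredictionEpochLoop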

advance(dataloader_iter, dataloader_idx, dl_max_batches, num_dataloaders, return_predictions=False)[source]

Runs one prediction step.

Parameters
  • dataloader_iter (Iterator) – the iterator over the current dataloader

  • dataloader_idx (int) – the index of the current dataloader

  • dl_max_batches (int) – the maximum number of batches the current loader can produce

  • num_dataloaders (int) – the total number of dataloaders

  • return_predictions (bool) – whether to return the obtained predictions

Return type

None
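The hooks of this loop are not called directly by users; the base Loop.run drives them. A simplified, illustrative sketch for one prediction dataloader (argument names follow the signatures documented here; this is not the library's exact implementation):

    def run_prediction_epoch(loop, dataloader_iter, dataloader_idx, dl_max_batches, num_dataloaders):
        # Illustrative driver only: reset state, run the start hook, step until
        # the loop reports it is done, then collect the results.
        loop.reset()
        loop.on_run_start(dataloader_iter, dataloader_idx, dl_max_batches, num_dataloaders)
        while not loop.done:
            loop.advance(dataloader_iter, dataloader_idx, dl_max_batches, num_dataloaders)
        return loop.on_run_end()  # (predictions, batch_indices)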

connect(**kwargs)[source]

Optionally connect one or multiple loops to this one.

Linked loops should form a tree.

Return type

None
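In practice, connect is typically called on a parent loop to swap in a customized child. A hedged sketch, assuming the dataloader-level PredictionLoop accepts an epoch_loop keyword on its own connect, mirroring the documented fit loop customization:

    from pytorch_lightning import Trainer
    from pytorch_lightning.loops.epoch import PredictionEpochLoop

    class MyPredictionEpochLoop(PredictionEpochLoop):
        # Hypothetical subclass; override hooks such as advance() as needed.
        pass

    trainer = Trainer()
    trainer.predict_loop.connect(epoch_loop=MyPredictionEpochLoop())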

on_run_end()[source]

Returns the predictions and the corresponding batch indices.

Return type

Tuple[List[Any], List[List[int]]]
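Based on this return type, a minimal sketch of consuming the result once the loop has run (epoch_loop stands for an already-run PredictionEpochLoop):

    # predictions holds one entry per processed batch (the predict_step outputs);
    # batch_indices holds, for each batch, the indices of the samples it contained.
    predictions, batch_indices = epoch_loop.on_run_end()
    for batch_preds, sample_indices in zip(predictions, batch_indices):
        print(len(sample_indices), type(batch_preds))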

on_run_start(dataloader_iter, dataloader_idx, dl_max_batches, num_dataloaders, return_predictions=False)[source]

Prepares the loop's internal state.

Parameters
  • dataloader_iter (Iterator) – the iterator over the current dataloader

  • dataloader_idx (int) – the index of the current dataloader

  • dl_max_batches (int) – the maximum number of batches the current loader can produce

  • num_dataloaders (int) – the total number of dataloaders

  • return_predictions (bool) – whether to return the obtained predictions

Return type

None
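The return_predictions flag received here originates from the public API. A self-contained usage sketch (TinyModel and the dataloader below are illustrative, not part of the library):

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from pytorch_lightning import LightningModule, Trainer

    class TinyModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(4, 2)

        def predict_step(self, batch, batch_idx, dataloader_idx=0):
            (x,) = batch
            return self.layer(x)

    dataloader = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=4)
    trainer = Trainer(logger=False)
    # return_predictions=True asks the prediction loops to keep and return the outputs.
    preds = trainer.predict(TinyModel(), dataloaders=dataloader, return_predictions=True)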

reset()[source]

Resets the loop's internal state.

Return type

None

property done: bool

Whether prediction for the current dataloader should end, i.e. the iteration count has reached the total number of available batches.

Return type

bool
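Illustrative only: the documented criterion roughly corresponds to a check like the following; the class and attribute names below are hypothetical stand-ins, not the loop's internals:

    class DoneCriterionSketch:
        # Hypothetical stand-in that mimics the documented done criterion.
        def __init__(self, dl_max_batches: int) -> None:
            self.dl_max_batches = dl_max_batches  # max batches for the current dataloader
            self.batches_processed = 0            # incremented once per prediction step

        @property
        def done(self) -> bool:
            return self.batches_processed >= self.dl_max_batches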

property should_store_predictions: bool

Whether the predictions should be stored for later usage (e.g. aggregation or returning them to the user).

Return type

bool
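A hedged sketch of the kind of guard this flag enables; epoch_loop, step_output, and collected are illustrative names, not library internals:

    collected = []
    if epoch_loop.should_store_predictions:
        # Keep the step output around so it can be aggregated or returned later.
        collected.append(step_output)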