
pytorch_lightning.core.step_result module

class pytorch_lightning.core.step_result.EvalResult(early_stop_on=None, checkpoint_on=None, hiddens=None)[source]

Bases: pytorch_lightning.core.step_result.Result

Used in the val/test loop to auto-log to a logger or the progress bar without needing to define a _step_end or _epoch_end method.

Example:

def validation_step(self, batch, batch_idx):
    loss = ...
    result = EvalResult()
    result.log('val_loss', loss)
    return result

def test_step(self, batch, batch_idx):
    loss = ...
    result = EvalResult()
    result.log('test_loss', loss)
    return result
Parameters
  • early_stop_on (Tensor, optional) – metric to early stop on

  • checkpoint_on (Tensor, optional) – metric to checkpoint on

  • hiddens (Tensor, optional) – hiddens to pass along when using truncated back-propagation through time
get_callback_metrics()[source]
Return type

dict
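
A minimal sketch of what the returned dict can look like, assuming the callback metrics are the early_stop_on and checkpoint_on values passed to the constructor (the exact keys are an assumption, not something this page confirms):

import torch
from pytorch_lightning.core.step_result import EvalResult

loss = torch.tensor(0.25)
result = EvalResult(early_stop_on=loss, checkpoint_on=loss)

# the trainer reads these values to drive early stopping and checkpointing
metrics = result.get_callback_metrics()
# assumed shape: {'early_stop_on': tensor(0.2500), 'checkpoint_on': tensor(0.2500)}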

log(name, value, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a key, value

Example:

result.log('val_loss', loss)

# defaults used
result.log(
    name,
    value,
    on_step=False,
    on_epoch=True,
    logger=True,
    prog_bar=False,
    reduce_fx=torch.mean
)
Parameters
  • name – key name

  • value – value to log

  • prog_bar (bool) – if True, logs to the progress bar

  • logger (bool) – if True, logs to the logger

  • on_step (bool) – if True, logs the output of validation_step or test_step at each step

  • on_epoch (bool) – if True, logs the value aggregated over the epoch

  • reduce_fx (Callable) – reduction function; torch.mean by default

  • tbptt_reduce_fx (Callable) – function to reduce on truncated back-propagation through time

  • tbptt_pad_token (int) – token to use for padding

  • enable_graph (bool) – if True, will not auto-detach the graph

  • sync_dist (bool) – if True, reduces the metric across GPUs/TPUs

  • sync_dist_op (Union[Any, str]) – the reduction operation to use when syncing (e.g. 'mean')

  • sync_dist_group (Optional[Any]) – the DDP group to sync across
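
Putting several of these options together, a hedged sketch inside a validation_step (the metric names and the acc placeholder are illustrative):

def validation_step(self, batch, batch_idx):
    loss = ...
    acc = ...
    result = EvalResult(checkpoint_on=loss)
    # log the per-step value and the epoch mean (reduced with reduce_fx)
    result.log('val_loss', loss, on_step=True, on_epoch=True)
    # show in the progress bar and reduce across GPUs/TPUs before logging
    result.log('val_acc', acc, prog_bar=True, sync_dist=True)
    return result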

log_dict(dictionary, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a dictionary of values at once.

Example:

values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
result.log_dict(values)
Parameters
  • dictionary (dict) – key-value pairs (str, Tensor)

  • prog_bar (bool) – if True, logs to the progress bar

  • logger (bool) – if True, logs to the logger

  • on_step (bool) – if True, logs the output of validation_step or test_step at each step

  • on_epoch (bool) – if True, logs the value aggregated over the epoch

  • reduce_fx (Callable) – reduction function; torch.mean by default

  • tbptt_reduce_fx (Callable) – function to reduce on truncated back-propagation through time

  • tbptt_pad_token (int) – token to use for padding

  • enable_graph (bool) – if True, will not auto-detach the graph

  • sync_dist (bool) – if True, reduces the metric across GPUs/TPUs

  • sync_dist_op (Union[Any, str]) – the reduction operation to use when syncing (e.g. 'mean')

  • sync_dist_group (Optional[Any]) – the DDP group to sync across
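
A short sketch of the same call inside a validation_step; the keyword options of log() apply to every key in the dict at once (names are illustrative):

def validation_step(self, batch, batch_idx):
    loss = ...
    acc = ...
    result = EvalResult(checkpoint_on=loss)
    # one call logs every metric with the same options
    result.log_dict({'val_loss': loss, 'val_acc': acc}, prog_bar=True)
    return result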

write(name, values, filename='predictions.pt')[source]

Add a feature name and value pair to the collection of predictions that will be written to disk on validation_end or test_end. If running on multiple GPUs, you will get n_gpu separate prediction files, with the rank prepended to the filename.

Example:

result = pl.EvalResult()
result.write('ids', [0, 1, 2])
result.write('preds', ['cat', 'dog', 'dog'])
Parameters
  • name (str) – Feature name that becomes the column header of the predictions file.

  • values (Union[Tensor, list]) – Flat tensor or list of row values for the given feature column 'name'.

  • filename (str) – Filepath where your predictions will be saved. Defaults to 'predictions.pt'.
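
A sketch of accumulating predictions across batches; ids and preds stand in for whatever your model produces:

def test_step(self, batch, batch_idx):
    ids, preds = ...
    result = EvalResult()
    # each call adds one named column; rows accumulate across batches
    # and are written to disk when the evaluation loop ends
    result.write('ids', ids)
    result.write('preds', preds)
    return result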

write_dict(predictions_dict, filename='predictions.pt')[source]

Calls EvalResult.write() for each key-value pair in predictions_dict.

It is recommended that you use this method instead of .write() when you need to store more than one column of predictions in your output file.

Example:

predictions_to_write = {'preds': ['cat', 'dog'], 'ids': tensor([0, 1])}
result.write_dict(predictions_to_write)
Parameters
  • predictions_dict (dict) – Dict of predictions to store and then write to filename at eval end.

  • filename (str, optional) – File where your predictions will be saved. Defaults to 'predictions.pt'.
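
The same sketch as for write(), collapsed into a single call (the filename is arbitrary):

def test_step(self, batch, batch_idx):
    ids, preds = ...
    result = EvalResult()
    result.write_dict({'ids': ids, 'preds': preds}, filename='test_predictions.pt')
    return result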

class pytorch_lightning.core.step_result.Result(minimize=None, early_stop_on=None, checkpoint_on=None, hiddens=None)[source]

Bases: dict, typing.Generic

_Result__set_meta(name, value, prog_bar, logger, on_step, on_epoch, reduce_fx, tbptt_pad_token, tbptt_reduce_fx)[source]
_assert_grad_tensor_metric(name, x, additional_err='')[source]
_assert_tensor_metric(name, potential_metric)[source]
detach()[source]
dp_reduce()[source]
drop_hiddens()[source]
classmethod gather(outputs)[source]
get_batch_log_metrics()[source]

Gets the metrics to log at the end of the batch step

Return type

dict

get_batch_pbar_metrics()[source]

Gets the metrics to display in the progress bar at the end of the batch step

get_batch_sizes()[source]
get_callback_metrics()[source]
Return type

dict

get_epoch_log_metrics()[source]

Gets the metrics to log at the end of the epoch

Return type

dict

get_epoch_pbar_metrics()[source]

Gets the metrics to display in the progress bar at the end of the epoch

log(name, value, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]
classmethod padded_gather(outputs)[source]
classmethod reduce_across_time(time_outputs)[source]
classmethod reduce_on_epoch_end(outputs)[source]
rename_keys(map_dict)[source]

Renames keys according to the given mapping. Useful when renaming many variables at once.

Parameters

map_dict (dict) – mapping from existing key names to their new names
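
For example (the key names are illustrative):

# rename logged keys in place, e.g. to strip a prefix before merging results
result.rename_keys({'val_loss': 'loss', 'val_acc': 'acc'})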

track_batch_size(batch_size)[source]
property should_reduce_on_epoch_end[source]
Return type

bool

class pytorch_lightning.core.step_result.TrainResult(minimize=None, early_stop_on=None, checkpoint_on=None, hiddens=None)[source]

Bases: pytorch_lightning.core.step_result.Result

Used in the train loop to auto-log to a logger or the progress bar without needing to define a training_step_end or training_epoch_end method.

Example:

def training_step(self, batch, batch_idx):
    loss = ...
    result = pl.TrainResult(loss)
    result.log('train_loss', loss)
    return result

# without a val/test loop, you can still model checkpoint or early stop
def training_step(self, batch, batch_idx):
    loss = ...
    result = pl.TrainResult(loss, early_stop_on=loss, checkpoint_on=loss)
    result.log('train_loss', loss)
    return result
Parameters
  • minimize (Tensor, optional) – metric to minimize during training (typically the loss)

  • early_stop_on (Tensor, optional) – metric to early stop on

  • checkpoint_on (Tensor, optional) – metric to checkpoint on

  • hiddens (Tensor, optional) – hiddens to pass along when using truncated back-propagation through time
log(name, value, prog_bar=False, logger=True, on_step=True, on_epoch=False, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a key, value

Example:

result.log('train_loss', loss)

# defaults used
result.log(
    name,
    value,
    on_step=True,
    on_epoch=False,
    logger=True,
    prog_bar=False,
    reduce_fx=torch.mean,
    enable_graph=False
)
Parameters
  • name – key name

  • value – value to log

  • prog_bar (bool) – if True, logs to the progress bar

  • logger (bool) – if True, logs to the logger

  • on_step (bool) – if True, logs the output of training_step at each step

  • on_epoch (bool) – if True, logs the value aggregated over the training epoch

  • reduce_fx (Callable) – reduction function; torch.mean by default

  • tbptt_reduce_fx (Callable) – function to reduce on truncated back-propagation through time

  • tbptt_pad_token (int) – token to use for padding

  • enable_graph (bool) – if True, will not auto-detach the graph

  • sync_dist (bool) – if True, reduces the metric across GPUs/TPUs

  • sync_dist_op (Union[Any, str]) – the reduction operation to use when syncing (e.g. 'mean')

  • sync_dist_group (Optional[Any]) – the DDP group to sync across
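
A sketch of a training_step using these options; note the defaults here (on_step=True, on_epoch=False) differ from EvalResult.log():

def training_step(self, batch, batch_idx):
    loss = ...
    result = pl.TrainResult(loss)
    # log the per-step loss and also its epoch mean, shown in the progress bar
    result.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
    return result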

log_dict(dictionary, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a dictionary of values at once.

Example:

values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
result.log_dict(values)
Parameters
  • dictionary (dict) – key-value pairs (str, Tensor)

  • prog_bar (bool) – if True, logs to the progress bar

  • logger (bool) – if True, logs to the logger

  • on_step (bool) – if True, logs the output of training_step at each step

  • on_epoch (bool) – if True, logs the value aggregated over the training epoch

  • reduce_fx (Callable) – reduction function; torch.mean by default

  • tbptt_reduce_fx (Callable) – function to reduce on truncated back-propagation through time

  • tbptt_pad_token (int) – token to use for padding

  • enable_graph (bool) – if True, will not auto-detach the graph

  • sync_dist (bool) – if True, reduces the metric across GPUs/TPUs

  • sync_dist_op (Union[Any, str]) – the reduction operation to use when syncing (e.g. 'mean')

  • sync_dist_group (Optional[Any]) – the DDP group to sync across

pytorch_lightning.core.step_result.collate_tensors(items)[source]
Return type

Union[Tensor, List, Tuple]

pytorch_lightning.core.step_result.recursive_gather(outputs, result=None)[source]
Return type

Optional[MutableMapping]

pytorch_lightning.core.step_result.recursive_stack(result)[source]
pytorch_lightning.core.step_result.weighted_mean(result, weights)[source]
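
A sketch of weighted_mean, assuming it computes sum(result * weights) / sum(weights), e.g. averaging per-batch losses weighted by batch size:

import torch
from pytorch_lightning.core.step_result import weighted_mean

losses = torch.tensor([0.5, 0.3, 0.2])    # per-batch mean losses
batch_sizes = torch.tensor([32, 32, 16])  # per-batch sample counts
epoch_loss = weighted_mean(losses, batch_sizes)
# (0.5*32 + 0.3*32 + 0.2*16) / 80 = 0.36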