pytorch_lightning.core.step_result module
class pytorch_lightning.core.step_result.EvalResult(early_stop_on=None, checkpoint_on=None, hiddens=None)[source]

Bases: pytorch_lightning.core.step_result.Result

Used in the val/test loop to auto-log to a logger or progress bar without needing to define a _step_end or _epoch_end method.
Example:

    def validation_step(self, batch, batch_idx):
        loss = ...
        result = EvalResult()
        result.log('val_loss', loss)
        return result

    def test_step(self, batch, batch_idx):
        loss = ...
        result = EvalResult()
        result.log('test_loss', loss)
        return result
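The constructor arguments shown in the signature above can also wire a metric into checkpointing and early stopping directly. A minimal sketch, assuming the module defines a loss_fn attribute (illustrative, not part of this API):

    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def validation_step(self, batch, batch_idx):
            x, y = batch
            loss = self.loss_fn(self(x), y)  # loss_fn is an illustrative attribute
            # monitor the validation loss for both early stopping and checkpointing
            result = pl.EvalResult(early_stop_on=loss, checkpoint_on=loss)
            result.log('val_loss', loss)
            return result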
Parameters:
    - early_stop_on – metric to monitor for early stopping
    - checkpoint_on – metric to monitor for checkpointing
    - hiddens – hidden state to carry over for truncated back-propagation through time
log(name, value, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a key/value pair.

Example:

    result.log('val_loss', loss)  # defaults used

    result.log(
        name,
        value,
        on_step=False,
        on_epoch=True,
        logger=True,
        prog_bar=False,
        reduce_fx=torch.mean
    )
Parameters:
    - name – key name
    - value – value to log
    - on_step (bool) – if True, logs the output of validation_step or test_step
    - on_epoch (bool) – if True, logs the output aggregated over the epoch
    - tbptt_reduce_fx (Callable) – function to reduce on truncated back-prop
    - enable_graph (bool) – if True, will not auto-detach the graph
    - sync_dist (bool) – if True, reduces the metric across GPUs/TPUs
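For multi-device runs, sync_dist reduces the logged value across processes before it is recorded. A sketch, assuming a distributed run and an illustrative loss_fn:

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)  # loss_fn is illustrative
        result = pl.EvalResult()
        # average 'val_loss' across all GPUs/TPUs before logging
        result.log('val_loss', loss, sync_dist=True, sync_dist_op='mean')
        return result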
log_dict(dictionary, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a dictionary of values at once.

Example:

    values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
    result.log_dict(values)
Parameters:
    - on_step (bool) – if True, logs the output of validation_step or test_step
    - on_epoch (bool) – if True, logs the output aggregated over the epoch
    - tbptt_reduce_fx (Callable) – function to reduce on truncated back-prop
    - enable_graph (bool) – if True, will not auto-detach the graph
    - sync_dist (bool) – if True, reduces the metric across GPUs/TPUs
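A sketch of building the dictionary inside validation_step; loss_fn and the inline accuracy computation are illustrative:

    def validation_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x)
        result = pl.EvalResult()
        # each entry is logged with the defaults: on_step=False, on_epoch=True
        result.log_dict({
            'val_loss': self.loss_fn(preds, y),                    # loss_fn is illustrative
            'val_acc': (preds.argmax(dim=1) == y).float().mean(),
        })
        return result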
write(name, values, filename='predictions.pt')[source]

Add a feature name and value pair to the collection of predictions that will be written to disk on validation_end or test_end. If running on multiple GPUs, you will get one prediction file per GPU, with the rank prepended onto the filename.

Example:

    result = pl.EvalResult()
    result.write('ids', [0, 1, 2])
    result.write('preds', ['cat', 'dog', 'dog'])
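Assuming the written file holds the name/value pairs from the calls above (the exact on-disk layout is an assumption here), it can presumably be read back with torch.load after the loop finishes:

    import torch

    # after the test/validation loop has written the file
    predictions = torch.load('predictions.pt')
    print(predictions['ids'])    # e.g. [0, 1, 2]
    print(predictions['preds'])  # e.g. ['cat', 'dog', 'dog']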
write_dict(predictions_dict, filename='predictions.pt')[source]

Calls EvalResult.write() for each key/value pair in predictions_dict.

It is recommended to use this method instead of .write() when you need to store more than one column of predictions in your output file.

Example:

    predictions_to_write = {'preds': ['cat', 'dog'], 'ids': tensor([0, 1])}
    result.write_dict(predictions_to_write)
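A sketch of this pattern inside test_step; the key names and the argmax decoding are illustrative:

    def test_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x).argmax(dim=1)
        result = pl.EvalResult()
        # both columns end up in the same predictions file
        result.write_dict({'preds': preds, 'targets': y})
        return result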
class pytorch_lightning.core.step_result.Result(minimize=None, early_stop_on=None, checkpoint_on=None, hiddens=None)[source]

Bases: dict, typing.Generic
_Result__set_meta(name, value, prog_bar, logger, on_step, on_epoch, reduce_fx, tbptt_pad_token, tbptt_reduce_fx)[source]
log(name, value, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]
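Because Result subclasses dict, a logged entry should be reachable with plain dictionary access. A minimal sketch; that logged names land directly in the dict as keys is an assumption:

    import torch
    from pytorch_lightning.core.step_result import Result

    result = Result()
    result.log('loss', torch.tensor(0.25))
    print(isinstance(result, dict))  # True: standard dict operations apply
    print('loss' in result)          # logged names appear as keys (assumed)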
class pytorch_lightning.core.step_result.TrainResult(minimize=None, early_stop_on=None, checkpoint_on=None, hiddens=None)[source]

Bases: pytorch_lightning.core.step_result.Result

Used in the train loop to auto-log to a logger or progress bar without needing to define a training_step_end or training_epoch_end method.
Example:

    def training_step(self, batch, batch_idx):
        loss = ...
        result = pl.TrainResult(loss)
        result.log('train_loss', loss)
        return result

    # without a val/test loop, the model can still checkpoint or early stop
    def training_step(self, batch, batch_idx):
        loss = ...
        result = pl.TrainResult(loss, early_stop_on=loss, checkpoint_on=loss)
        result.log('train_loss', loss)
        return result
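Given the hiddens argument in the signature and the tbptt_* options on log, a sketch of carrying hidden state across truncated back-prop splits; the rnn and loss_fn attributes are illustrative:

    def training_step(self, batch, batch_idx, hiddens):
        # with truncated_bptt_steps set on the Trainer, Lightning passes the
        # previous split's hidden state back in as `hiddens`
        x, y = batch
        out, new_hiddens = self.rnn(x, hiddens)  # self.rnn is illustrative
        loss = self.loss_fn(out, y)              # loss_fn is illustrative
        result = pl.TrainResult(minimize=loss, hiddens=new_hiddens)
        result.log('train_loss', loss)
        return result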
Parameters:
    - minimize – metric to minimize during training
    - early_stop_on – metric to monitor for early stopping
    - checkpoint_on – metric to monitor for checkpointing
    - hiddens – hidden state to carry over for truncated back-propagation through time
log(name, value, prog_bar=False, logger=True, on_step=True, on_epoch=False, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a key/value pair.

Example:

    result.log('train_loss', loss)  # defaults used

    result.log(
        name,
        value,
        on_step=True,
        on_epoch=False,
        logger=True,
        prog_bar=False,
        reduce_fx=torch.mean,
        enable_graph=False
    )
Parameters:
    - name – key name
    - value – value to log
    - on_step (bool) – if True, logs the output of training_step at each step
    - on_epoch (bool) – if True, logs the output aggregated over the epoch
    - tbptt_reduce_fx (Callable) – function to reduce on truncated back-prop
    - enable_graph (bool) – if True, will not auto-detach the graph
    - sync_dist (bool) – if True, reduces the metric across GPUs/TPUs
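Note that the defaults here are inverted relative to EvalResult (per-step on, per-epoch off). A sketch that logs both views of the same metric, with an illustrative loss_fn:

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)  # loss_fn is illustrative
        result = pl.TrainResult(loss)
        # log the raw value every step and also its epoch-level mean (reduce_fx default)
        result.log('train_loss', loss, on_step=True, on_epoch=True)
        return result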
log_dict(dictionary, prog_bar=False, logger=True, on_step=False, on_epoch=True, reduce_fx=torch.mean, tbptt_reduce_fx=torch.mean, tbptt_pad_token=0, enable_graph=False, sync_dist=False, sync_dist_op='mean', sync_dist_group=None)[source]

Log a dictionary of values at once.

Example:

    values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
    result.log_dict(values)
Parameters:
    - on_step (bool) – if True, logs the output of training_step at each step
    - on_epoch (bool) – if True, logs the output aggregated over the epoch
    - tbptt_reduce_fx (Callable) – function to reduce on truncated back-prop
    - enable_graph (bool) – if True, will not auto-detach the graph
    - sync_dist (bool) – if True, reduces the metric across GPUs/TPUs
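Putting it together, a sketch of a training_step that logs several metrics in one call; loss_fn and the accuracy computation are illustrative:

    def training_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x)
        loss = self.loss_fn(preds, y)  # loss_fn is illustrative
        result = pl.TrainResult(loss)
        # one call logs every metric with the defaults shown above
        result.log_dict({
            'train_loss': loss,
            'train_acc': (preds.argmax(dim=1) == y).float().mean(),
        })
        return result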