pytorch_lightning.trainer.logging module

class pytorch_lightning.trainer.logging.TrainerLoggingMixin[source]

Bases: abc.ABC

add_progress_bar_metrics(metrics)[source]
configure_logger(logger)[source]
log_metrics(metrics, grad_norm_dic, step=None)[source]

Logs the metrics dictionary passed in. If the step parameter is None and a "step" key is present in metrics, metrics["step"] is used as the step.

Parameters
  • metrics (dict) – Metric values

  • grad_norm_dic (dict) – Gradient norms

  • step (int) – Step for which metrics should be logged. Defaults to self.global_step.
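
The step-resolution rule described above can be sketched as a small standalone function. This is an illustrative sketch, not the actual Lightning implementation; the function name resolve_step is hypothetical.

```python
def resolve_step(metrics, step=None, global_step=0):
    """Sketch of how log_metrics chooses the logging step:
    an explicit ``step`` argument wins, otherwise a "step" entry
    inside ``metrics``, otherwise the trainer's ``global_step``."""
    if step is not None:
        return step
    if "step" in metrics:
        return metrics["step"]
    return global_step
```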

metrics_to_scalars(metrics)[source]
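
Judging only by its name, metrics_to_scalars converts tensor-valued metrics into plain Python scalars. The sketch below is an assumption about that behavior, not the real implementation; it duck-types on ``.item()`` so it runs without torch installed, whereas the real method operates on ``torch.Tensor`` values.

```python
def metrics_to_scalars(metrics):
    """Sketch: convert tensor-like values (anything with .item())
    in a possibly nested metrics dict into plain Python scalars."""
    scalars = {}
    for key, value in metrics.items():
        if hasattr(value, "item"):        # e.g. a 0-dim torch.Tensor
            value = value.item()
        elif isinstance(value, dict):     # recurse into nested metric dicts
            value = metrics_to_scalars(value)
        scalars[key] = value
    return scalars
```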
process_output(output, train=False)[source]

Reduces output according to the training mode.

Separates the loss from the logging and progress bar metrics.
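
The separation performed here can be sketched as follows, assuming the pre-1.0 Lightning convention that a training step may return a dict with 'loss', 'log', and 'progress_bar' entries. The function name split_output is hypothetical; the real method additionally reduces the output across devices depending on the training mode.

```python
def split_output(output):
    """Sketch: split a training-step output dict into its loss,
    logging metrics, and progress-bar metrics."""
    loss = output.get("loss")
    log_metrics = output.get("log", {})
    progress_bar_metrics = output.get("progress_bar", {})
    # Any remaining keys (e.g. hidden states) are left untouched here;
    # the real process_output also handles distributed reduction.
    return loss, log_metrics, progress_bar_metrics
```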

reduce_distributed_output(output, num_gpus)[source]
current_epoch: int = None[source]
default_root_dir: str = None[source]
global_rank: int = None[source]
global_step: int = None[source]
log_gpu_memory: ... = None[source]
logged_metrics: ... = None[source]
logger: Union[LightningLoggerBase, bool] = None[source]
num_gpus: int = None[source]
on_gpu: bool = None[source]
progress_bar_metrics: ... = None[source]
slurm_job_id: int = None[source]
use_ddp2: bool = None[source]
use_dp: bool = None[source]