WandbLogger

class pytorch_lightning.loggers.WandbLogger(name=None, save_dir=None, offline=False, id=None, anonymous=False, version=None, project=None, log_model=False, experiment=None, **kwargs)

Bases: pytorch_lightning.loggers.base.LightningLoggerBase
Log using Weights and Biases.
Install it with pip:
pip install wandb
Parameters:

- offline (bool) – Run offline (data can be streamed later to wandb servers).
- id (Optional[str]) – Sets the version, mainly used to resume a previous run.
- anonymous (bool) – Enables or explicitly disables anonymous logging.
- version (Optional[str]) – Sets the version, mainly used to resume a previous run.
- project (Optional[str]) – The name of the project to which this run will belong.
- log_model (bool) – Save checkpoints in wandb dir to upload on W&B servers.
- experiment – WandB experiment object.
- **kwargs – Additional arguments like entity, group, tags, etc. used by wandb.init() can be passed as keyword arguments in this logger.
Example:

    from pytorch_lightning.loggers import WandbLogger
    from pytorch_lightning import Trainer

    wandb_logger = WandbLogger()
    trainer = Trainer(logger=wandb_logger)
See also

Tutorial on how to use W&B with PyTorch Lightning.
log_metrics(metrics, step=None)

Records metrics. This method logs metrics as soon as it receives them. If you want to aggregate metrics for one specific step, use the agg_and_log_metrics() method.
property experiment

Actual wandb object. To use wandb features in your LightningModule do the following.

Example:

    self.logger.experiment.some_wandb_function()

Return type: Run
property save_dir

Return the root directory where experiment logs get saved, or None if the logger does not save data locally.