- class pytorch_lightning.loggers.WandbLogger(name=None, save_dir=None, offline=False, id=None, anonymous=None, version=None, project=None, log_model=False, experiment=None, prefix='', sync_step=None, **kwargs)
Log using Weights and Biases.

Install it with pip:

```shell
pip install wandb
```
Log checkpoints created by ModelCheckpoint as W&B artifacts.

- if `log_model == 'all'`, checkpoints are logged during training.
- if `log_model == True`, checkpoints are logged at the end of training, except when `save_top_k == -1`, which also logs every checkpoint during training.
- if `log_model == False` (default), no checkpoint is logged.
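A minimal sketch of wiring these pieces together, assuming pytorch_lightning and wandb are installed; the `val_loss` monitor and `save_top_k` value are placeholder choices:

```python
def build_trainer():
    # imports are deferred so this sketch is importable without the libraries
    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint
    from pytorch_lightning.loggers import WandbLogger

    # ModelCheckpoint decides when checkpoints are written;
    # log_model='all' uploads each one as a W&B artifact during training
    checkpoint_cb = ModelCheckpoint(monitor="val_loss", save_top_k=3)
    wandb_logger = WandbLogger(project="MNIST", log_model="all")
    return Trainer(logger=wandb_logger, callbacks=[checkpoint_cb])
```

With `log_model=True` instead, only the checkpoints that remain at the end of training would be uploaded.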
experiment – WandB experiment object. Automatically set when creating a run.
**kwargs – Arguments passed to `wandb.init()`, like `entity`, `group`, `tags`, etc.
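A short sketch of forwarding extra keyword arguments to `wandb.init()`; the `entity`, `group`, and `tags` values below are made-up placeholders:

```python
def build_logger():
    # deferred import so the sketch does not require the library at load time
    from pytorch_lightning.loggers import WandbLogger

    # any keyword WandbLogger does not consume itself is passed to wandb.init()
    return WandbLogger(
        project="MNIST",
        entity="my-team",    # W&B team or username (placeholder)
        group="ablation",    # groups related runs in the W&B UI
        tags=["baseline"],   # free-form run tags
    )
```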
ImportError – If the required WandB package is not installed on the device.
MisconfigurationException – If both `log_model` and `offline` are set to `True`.
```python
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning import Trainer

# instrument experiment with W&B
wandb_logger = WandbLogger(project='MNIST', log_model='all')
trainer = Trainer(logger=wandb_logger)

# log gradients and model topology
wandb_logger.watch(model)
```
Called after the model checkpoint callback saves a new checkpoint.
model_checkpoint – the model checkpoint callback instance.
Do any processing that is necessary to finalize an experiment.
- log_metrics(metrics, step=None)
Records metrics. This method logs metrics as soon as it receives them. If you want to aggregate metrics for one specific step, use the agg_and_log_metrics() method.
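A small sketch of calling `log_metrics` directly; the metric names and the idea of using the epoch as the step are illustrative choices, not part of the API:

```python
def log_epoch_metrics(wandb_logger, epoch, loss, acc):
    """Send a metrics dict to W&B at an explicit step.

    Each call is forwarded immediately; nothing is aggregated,
    so repeated calls for the same step produce separate log events.
    """
    wandb_logger.log_metrics({"loss": loss, "acc": acc}, step=epoch)
```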
- property experiment: wandb.wandb_run.Run
Actual wandb object. To use wandb features in your LightningModule, access the run directly via `self.logger.experiment`.
- Return type: Run
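A sketch of using the underlying run from inside a module; `MyModule` stands in for a `pl.LightningModule` subclass, and the logged dict is illustrative:

```python
class MyModule:  # stands in for pl.LightningModule
    def __init__(self, logger):
        self.logger = logger

    def on_train_epoch_end(self):
        # self.logger.experiment is the raw wandb Run object,
        # so any native wandb call (log, log_artifact, ...) is available:
        self.logger.experiment.log({"examples_seen": 1000})
```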
- property save_dir: Optional[str]
Return the root directory where experiment logs get saved, or None if the logger does not save data locally.