pytorch_lightning.loggers.wandb module¶
Weights and Biases¶
class pytorch_lightning.loggers.wandb.WandbLogger(name=None, save_dir=None, offline=False, id=None, anonymous=False, version=None, project=None, tags=None, log_model=False, experiment=None, entity=None, group=None)[source]
Bases: pytorch_lightning.loggers.base.LightningLoggerBase
Log using Weights and Biases. Install it with pip:
pip install wandb
- Parameters
  - offline (bool) – Run offline (data can be streamed later to wandb servers).
  - id (Optional[str]) – Sets the version, mainly used to resume a previous run.
  - anonymous (bool) – Enables or explicitly disables anonymous logging.
  - version (Optional[str]) – Sets the version, mainly used to resume a previous run.
  - project (Optional[str]) – The name of the project to which this run will belong.
  - tags (Optional[List[str]]) – Tags associated with this run.
  - log_model (bool) – Save checkpoints in wandb dir to upload on W&B servers.
  - experiment – WandB experiment object.
  - entity – The team posting this run (default: your username or your default team).
  - group (Optional[str]) – A unique string shared by all runs in a given group.
Example
>>> from pytorch_lightning.loggers import WandbLogger
>>> from pytorch_lightning import Trainer
>>> wandb_logger = WandbLogger()
>>> trainer = Trainer(logger=wandb_logger)
See also
Tutorial on how to use W&B with PyTorch Lightning.
log_metrics(metrics, step=None)[source]
Records metrics. This method logs metrics as soon as it receives them. If you want to aggregate metrics for one specific step, use the agg_and_log_metrics() method.
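The difference between the two methods is buffering: log_metrics() forwards values immediately, while agg_and_log_metrics() collects values for the current step and flushes an aggregate when the step changes. A minimal plain-Python sketch of that buffering behaviour, assuming mean aggregation (the class and log_fn callback are illustrative stand-ins, not library API):

```python
from collections import defaultdict

class StepAggregator:
    """Sketch of per-step metric aggregation, as agg_and_log_metrics()
    provides on top of log_metrics() (aggregation assumed to be the mean)."""

    def __init__(self, log_fn):
        self._log_fn = log_fn          # stand-in for the logger's log_metrics
        self._step = None
        self._buffer = defaultdict(list)

    def agg_and_log_metrics(self, metrics, step=None):
        # A new step flushes whatever was buffered for the previous one.
        if step != self._step and self._buffer:
            self._flush()
        self._step = step
        for name, value in metrics.items():
            self._buffer[name].append(value)

    def _flush(self):
        aggregated = {k: sum(v) / len(v) for k, v in self._buffer.items()}
        self._log_fn(aggregated, self._step)
        self._buffer.clear()

logged = []
agg = StepAggregator(lambda metrics, step: logged.append((step, metrics)))
agg.agg_and_log_metrics({"loss": 1.0}, step=0)
agg.agg_and_log_metrics({"loss": 3.0}, step=0)  # same step: buffered, not logged
agg.agg_and_log_metrics({"loss": 5.0}, step=1)  # step changed: step 0 flushed
print(logged)  # [(0, {'loss': 2.0})]
```

Calling log_metrics() directly would instead produce one wandb entry per call, with no averaging across calls that share a step.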
property experiment[source]
Actual wandb object. To use wandb features in your LightningModule, do the following.
Example:
self.logger.experiment.some_wandb_function()
- Return type
  Run
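The experiment property exposes the underlying wandb Run so that anything the wandb library offers remains reachable through the logger. A common way to implement such a property is to create the run lazily on first access and reuse it afterwards; the sketch below shows that pattern in plain Python (the class name and the object() stand-in for the Run are illustrative assumptions, not the library's implementation).

```python
class LazyExperimentSketch:
    """Sketch of a lazily created `experiment` property: the underlying
    run object is built on first access and cached for later accesses."""

    def __init__(self):
        self._experiment = None
        self.init_calls = 0  # counts how often the run was created

    @property
    def experiment(self):
        if self._experiment is None:
            self.init_calls += 1
            self._experiment = object()  # stand-in for a wandb Run
        return self._experiment

logger = LazyExperimentSketch()
first = logger.experiment
second = logger.experiment
print(first is second, logger.init_calls)  # True 1
```

This is why repeated calls like self.logger.experiment.some_wandb_function() all act on the same run.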