
wandb

Classes

WandbLogger – Log using Weights and Biases.

Weights and Biases Logger

class pytorch_lightning.loggers.wandb.WandbLogger(name=None, save_dir=None, offline=False, id=None, anonymous=None, version=None, project=None, log_model=False, experiment=None, prefix='', sync_step=None, **kwargs)[source]

Bases: pytorch_lightning.loggers.base.LightningLoggerBase

Log using Weights and Biases.

Install it with pip:

pip install wandb

Parameters
  • name (Optional[str]) – Display name for the run.

  • save_dir (Optional[str]) – Path where data is saved (wandb dir by default).

  • offline (Optional[bool]) – Run offline (data can be streamed later to wandb servers).

  • id (Optional[str]) – Sets the version, mainly used to resume a previous run.

  • version (Optional[str]) – Same as id.

  • anonymous (Optional[bool]) – Enables or explicitly disables anonymous logging.

  • project (Optional[str]) – The name of the project to which this run will belong.

  • log_model (Union[str, bool]) –

    Log checkpoints created by ModelCheckpoint as W&B artifacts (see the sketch after this parameter list).

    • if log_model == 'all', checkpoints are logged during training.

    • if log_model == True, checkpoints are logged at the end of training, except when save_top_k == -1, in which case checkpoints are also logged during training.

    • if log_model == False (default), no checkpoint is logged.

  • prefix (Optional[str]) – A string to put at the beginning of metric keys.

  • experiment – WandB experiment object. Automatically set when creating a run.

  • **kwargs – Arguments passed to wandb.init() like entity, group, tags, etc.
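
As a sketch of how log_model interacts with ModelCheckpoint (the project, monitor, and save_top_k values below are illustrative):

Example:

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import WandbLogger

# log_model='all': upload every checkpoint as a W&B artifact while training runs
wandb_logger = WandbLogger(project='MNIST', log_model='all')

# with log_model=True instead, only the checkpoints kept at the end of
# training would be uploaded (all of them if save_top_k=-1)
checkpoint_callback = ModelCheckpoint(monitor='val_loss', save_top_k=3)

trainer = Trainer(logger=wandb_logger, callbacks=[checkpoint_callback])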

Raises
  • ImportError – If the required WandB package is not installed.

  • MisconfigurationException – If both log_model and offline are set to True.

Example:

from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning import Trainer

# instrument experiment with W&B
wandb_logger = WandbLogger(project='MNIST', log_model='all')
trainer = Trainer(logger=wandb_logger)

# log gradients and model topology
wandb_logger.watch(model)
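
For machines without network access, a minimal offline sketch (the save_dir path is illustrative); the cached run can be uploaded afterwards with the wandb sync CLI command:

# offline mode: stream data to local disk only, upload later with `wandb sync`
wandb_logger = WandbLogger(project='MNIST', offline=True, save_dir='./wandb_logs')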


after_save_checkpoint(checkpoint_callback)[source]

Called after the model checkpoint callback saves a new checkpoint.

Parameters

checkpoint_callback – The model checkpoint callback instance

finalize(status)[source]

Do any processing that is necessary to finalize an experiment.

Parameters

status (str) – Status that the experiment finished with (e.g. success, failed, aborted)

Return type

None

log_hyperparams(params)[source]

Record hyperparameters.

Parameters
  • params (Union[Dict[str, Any], Namespace]) – Dictionary or Namespace containing the hyperparameters

  • args – Optional positional arguments, depending on the specific logger being used

  • kwargs – Optional keyword arguments, depending on the specific logger being used

Return type

None
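
Lightning normally calls this method for you when a run's hyperparameters are saved, but it can also be called directly; a minimal sketch with illustrative names and values:

Example:

wandb_logger.log_hyperparams({'learning_rate': 1e-3, 'batch_size': 32})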

log_metrics(metrics, step=None)[source]

Records metrics. This method logs metrics as soon as it receives them. If you want to aggregate metrics for one specific step, use the agg_and_log_metrics() method.

Parameters
  • metrics (Dict[str, float]) – Dictionary with metric names as keys and measured quantities as values

  • step (Optional[int]) – Step number at which the metrics should be recorded

Return type

None
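
Inside a LightningModule you would typically rely on self.log() to route metrics here, but direct calls also work; a minimal sketch (metric names and values are illustrative):

Example:

# log a metric dict at an explicit global step
wandb_logger.log_metrics({'train/loss': 0.42, 'train/acc': 0.91}, step=100)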

property experiment: wandb.wandb_run.Run

The actual wandb object. To use wandb features in your LightningModule, do the following.

Example:

self.logger.experiment.some_wandb_function()

Return type

Run
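
For instance, the underlying Run exposes the full wandb API; a minimal sketch of logging image media from a validation step (batch structure and key names are illustrative):

Example:

import wandb

# inside your LightningModule
def validation_step(self, batch, batch_idx):
    images, _ = batch
    # log rich media through the raw wandb Run
    self.logger.experiment.log({'val/examples': [wandb.Image(images[0])]})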

property name: Optional[str]

Return the experiment name.

Return type

Optional[str]

property save_dir: Optional[str]

Return the root directory where experiment logs get saved, or None if the logger does not save data locally.

Return type

Optional[str]

property version: Optional[str]

Return the experiment version.

Return type

Optional[str]
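
Because version is the same as id, a stored version can be passed back to a new logger to resume logging to the same W&B run; a minimal sketch:

Example:

# capture the version of the current run ...
run_version = wandb_logger.version

# ... and later attach a new logger to the same W&B run
resumed_logger = WandbLogger(project='MNIST', id=run_version)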