pytorch_lightning.core.datamodule module
class pytorch_lightning.core.datamodule.LightningDataModule(train_transforms=None, val_transforms=None, test_transforms=None)

Bases: object
A DataModule standardizes the training, val, test splits, data preparation and transforms. The main advantage is consistent data splits, data preparation and transforms across models.
Example:
    class MyDataModule(LightningDataModule):
        def __init__(self):
            super().__init__()

        def prepare_data(self):
            # download, split, etc...
            # only called on 1 GPU/TPU in distributed
            pass

        def setup(self):
            # make assignments here (val/train/test split)
            # called on every process in DDP
            pass

        def train_dataloader(self):
            train_split = Dataset(...)
            return DataLoader(train_split)

        def val_dataloader(self):
            val_split = Dataset(...)
            return DataLoader(val_split)

        def test_dataloader(self):
            test_split = Dataset(...)
            return DataLoader(test_split)
A DataModule implements 5 key methods:
- prepare_data (things to do on 1 GPU/TPU, not on every GPU/TPU in distributed mode).
- setup (things to do on every accelerator in distributed mode).
- train_dataloader (the training dataloader).
- val_dataloader (the val dataloader(s)).
- test_dataloader (the test dataloader(s)).

This allows you to share a full dataset without explaining how to download, split, transform and process the data.
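Once such a DataModule is defined, the whole data pipeline can be handed to a Trainer in one object. A minimal usage sketch, assuming the MyDataModule above and a hypothetical LightningModule called MyModel:

    from pytorch_lightning import Trainer

    dm = MyDataModule()
    model = MyModel()  # hypothetical LightningModule defined elsewhere

    trainer = Trainer()
    trainer.fit(model, datamodule=dm)   # Lightning calls prepare_data/setup for you
    trainer.test(datamodule=dm)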
classmethod add_argparse_args(parent_parser)

Extends existing argparse by default LightningDataModule attributes.
- Return type
  ArgumentParser
classmethod from_argparse_args(args, **kwargs)

Create an instance from CLI arguments.
- Parameters
  - args (Union[Namespace, ArgumentParser]) – The parser or namespace to take arguments from. Only known arguments will be parsed and passed to the LightningDataModule.
  - **kwargs – Additional keyword arguments that may override ones in the parser or namespace. These must be valid DataModule arguments.
Example:
    parser = ArgumentParser(add_help=False)
    parser = LightningDataModule.add_argparse_args(parser)
    args = parser.parse_args()
    module = LightningDataModule.from_argparse_args(args)
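A sketch of how this plays out with a custom subclass; data_dir and batch_size are hypothetical arguments, not part of the base class:

    from argparse import ArgumentParser
    from pytorch_lightning import LightningDataModule

    class MNISTDataModule(LightningDataModule):
        def __init__(self, data_dir: str = './data', batch_size: int = 32):
            super().__init__()
            self.data_dir = data_dir
            self.batch_size = batch_size

    parser = ArgumentParser(add_help=False)
    parser = MNISTDataModule.add_argparse_args(parser)
    args = parser.parse_args(['--batch_size', '64'])

    # keyword arguments override values parsed from the CLI
    dm = MNISTDataModule.from_argparse_args(args, data_dir='/tmp/mnist')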
classmethod get_init_arguments_and_types()

Scans the DataModule signature and returns argument names, types and default values.

- Returns
  List of tuples of the form (argument name, set of argument types, argument default value).
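A rough illustration of the return shape (a sketch; the exact tuples depend on the class and Lightning version):

    for name, types, default in LightningDataModule.get_init_arguments_and_types():
        # one tuple per __init__ argument, e.g. ('train_transforms', ..., None)
        print(name, types, default)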
abstract prepare_data(*args, **kwargs)

Use this to download and prepare data. In distributed (GPU, TPU), this will only be called once.
Warning
Do not assign anything to the datamodule in this step since this will only be called on 1 GPU.
Pseudocode:
    dm.prepare_data()
    dm.setup()
Example:
    def prepare_data(self):
        download_imagenet()
        clean_imagenet()
        cache_imagenet()
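A more concrete sketch using torchvision's MNIST, where the download happens once and no state is assigned; self.data_dir is a hypothetical attribute set in __init__:

    from torchvision.datasets import MNIST

    def prepare_data(self):
        # download only; setup() will create the actual dataset objects
        MNIST(self.data_dir, train=True, download=True)
        MNIST(self.data_dir, train=False, download=True)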
abstract setup(stage=None)

Use this to load your data from file, split it, etc. You are safe to make state assignments here. This hook is called on every process when using DDP.
Example:
    def setup(self, stage):
        data = load_data(...)
        self.train_ds, self.val_ds, self.test_ds = split_data(data)
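Because the Trainer passes stage='fit' or stage='test' depending on the entry point, setup can skip work it does not need. A sketch, reusing the hypothetical load_data/split_data helpers above:

    def setup(self, stage=None):
        if stage in (None, 'fit'):
            data = load_data(...)
            self.train_ds, self.val_ds, _ = split_data(data)
        if stage in (None, 'test'):
            data = load_data(...)
            _, _, self.test_ds = split_data(data)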
abstract test_dataloader(*args, **kwargs)

Implement a PyTorch DataLoader for testing.

- Return type
  Union[DataLoader, List[DataLoader]]
- Returns
  A single PyTorch DataLoader, or a list of them.
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
You can also return a list of DataLoaders.
Example:
    def test_dataloader(self):
        dataset = MNIST(root=PATH, train=False, transform=transforms.ToTensor(), download=False)
        loader = torch.utils.data.DataLoader(dataset=dataset, shuffle=False)
        return loader
abstract train_dataloader(*args, **kwargs)

Implement a PyTorch DataLoader for training.

- Return type
  DataLoader
- Returns
  A single PyTorch DataLoader.
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Example:
    def train_dataloader(self):
        dataset = MNIST(root=PATH, train=True, transform=transforms.ToTensor(), download=False)
        loader = torch.utils.data.DataLoader(dataset=dataset)
        return loader
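In practice the training loader usually also sets a batch size and worker count; a sketch with hypothetical values (shuffle=True is safe because, per the note above, Lightning injects the correct distributed sampler when needed):

    def train_dataloader(self):
        dataset = MNIST(root=PATH, train=True, transform=transforms.ToTensor())
        return torch.utils.data.DataLoader(
            dataset,
            batch_size=64,    # hypothetical values
            num_workers=4,
            shuffle=True,
        )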
abstract transfer_batch_to_device(batch, device)

Override this hook if your DataLoader returns tensors wrapped in a custom data structure.

The data types listed below (and any arbitrary nesting of them) are supported out of the box:

- torch.Tensor or anything that implements .to(...)
- torchtext.data.batch.Batch
For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).
Example:
    def transfer_batch_to_device(self, batch, device):
        if isinstance(batch, CustomBatch):
            # move all tensors in your custom data structure to the device
            batch.samples = batch.samples.to(device)
            batch.targets = batch.targets.to(device)
        else:
            batch = super().transfer_batch_to_device(batch, device)
        return batch
- Parameters
  - batch – A batch of data that needs to be transferred to the new device.
  - device – The target device.
- Return type
  Any
- Returns
  A reference to the data on the new device.
Note
This hook should only transfer the data and not modify it, nor should it move the data to any other device than the one passed in as argument (unless you know what you are doing).
Note
This hook only runs on single GPU training (no data-parallel). If you need multi-GPU support for your custom batch objects, you need to define your custom DistributedDataParallel or LightningDistributedDataParallel and override configure_ddp().
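For context, a minimal sketch of the kind of custom batch structure this hook targets; CustomBatch and the collate function are hypothetical, matching the example above:

    import torch

    class CustomBatch:
        def __init__(self, samples, targets):
            self.samples = samples    # tensor of inputs
            self.targets = targets    # tensor of labels

    def collate(items):
        # wrap each mini-batch in the custom structure
        xs, ys = zip(*items)
        return CustomBatch(torch.stack(xs), torch.stack(ys))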
abstract val_dataloader(*args, **kwargs)

Implement a PyTorch DataLoader for validation.

- Return type
  Union[DataLoader, List[DataLoader]]
- Returns
  A single PyTorch DataLoader, or a list of them.
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
You can also return a list of DataLoaders; see the sketch after the example below.
Example:
    def val_dataloader(self):
        dataset = MNIST(root=PATH, train=False, transform=transforms.ToTensor(), download=False)
        loader = torch.utils.data.DataLoader(dataset=dataset, shuffle=False)
        return loader
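A sketch of the list-of-loaders case from the note above; the two validation datasets are hypothetical:

    def val_dataloader(self):
        # each loader gets its own dataloader_idx in validation_step
        clean = torch.utils.data.DataLoader(self.clean_val_ds, shuffle=False)
        noisy = torch.utils.data.DataLoader(self.noisy_val_ds, shuffle=False)
        return [clean, noisy]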
property has_prepared_data

Return bool letting you know if datamodule.prepare_data() has been called or not.

- Returns
  True if datamodule.prepare_data() has been called. False by default.
- Return type
  bool
property has_setup_fit

Return bool letting you know if datamodule.setup('fit') has been called or not.

- Returns
  True if datamodule.setup('fit') has been called. False by default.
- Return type
  bool
property has_setup_test

Return bool letting you know if datamodule.setup('test') has been called or not.

- Returns
  True if datamodule.setup('test') has been called. False by default.
- Return type
  bool
property test_transforms

Optional transforms (or collection of transforms) you can apply to the test dataset.
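A sketch of how these transform properties are typically populated, via the base-class constructor; this assumes a concrete subclass MyDataModule that forwards keyword arguments to LightningDataModule.__init__:

    from torchvision import transforms

    dm = MyDataModule(test_transforms=transforms.ToTensor())
    print(dm.test_transforms)   # the ToTensor() passed above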
pytorch_lightning.core.datamodule.track_data_hook_calls(fn)

A decorator that checks if prepare_data/setup have been called.

- When dm.prepare_data() is called, dm.has_prepared_data gets set to True
- When dm.setup('fit') is called, dm.has_setup_fit gets set to True
- When dm.setup('test') is called, dm.has_setup_test gets set to True
- When dm.setup() is called without a stage argument, both dm.has_setup_fit and dm.has_setup_test get set to True
- Parameters
  fn (function) – Function that will be tracked to see if it has been called.
- Returns
Decorated function that tracks its call status and saves it to private attrs in its obj instance.
- Return type
function
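To illustrate the tracking end to end, a minimal sketch using the MyDataModule from the top of this page:

    dm = MyDataModule()
    assert not dm.has_prepared_data

    dm.prepare_data()
    assert dm.has_prepared_data

    dm.setup()   # no stage argument: sets both flags
    assert dm.has_setup_fit and dm.has_setup_test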