datamodule

LightningDataModule for loading DataLoaders with ease.

Functions

track_data_hook_calls – A decorator that checks if prepare_data/setup have been called.

Classes

LightningDataModule – A DataModule standardizes the training, val, test splits, data preparation and transforms.
class pytorch_lightning.core.datamodule.LightningDataModule(train_transforms=None, val_transforms=None, test_transforms=None, dims=None)
Bases: pytorch_lightning.core.hooks.DataHooks, pytorch_lightning.core.hooks.CheckpointHooks
A DataModule standardizes the training, val, test splits, data preparation and transforms. The main advantage is consistent data splits, data preparation and transforms across models.
Example:
class MyDataModule(LightningDataModule):
    def __init__(self):
        super().__init__()

    def prepare_data(self):
        # download, split, etc...
        # only called on 1 GPU/TPU in distributed
        ...

    def setup(self, stage=None):
        # make assignments here (val/train/test split)
        # called on every process in DDP
        ...

    def train_dataloader(self):
        train_split = Dataset(...)
        return DataLoader(train_split)

    def val_dataloader(self):
        val_split = Dataset(...)
        return DataLoader(val_split)

    def test_dataloader(self):
        test_split = Dataset(...)
        return DataLoader(test_split)
A DataModule implements 5 key methods:
prepare_data (things to do on 1 GPU/TPU, not on every GPU/TPU in distributed mode).
setup (things to do on every accelerator in distributed mode).
train_dataloader – the training dataloader.
val_dataloader – the val dataloader(s).
test_dataloader – the test dataloader(s).
This allows you to share a full dataset without explaining how to download, split, transform and process the data.
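For context, here is a minimal usage sketch (not part of the API reference itself): it assumes a concrete MyDataModule like the skeleton shown above, with real datasets filled in, and a LightningModule called LitModel defined elsewhere.

from pytorch_lightning import Trainer

dm = MyDataModule()          # your DataModule subclass
model = LitModel()           # your LightningModule, defined elsewhere

trainer = Trainer()
trainer.fit(model, datamodule=dm)   # Lightning calls prepare_data/setup and requests the dataloaders
trainer.test(datamodule=dm)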
classmethod add_argparse_args(parent_parser)
Extends existing argparse by default LightningDataModule attributes.
classmethod from_argparse_args(args, **kwargs)
Create an instance from CLI arguments.

Parameters:

args (Union[Namespace, ArgumentParser]) – The parser or namespace to take arguments from. Only known arguments will be parsed and passed to the LightningDataModule.
**kwargs – Additional keyword arguments that may override ones in the parser or namespace. These must be valid DataModule arguments.
Example:
parser = ArgumentParser(add_help=False)
parser = LightningDataModule.add_argparse_args(parser)
args = parser.parse_args()
module = LightningDataModule.from_argparse_args(args)
classmethod get_init_arguments_and_types()
Scans the DataModule signature and returns argument names, types and default values.

Returns: (argument name, set with argument types, argument default value).
Return type: List with tuples of 3 values
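As a quick illustration (this snippet is not from the original reference), the returned list of 3-tuples can be inspected directly:

from pytorch_lightning import LightningDataModule

for name, types, default in LightningDataModule.get_init_arguments_and_types():
    # each entry is (argument name, accepted argument types, default value)
    print(name, types, default)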
abstract prepare_data(*args, **kwargs)
Use this to download and prepare data.
Warning
DO NOT set state on the model (use setup instead), since prepare_data is NOT called on every GPU in DDP/TPU.
Example:
def prepare_data(self):
    # good
    download_data()
    tokenize()
    etc()

    # bad
    self.split = data_split
    self.some_state = some_other_state()
In DDP prepare_data can be called in two ways (using Trainer(prepare_data_per_node)):
Once per node. This is the default and is only called on LOCAL_RANK=0.
Once in total. Only called on GLOBAL_RANK=0.
Example:
# DEFAULT
# called once per node on LOCAL_RANK=0 of that node
Trainer(prepare_data_per_node=True)

# call on GLOBAL_RANK=0 (great for shared file systems)
Trainer(prepare_data_per_node=False)
This is called before requesting the dataloaders:
model.prepare_data()
if ddp/tpu: init()
model.setup(stage)
model.train_dataloader()
model.val_dataloader()
model.test_dataloader()
size(dim=None)
Return the dimension of each input either as a tuple or list of tuples. You can index this just as you would with a torch tensor.
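A small sketch of how size relates to the dims constructor argument; the ExampleDataModule subclass and the (1, 28, 28) shape are only illustrative:

from pytorch_lightning import LightningDataModule

class ExampleDataModule(LightningDataModule):
    def __init__(self):
        # example shape (e.g. MNIST images); any tuple works here
        super().__init__(dims=(1, 28, 28))

dm = ExampleDataModule()
print(dm.size())    # (1, 28, 28) -- the full dims tuple
print(dm.size(0))   # 1 -- indexed just like a torch.Size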
abstract test_dataloader(*args, **kwargs)
Implement one or multiple PyTorch DataLoaders for testing.
The dataloader you return will not be called every epoch unless you set reload_dataloaders_every_epoch to True.

For data processing use the following pattern:

download in prepare_data()
process and split in setup()

However, the above are only necessary for distributed processing.
Warning

Do not assign state in prepare_data; by the time this hook runs, prepare_data() and setup() have already been called (see the call order under prepare_data above).
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Returns: Single or multiple PyTorch DataLoaders.
Example
def test_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False,
                    transform=transform, download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )
    return loader

# can also return multiple dataloaders
def test_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]
Note

If you don’t need a test dataset and a test_step(), you don’t need to implement this method.

Note

In the case where you return multiple test dataloaders, the test_step() will have an argument dataloader_idx which matches the order here.
abstract train_dataloader(*args, **kwargs)
Implement a PyTorch DataLoader for training.
Returns: Single PyTorch DataLoader.
The dataloader you return will not be called every epoch unless you set reload_dataloaders_every_epoch to True.

For data processing use the following pattern:

download in prepare_data()
process and split in setup()

However, the above are only necessary for distributed processing.
Warning

Do not assign state in prepare_data; by the time this hook runs, prepare_data() and setup() have already been called (see the call order under prepare_data above).
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Example
def train_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=True,
                    transform=transform, download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=True
    )
    return loader
abstract transfer_batch_to_device(batch, device)
Override this hook if your DataLoader returns tensors wrapped in a custom data structure.

The data types listed below (and any arbitrary nesting of them) are supported out of the box:
torch.Tensor or anything that implements .to(…)
torchtext.data.batch.Batch
For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).
Example:
def transfer_batch_to_device(self, batch, device):
    if isinstance(batch, CustomBatch):
        # move all tensors in your custom data structure to the device
        batch.samples = batch.samples.to(device)
        batch.targets = batch.targets.to(device)
    else:
        batch = super().transfer_batch_to_device(batch, device)
    return batch
Returns: A reference to the data on the new device.
Note
This hook should only transfer the data and not modify it, nor should it move the data to any other device than the one passed in as argument (unless you know what you are doing).
Note
This hook only runs on single GPU training (no data-parallel). If you need multi-GPU support for your custom batch objects, you need to define your custom DistributedDataParallel or LightningDistributedDataParallel and override configure_ddp().

See also

move_data_to_device()
apply_to_collection()
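The CustomBatch container referenced in the example above is not defined in these docs; a hypothetical minimal version, together with a collate_fn that makes a DataLoader yield it, might look like this:

import torch

class CustomBatch:
    # hypothetical wrapper holding sample and target tensors together
    def __init__(self, samples, targets):
        self.samples = samples
        self.targets = targets

def collate_custom(items):
    # items is a list of (sample_tensor, target_tensor) pairs from the dataset
    samples = torch.stack([sample for sample, _ in items])
    targets = torch.stack([target for _, target in items])
    return CustomBatch(samples, targets)

Passing collate_fn=collate_custom to a DataLoader would then produce CustomBatch objects, which is exactly the situation transfer_batch_to_device is meant to handle.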
abstract val_dataloader(*args, **kwargs)
Implement one or multiple PyTorch DataLoaders for validation.
The dataloader you return will not be called every epoch unless you set reload_dataloaders_every_epoch to True.

It’s recommended that all data downloads and preparation happen in prepare_data().

Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Returns: Single or multiple PyTorch DataLoaders.
Examples
def val_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False,
                    transform=transform, download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )
    return loader

# can also return multiple dataloaders
def val_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]
Note

If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.

Note

In the case where you return multiple validation dataloaders, the validation_step() will have an argument dataloader_idx which matches the order here.
property dims
A tuple describing the shape of your data. Extra functionality exposed in size.
property has_prepared_data
Return bool letting you know if datamodule.prepare_data() has been called or not.

Returns: True if datamodule.prepare_data() has been called. False by default.
Return type: bool
property has_setup_fit
Return bool letting you know if datamodule.setup('fit') has been called or not.

Returns: True if datamodule.setup('fit') has been called. False by default.
Return type: bool
property has_setup_test
Return bool letting you know if datamodule.setup('test') has been called or not.

Returns: True if datamodule.setup('test') has been called. False by default.
Return type: bool
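To illustrate the tracking behaviour (a sketch, reusing the MyDataModule skeleton from the top of this page):

dm = MyDataModule()
print(dm.has_prepared_data)   # False

dm.prepare_data()
print(dm.has_prepared_data)   # True

dm.setup('fit')
print(dm.has_setup_fit)       # True
print(dm.has_setup_test)      # False until setup('test') or setup() without a stage is called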
property test_transforms
Optional transforms (or collection of transforms) you can apply to the test dataset.
property train_transforms
Optional transforms (or collection of transforms) you can apply to the train dataset.
property val_transforms
Optional transforms (or collection of transforms) you can apply to the validation dataset.
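A sketch of how these transform properties are typically populated through the constructor arguments shown at the top of this page; the ImageDataModule subclass and the torchvision transform are only illustrative:

from torchvision import transforms
from pytorch_lightning import LightningDataModule

class ImageDataModule(LightningDataModule):
    # hypothetical subclass that forwards the transform kwargs to the base class
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

tfm = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
dm = ImageDataModule(train_transforms=tfm, val_transforms=tfm, test_transforms=tfm)

print(dm.train_transforms)   # the Compose defined above
print(dm.val_transforms)
print(dm.test_transforms)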
pytorch_lightning.core.datamodule.track_data_hook_calls(fn)
A decorator that checks if prepare_data/setup have been called.
When dm.prepare_data() is called, dm.has_prepared_data gets set to True
When dm.setup(‘fit’) is called, dm.has_setup_fit gets set to True
When dm.setup(‘test’) is called, dm.has_setup_test gets set to True
When dm.setup() is called without stage arg, both dm.has_setup_fit and dm.has_setup_test get set to True
Parameters: fn (function) – Function that will be tracked to see if it has been called.

Returns: Decorated function that tracks its call status and saves it to private attrs in its obj instance.

Return type: function