pytorch_lightning.trainer.data_loading module

class pytorch_lightning.trainer.data_loading.TrainerDataLoadingMixin[source]

Bases: abc.ABC

_check_batch_limits(name)[source]
Return type

None

_get_distributed_sampler(dataloader)[source]
_reset_eval_dataloader(model, mode)[source]

Generic method to reset a dataloader for evaluation.

Parameters

model (LightningModule) – The current LightningModule

mode (str) – Either 'val' or 'test'

Return type

Tuple[List[Union[int, float]], List[DataLoader]]

Returns

Tuple (num_batches, dataloaders)
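
The two lists are parallel, with one entry per evaluation dataloader. A purely illustrative sketch of the return shape, using made-up stand-in loaders rather than a real call to this internal method:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-ins for what the mixin would return for two val dataloaders:
    dataloaders = [
        DataLoader(TensorDataset(torch.zeros(8, 1)), batch_size=2),
        DataLoader(TensorDataset(torch.zeros(4, 1)), batch_size=2),
    ]
    num_batches = [len(dl) for dl in dataloaders]  # [4, 2], one limit per loader
    for limit, loader in zip(num_batches, dataloaders):
        pass  # evaluation would consume at most `limit` batches from `loader`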

_worker_check(dataloader, name)[source]
Return type

None
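
A minimal sketch of what such a guard might look like; the function name, threshold, and warning text here are assumptions, not the library's exact behavior:

    import warnings
    from torch.utils.data import DataLoader

    def worker_check_sketch(dataloader: DataLoader, name: str) -> None:
        # Hypothetical guard: warn when a loader uses few workers,
        # since data loading can then bottleneck training.
        if dataloader.num_workers <= 2:
            warnings.warn(
                f"The dataloader '{name}' has num_workers="
                f"{dataloader.num_workers}, which may be a bottleneck. "
                "Consider increasing num_workers."
            )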

auto_add_sampler(dataloader, train)[source]
Return type

DataLoader
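
A rough sketch of what adding a sampler for distributed training involves. The function name and rebuild logic are assumptions; the real method also covers TPU/Horovod cases, respects replace_sampler_ddp, and preserves more loader attributes:

    import torch.distributed as dist
    from torch.utils.data import DataLoader
    from torch.utils.data.distributed import DistributedSampler

    def auto_add_sampler_sketch(dataloader: DataLoader, train: bool) -> DataLoader:
        # Hypothetical: only rewrap when actually running distributed.
        if not (dist.is_available() and dist.is_initialized()):
            return dataloader
        sampler = DistributedSampler(
            dataloader.dataset,
            num_replicas=dist.get_world_size(),
            rank=dist.get_rank(),
            shuffle=train,  # shuffle only the training data
        )
        # DataLoader samplers cannot be swapped in place, so rebuild the loader.
        return DataLoader(
            dataloader.dataset,
            batch_size=dataloader.batch_size,
            sampler=sampler,
            num_workers=dataloader.num_workers,
            drop_last=dataloader.drop_last,
        )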

determine_data_use_amount(overfit_batches)[source]

Use less data for debugging purposes.

Return type

None
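
In user code this is driven by the overfit_batches flag on the Trainer. For example, to debug by training on 1% of the data (a usage sketch; `model` stands for a LightningModule you have defined):

    from pytorch_lightning import Trainer

    # Use only 1% of the training data to check the model can overfit it;
    # an int value (e.g. overfit_batches=10) would mean 10 batches instead.
    trainer = Trainer(overfit_batches=0.01)
    # trainer.fit(model)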

abstract is_overridden(*args)[source]

Warning: this is just an empty shell for code implemented in another class.

replace_sampler(dataloader, sampler)[source]
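
A DataLoader's sampler cannot be swapped after construction, so replacing one means rebuilding the loader. A simplified sketch under that assumption; the real method copies more of the loader's attributes:

    from torch.utils.data import DataLoader, Sampler

    def replace_sampler_sketch(dataloader: DataLoader, sampler: Sampler) -> DataLoader:
        # Hypothetical rebuild: carry over the common constructor arguments
        # and substitute the new sampler.
        return DataLoader(
            dataloader.dataset,
            batch_size=dataloader.batch_size,
            sampler=sampler,
            num_workers=dataloader.num_workers,
            collate_fn=dataloader.collate_fn,
            pin_memory=dataloader.pin_memory,
            drop_last=dataloader.drop_last,
        )
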
request_dataloader(dataloader_fx)[source]

Handles downloading data in the GPU or TPU case.

Parameters

dataloader_fx (Callable) – The bound dataloader getter

Return type

DataLoader

Returns

The dataloader
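
The argument is the bound hook itself (e.g. model.train_dataloader), not the loader it returns; deferring the call lets the trainer decide when each process actually builds or downloads its data. A sketch of the pattern, with an assumed function name:

    from typing import Callable
    from torch.utils.data import DataLoader

    def request_dataloader_sketch(dataloader_fx: Callable[[], DataLoader]) -> DataLoader:
        # Hypothetical: invoke the bound getter lazily, so each process can
        # build (and, on GPU/TPU clusters, download) its data when the
        # trainer is ready rather than at module construction time.
        return dataloader_fx()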

reset_test_dataloader(model)[source]

Resets the test dataloader and determines the number of batches.

Parameters

model (LightningModule) – The current LightningModule

Return type

None

reset_train_dataloader(model)[source]

Resets the train dataloader and initialises required variables (number of batches, when to validate, etc.).

Parameters

model (LightningModule) – The current LightningModule

Return type

None
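
One of the "required variables" is val_check_batch, derived from val_check_interval. A sketch of that arithmetic, assuming val_check_interval is given as a fraction of an epoch (the function name is mine):

    def compute_val_check_batch(num_training_batches: int,
                                val_check_interval: float) -> int:
        # Hypothetical reconstruction: validate every `val_check_interval`
        # fraction of the training epoch, at least once per epoch.
        return max(1, int(num_training_batches * val_check_interval))

    # e.g. 500 training batches with val_check_interval=0.25
    # -> run validation every 125 training batches
    assert compute_val_check_batch(500, 0.25) == 125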

reset_val_dataloader(model)[source]

Resets the validation dataloader and determines the number of batches.

Parameters

model (LightningModule) – The current LightningModule

Return type

None

distributed_backend: Optional[str] = None[source]
global_rank: int = None[source]
limit_test_batches: Union[int, float] = None[source]
limit_train_batches: Union[int, float] = None[source]
limit_val_batches: Union[int, float] = None[source]
num_nodes: int = None[source]
num_processes: int = None[source]
num_test_batches: List[Union[int, float]] = None[source]
num_training_batches: Union[int, float] = None[source]
num_val_batches: List[Union[int, float]] = None[source]
replace_sampler_ddp: bool = None[source]
shown_warnings: ... = None[source]
test_dataloaders: List[DataLoader] = None[source]
tpu_local_core_rank: int = None[source]
train_dataloader: DataLoader = None[source]
use_ddp: bool = None[source]
use_ddp2: bool = None[source]
use_horovod: bool = None[source]
use_tpu: bool = None[source]
val_check_batch: ... = None[source]
val_check_interval: float = None[source]
val_dataloaders: List[DataLoader] = None[source]
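
The limit_*_batches attributes accept either an int (an absolute number of batches) or a float (a fraction of the dataloader). A sketch of how such a limit translates into a batch count; the helper name is an assumption:

    from typing import Union

    def apply_batch_limit(total_batches: int, limit: Union[int, float]) -> int:
        # Floats are interpreted as a fraction of the loader,
        # ints as an absolute cap on the number of batches.
        if isinstance(limit, float):
            return int(total_batches * limit)
        return min(total_batches, limit)

    assert apply_batch_limit(1000, 0.1) == 100  # use 10% of the batches
    assert apply_batch_limit(1000, 25) == 25    # use exactly 25 batches
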
pytorch_lightning.trainer.data_loading._has_iterable_dataset(dataloader)[source]
pytorch_lightning.trainer.data_loading._has_len(dataloader)[source]

Checks whether a given DataLoader has the __len__ method implemented, i.e. whether it is a finite or an infinite dataloader.

Return type

bool
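
A minimal sketch of such a check (the function name is mine; the real helper may guard additional edge cases):

    from torch.utils.data import DataLoader

    def has_len_sketch(dataloader: DataLoader) -> bool:
        # Iterable-style/infinite loaders raise TypeError from len();
        # finite loaders report their number of batches.
        try:
            len(dataloader)
            return True
        except TypeError:
            return False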