
pytorch_lightning.utilities.device_dtype_mixin module

class pytorch_lightning.utilities.device_dtype_mixin.DeviceDtypeModuleMixin(*args, **kwargs)[source]

Bases: torch.nn.Module

cpu()[source]

Moves all model parameters and buffers to the CPU.

Returns

self

Return type

Module
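Since the method returns self, the call can be chained. A minimal sketch using a plain torch.nn.Linear (assuming the mixin follows standard torch.nn.Module semantics, as the signature above suggests):

```python
import torch

# Build a small module and move it (back) to the CPU; cpu() returns self,
# so the result can be assigned or chained directly.
layer = torch.nn.Linear(4, 2).cpu()

# All parameters now report the CPU device.
assert all(p.device.type == "cpu" for p in layer.parameters())
```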

cuda(device=None)[source]

Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.

Parameters

device (Optional[int]) – if specified, all parameters will be copied to that device

Returns

self

Return type

Module
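A sketch of the ordering requirement described above: move the module first, then construct the optimizer, so the optimizer holds references to the parameters that actually live on the target device. The availability guard is an assumption for machines without a GPU, where cuda() would raise:

```python
import torch

layer = torch.nn.Linear(4, 2)

# Move to the GPU only when one is present; passing an int such as
# layer.cuda(0) would select a specific device.
if torch.cuda.is_available():
    layer = layer.cuda()
    expected = "cuda"
else:
    expected = "cpu"

assert next(layer.parameters()).device.type == expected

# Construct the optimizer only after the move: cuda() replaces parameters
# with different objects, so an optimizer built earlier would point at
# the stale CPU copies.
optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)
```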

double()[source]

Casts all floating point parameters and buffers to double datatype.

Returns

self

Return type

Module

float()[source]

Casts all floating point parameters and buffers to float datatype.

Returns

self

Return type

Module

half()[source]

Casts all floating point parameters and buffers to half datatype.

Returns

self

Return type

Module
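The three casting methods above affect only floating point parameters and buffers; integral buffers keep their dtype. A small sketch with a plain torch.nn.Linear (assuming standard torch.nn.Module casting semantics):

```python
import torch

layer = torch.nn.Linear(4, 2)

# Each cast returns self, so the dtype can be checked inline.
assert next(layer.half().parameters()).dtype is torch.float16
assert next(layer.float().parameters()).dtype is torch.float32
assert next(layer.double().parameters()).dtype is torch.float64

# Integral buffers are left untouched by the floating point casts.
layer.register_buffer("step", torch.tensor(0))
assert layer.half().step.dtype is torch.int64
```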

to(*args, **kwargs)[source]

Moves and/or casts the parameters and buffers.

This can be called as:

to(device=None, dtype=None, non_blocking=False)

to(dtype, non_blocking=False)

to(tensor, non_blocking=False)

Its signature is similar to torch.Tensor.to(), but it only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples.

Note

This method modifies the module in-place.

Parameters
  • device – the desired device of the parameters and buffers in this module

  • dtype – the desired floating point type of the floating point parameters and buffers in this module

  • tensor – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

Returns

self

Return type

Module

Example::
>>> class ExampleModule(DeviceDtypeModuleMixin):
...     def __init__(self, weight: torch.Tensor):
...         super().__init__()
...         self.register_buffer('weight', weight)
>>> _ = torch.manual_seed(0)
>>> module = ExampleModule(torch.rand(3, 4))
>>> module.weight 
tensor([[...]])
>>> module.to(torch.double)
ExampleModule()
>>> module.weight 
tensor([[...]], dtype=torch.float64)
>>> cpu = torch.device('cpu')
>>> module.to(cpu, dtype=torch.half, non_blocking=True)
ExampleModule()
>>> module.weight 
tensor([[...]], dtype=torch.float16)
>>> module.to(cpu)
ExampleModule()
>>> module.weight 
tensor([[...]], dtype=torch.float16)

type(dst_type)[source]

Casts all parameters and buffers to dst_type.

Parameters

dst_type (type or string) – the desired type

Returns

self

Return type

Module
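Unlike half()/float()/double(), which touch only floating point tensors, type() casts all parameters and buffers. A hedged sketch with a plain torch.nn.Linear, assuming the standard torch.nn.Module.type() semantics, where dst_type may be a torch.dtype or a string such as 'torch.FloatTensor':

```python
import torch

layer = torch.nn.Linear(4, 2)

# dst_type as a torch.dtype
layer = layer.type(torch.float16)
assert next(layer.parameters()).dtype is torch.float16

# dst_type as a string
layer = layer.type("torch.FloatTensor")
assert next(layer.parameters()).dtype is torch.float32
```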

_device: ... = None[source]
_dtype: Union[str, torch.dtype] = None[source]
property device[source]

Return type

Union[str, device]

property dtype[source]

Return type

Union[str, dtype]
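These properties let callers query a module's device and dtype without inspecting its parameters, because the mixin updates _device and _dtype inside the overridden casting methods. The following is an illustrative sketch of that idea in plain torch, not the library's implementation; the class name and the simple argument inspection in to() are assumptions:

```python
import torch


class DeviceDtypeTracker(torch.nn.Module):
    """Illustrative sketch: track device/dtype alongside to()-style calls."""

    def __init__(self):
        super().__init__()
        self._device = torch.device("cpu")
        self._dtype = torch.float32

    @property
    def device(self):
        return self._device

    @property
    def dtype(self):
        return self._dtype

    def to(self, *args, **kwargs):
        # Simplified parsing: record any dtype or device found among the
        # arguments, then defer the actual move/cast to torch.nn.Module.
        for arg in list(args) + list(kwargs.values()):
            if isinstance(arg, torch.dtype):
                self._dtype = arg
            elif isinstance(arg, (str, torch.device)):
                self._device = torch.device(arg)
        return super().to(*args, **kwargs)


m = DeviceDtypeTracker()
m.to(torch.double)
assert m.dtype is torch.float64
assert m.device.type == "cpu"
```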