MetaMultiheadAttention
torchmeta.modules.MetaMultiheadAttention(*args, **kwargs)
Notes
See: torch.nn.MultiheadAttention
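A minimal usage sketch, assuming the layer follows the usual MetaModule convention of an optional params dictionary in forward (falling back to its own parameters when params is None):

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaMultiheadAttention

    # Self-attention over a (seq_len, batch, embed_dim) input.
    attn = MetaMultiheadAttention(embed_dim=16, num_heads=4)
    x = torch.randn(5, 2, 16)

    out, weights = attn(x, x, x)  # uses the module's own parameters
    # Assumption: an explicit parameter dictionary can be passed instead.
    out, weights = attn(x, x, x, params=OrderedDict(attn.named_parameters()))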
MetaBatchNorm1d
torchmeta.modules.MetaBatchNorm1d(num_features, eps=1e-05, momentum=0.1,
affine=True, track_running_stats=True)
Notes
See: torch.nn.BatchNorm1d
MetaBatchNorm2d
torchmeta.modules.MetaBatchNorm2d(num_features, eps=1e-05, momentum=0.1,
affine=True, track_running_stats=True)
Notes
See: torch.nn.BatchNorm2d
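A minimal sketch of the functional-style call shared by the meta batch-normalization layers; MetaBatchNorm1d and MetaBatchNorm3d work the same way on 1-d and 3-d inputs:

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaBatchNorm2d

    bn = MetaBatchNorm2d(num_features=8)
    x = torch.randn(4, 8, 32, 32)

    # The affine parameters (weight, bias) can be overridden through params;
    # the running statistics remain buffers on the module itself.
    adapted = OrderedDict(bn.meta_named_parameters())
    y = bn(x, params=adapted)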
MetaBatchNorm3d
torchmeta.modules.MetaBatchNorm3d(num_features, eps=1e-05, momentum=0.1,
affine=True, track_running_stats=True)
Notes
See: torch.nn.BatchNorm3d
MetaSequential
torchmeta.modules.MetaSequential(*args:Any)
Notes
See: torch.nn.Sequential
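A minimal sketch: MetaSequential routes the relevant slice of a params dictionary to each MetaModule child, while plain torch.nn children (here nn.ReLU) are called as usual:

    import torch
    import torch.nn as nn
    from collections import OrderedDict
    from torchmeta.modules import MetaSequential, MetaLinear

    net = MetaSequential(
        MetaLinear(10, 20),
        nn.ReLU(),
        MetaLinear(20, 1),
    )

    x = torch.randn(3, 10)
    params = OrderedDict(net.meta_named_parameters())  # keys like '0.weight', '2.bias'
    y = net(x, params=params)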
MetaConv1d
torchmeta.modules.MetaConv1d(in_channels:int, out_channels:int,
kernel_size:Union[int, Tuple[int]], stride:Union[int, Tuple[int]]=1,
padding:Union[int, Tuple[int]]=0, dilation:Union[int, Tuple[int]]=1,
groups:int=1, bias:bool=True, padding_mode:str='zeros')
Notes
See: torch.nn.Conv1d
MetaConv2d
torchmeta.modules.MetaConv2d(in_channels:int, out_channels:int,
kernel_size:Union[int, Tuple[int, int]], stride:Union[int, Tuple[int,
int]]=1, padding:Union[int, Tuple[int, int]]=0, dilation:Union[int,
Tuple[int, int]]=1, groups:int=1, bias:bool=True,
padding_mode:str='zeros')
Notes
See: torch.nn.Conv2d
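A minimal sketch, identical in interface to torch.nn.Conv2d except for the optional params dictionary; MetaConv1d and MetaConv3d follow the same pattern:

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaConv2d

    conv = MetaConv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
    x = torch.randn(2, 3, 28, 28)

    params = OrderedDict(conv.meta_named_parameters())  # 'weight' and 'bias'
    y = conv(x, params=params)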
MetaConv3d
torchmeta.modules.MetaConv3d(in_channels:int, out_channels:int,
kernel_size:Union[int, Tuple[int, int, int]], stride:Union[int, Tuple[int,
int, int]]=1, padding:Union[int, Tuple[int, int, int]]=0,
dilation:Union[int, Tuple[int, int, int]]=1, groups:int=1, bias:bool=True,
padding_mode:str='zeros')
Notes
See: torch.nn.Conv3d
MetaLinear
torchmeta.modules.MetaLinear(in_features:int, out_features:int,
bias:bool=True) -> None
Notes
See: torch.nn.Linear
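A minimal MAML-style sketch showing why the params argument matters: an adapted parameter dictionary is built with create_graph=True, and the outer loss backpropagates through the inner update (the 0.1 step size is arbitrary):

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaLinear

    layer = MetaLinear(4, 1)
    x, y = torch.randn(8, 4), torch.randn(8, 1)

    params = OrderedDict(layer.meta_named_parameters())
    inner_loss = ((layer(x) - y) ** 2).mean()
    grads = torch.autograd.grad(inner_loss, params.values(), create_graph=True)
    adapted = OrderedDict(
        (name, p - 0.1 * g) for (name, p), g in zip(params.items(), grads)
    )

    outer_loss = ((layer(x, params=adapted) - y) ** 2).mean()
    outer_loss.backward()  # gradients reach the layer's original parameters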
MetaBilinear
torchmeta.modules.MetaBilinear(in1_features:int, in2_features:int,
out_features:int, bias:bool=True) -> None
Notes
See: torch.nn.Bilinear
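A minimal sketch of the bilinear map of two inputs, with the optional params dictionary:

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaBilinear

    bilinear = MetaBilinear(in1_features=6, in2_features=4, out_features=3)
    x1, x2 = torch.randn(5, 6), torch.randn(5, 4)

    params = OrderedDict(bilinear.meta_named_parameters())
    y = bilinear(x1, x2, params=params)  # shape (5, 3)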
MetaModule
Base class for PyTorch meta-learning modules. These modules accept an additional argument params in their forward method.
torchmeta.modules.MetaModule()
Notes
Classes that inherit from MetaModule are fully compatible with PyTorch modules from torch.nn.Module. The argument params is a dictionary of tensors, with full support of the computation graph (for differentiation).
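A minimal sketch of a custom MetaModule. It assumes the get_subdict helper, which filters a params dictionary by a child module's prefix (e.g. 'classifier.weight' -> 'weight'); when params is None, every layer falls back to its own parameters:

    import torch
    import torch.nn as nn
    from collections import OrderedDict
    from torchmeta.modules import MetaModule, MetaSequential, MetaLinear

    class MLP(MetaModule):
        def __init__(self, in_features, hidden, out_features):
            super().__init__()
            self.features = MetaSequential(
                MetaLinear(in_features, hidden),
                nn.ReLU(),
            )
            self.classifier = MetaLinear(hidden, out_features)

        def forward(self, inputs, params=None):
            h = self.features(inputs, params=self.get_subdict(params, 'features'))
            return self.classifier(h, params=self.get_subdict(params, 'classifier'))

    model = MLP(10, 32, 5)
    logits = model(torch.randn(4, 10))  # module's own parameters
    params = OrderedDict(model.meta_named_parameters())
    logits = model(torch.randn(4, 10), params=params)  # explicit dictionary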
MetaLayerNorm
torchmeta.modules.MetaLayerNorm(normalized_shape:Union[int, List[int],
torch.Size], eps:float=1e-05, elementwise_affine:bool=True) -> None
Notes
See: torch.nn.LayerNorm
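A minimal sketch, mirroring torch.nn.LayerNorm with the optional params dictionary supplying the elementwise affine parameters:

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaLayerNorm

    ln = MetaLayerNorm(normalized_shape=16)
    x = torch.randn(4, 10, 16)

    y = ln(x, params=OrderedDict(ln.meta_named_parameters()))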
DataParallel
torchmeta.modules.DataParallel(module, device_ids=None, output_device=None,
dim=0)
Notes
See: torch.nn.DataParallel
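A hedged sketch, assuming this wrapper plays the role of torch.nn.DataParallel for MetaModules, replicating the wrapped module and scattering the inputs across GPUs:

    import torch
    from torchmeta.modules import DataParallel, MetaLinear

    model = MetaLinear(10, 5)
    if torch.cuda.device_count() > 1:
        model = DataParallel(model).cuda()
        # Assumption: a params dictionary, when given, is scattered alongside the inputs.
        y = model(torch.randn(4, 10).cuda())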
MetaEmbedding
torchmeta.modules.MetaEmbedding(num_embeddings:int, embedding_dim:int,
padding_idx:Union[int, NoneType]=None, max_norm:Union[float,
NoneType]=None, norm_type:float=2.0, scale_grad_by_freq:bool=False,
sparse:bool=False, _weight:Union[torch.Tensor, NoneType]=None) -> None
Notes
See: torch.nn.Embedding
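A minimal sketch of an index lookup as in torch.nn.Embedding, with the embedding matrix optionally supplied through params (key 'weight'):

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaEmbedding

    emb = MetaEmbedding(num_embeddings=100, embedding_dim=16)
    tokens = torch.randint(0, 100, (2, 7))

    params = OrderedDict(emb.meta_named_parameters())
    vectors = emb(tokens, params=params)  # shape (2, 7, 16)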
MetaEmbeddingBag
torchmeta.modules.MetaEmbeddingBag(num_embeddings:int, embedding_dim:int,
max_norm:Union[float, NoneType]=None, norm_type:float=2.0,
scale_grad_by_freq:bool=False, mode:str='mean', sparse:bool=False,
_weight:Union[torch.Tensor, NoneType]=None,
include_last_offset:bool=False) -> None
Notes
See: torch.nn.EmbeddingBag
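A minimal sketch pooling bags of embeddings as in torch.nn.EmbeddingBag, where offsets splits the flat index tensor into bags:

    import torch
    from collections import OrderedDict
    from torchmeta.modules import MetaEmbeddingBag

    bag = MetaEmbeddingBag(num_embeddings=100, embedding_dim=16, mode='mean')
    indices = torch.tensor([3, 14, 15, 92, 65])
    offsets = torch.tensor([0, 2])  # bag 0: indices[0:2], bag 1: indices[2:]

    params = OrderedDict(bag.meta_named_parameters())
    pooled = bag(indices, offsets, params=params)  # shape (2, 16)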