cvnets.layers.normalization package
Submodules
cvnets.layers.normalization.batch_norm module
- class cvnets.layers.normalization.batch_norm.BatchNorm2d(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: BatchNorm2d
Applies Batch Normalization over a 4D input tensor
- Parameters:
num_features (int) – \(C\) from an expected input of size \((N, C, H, W)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
momentum (Optional, float) – Value used for the running_mean and running_var computation. Default: 0.1
affine (bool) – If True, use learnable affine parameters. Default: True
track_running_stats (bool) – If True, tracks running mean and variance. Default: True
- Shape:
Input: \((N, C, H, W)\) where \(N\) is the batch size, \(C\) is the number of input channels, \(H\) is the input height, and \(W\) is the input width
Output: same shape as the input
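Example: a minimal usage sketch (assuming torch and cvnets are installed; the constructor arguments follow the signature above):

    import torch
    from cvnets.layers.normalization.batch_norm import BatchNorm2d

    # 4D input: batch of 2 samples, 16 channels, 8x8 spatial resolution
    x = torch.randn(2, 16, 8, 8)

    bn = BatchNorm2d(num_features=16, eps=1e-5, momentum=0.1)
    y = bn(x)
    assert y.shape == x.shape  # output shape matches the input shape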
- class cvnets.layers.normalization.batch_norm.BatchNorm2dFP32(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: BatchNorm2d
Applies Batch Normalization over a 4D input tensor in FP32
- __init__(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs) None [source]
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor) Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
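The FP32 variant is intended for mixed-precision training, where batch statistics computed in reduced precision can overflow or underflow. A minimal sketch (that the result is cast back to the input dtype is an assumption based on the class description, not a confirmed detail):

    import torch
    from cvnets.layers.normalization.batch_norm import BatchNorm2dFP32

    bn_fp32 = BatchNorm2dFP32(num_features=16)

    # Half-precision activations, as produced under torch.autocast
    x = torch.randn(2, 16, 8, 8, dtype=torch.float16)

    # Statistics are computed internally in FP32, avoiding overflow in
    # the variance computation
    y = bn_fp32(x)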
- class cvnets.layers.normalization.batch_norm.BatchNorm1d(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: BatchNorm1d
Applies Batch Normalization over a 2D or 3D input tensor
- Parameters:
num_features (int) – \(C\) from an expected input of size \((N, C)\) or \((N, C, L)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
momentum (Optional, float) – Value used for the running_mean and running_var computation. Default: 0.1
affine (bool) – If True, use learnable affine parameters. Default: True
track_running_stats (bool) – If True, tracks running mean and variance. Default: True
- Shape:
Input: \((N, C)\) or \((N, C, L)\) where \(N\) is the batch size, \(C\) is the number of input channels, and \(L\) is the sequence length
Output: same shape as the input
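Example: the same layer accepts both \((N, C)\) and \((N, C, L)\) inputs (a minimal sketch):

    import torch
    from cvnets.layers.normalization.batch_norm import BatchNorm1d

    bn = BatchNorm1d(num_features=32)

    y2d = bn(torch.randn(4, 32))       # 2D input: (N, C)
    y3d = bn(torch.randn(4, 32, 100))  # 3D input: (N, C, L)

    assert y2d.shape == (4, 32) and y3d.shape == (4, 32, 100)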
- class cvnets.layers.normalization.batch_norm.BatchNorm3d(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: BatchNorm3d
- __init__(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs) None [source]
Applies Batch Normalization over a 5D input tensor
- Parameters:
num_features (int) – \(C\) from an expected input of size \((N, C, D, H, W)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
momentum (Optional, float) – Value used for the running_mean and running_var computation. Default: 0.1
affine (bool) – If True, use learnable affine parameters. Default: True
track_running_stats (bool) – If True, tracks running mean and variance. Default: True
- Shape:
Input: \((N, C, D, H, W)\) where \(N\) is the batch size, \(C\) is the number of input channels, \(D\) is the input depth, \(H\) is the input height, and \(W\) is the input width
Output: same shape as the input
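Example: a minimal sketch for volumetric or video inputs:

    import torch
    from cvnets.layers.normalization.batch_norm import BatchNorm3d

    bn = BatchNorm3d(num_features=8)

    # 5D input: (N, C, D, H, W), e.g., a batch of short video clips
    x = torch.randn(2, 8, 4, 16, 16)
    y = bn(x)
    assert y.shape == x.shape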
cvnets.layers.normalization.group_norm module
- class cvnets.layers.normalization.group_norm.GroupNorm(num_groups: int, num_features: int, eps: float | None = 1e-05, affine: bool | None = True, *args, **kwargs)[source]
Bases: GroupNorm
Applies Group Normalization over an input tensor
- Parameters:
num_groups (int) – number of groups to separate the input channels into
num_features (int) – \(C\) from an expected input of size \((N, C, *)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
affine (bool) – If True, use learnable affine parameters. Default: True
- Shape:
Input: \((N, C, *)\) where \(N\) is the batch size, \(C\) is the number of input channels, and \(*\) is the remaining dimensions of the input tensor
Output: same shape as the input
Note
GroupNorm is the same as LayerNorm when num_groups=1 and it is the same as InstanceNorm when num_groups=C.
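The note above can be verified numerically; the sketch below compares this GroupNorm (whose affine parameters are initialized to identity) against the corresponding torch.nn layers with affine parameters disabled:

    import torch
    from cvnets.layers.normalization.group_norm import GroupNorm

    x = torch.randn(2, 6, 8, 8)

    # num_groups == num_features: per-channel statistics, i.e., InstanceNorm
    gn_instance = GroupNorm(num_groups=6, num_features=6)
    inorm = torch.nn.InstanceNorm2d(6, affine=False)
    assert torch.allclose(gn_instance(x), inorm(x), atol=1e-5)

    # num_groups == 1: joint statistics over (C, H, W) for each sample,
    # i.e., LayerNorm over all non-batch dimensions
    gn_layer = GroupNorm(num_groups=1, num_features=6)
    ln = torch.nn.LayerNorm(x.shape[1:], elementwise_affine=False)
    assert torch.allclose(gn_layer(x), ln(x), atol=1e-5)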
cvnets.layers.normalization.instance_norm module
- class cvnets.layers.normalization.instance_norm.InstanceNorm2d(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: InstanceNorm2d
Applies Instance Normalization over a 4D input tensor
- Parameters:
num_features (int) – \(C\) from an expected input of size \((N, C, H, W)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
momentum (Optional, float) – Value used for the running_mean and running_var computation. Default: 0.1
affine (bool) – If True, use learnable affine parameters. Default: True
track_running_stats (bool) – If True, tracks running mean and variance. Default: True
- Shape:
Input: \((N, C, H, W)\) where \(N\) is the batch size, \(C\) is the number of input channels, \(H\) is the input height, and \(W\) is the input width
Output: same shape as the input
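Example: statistics are computed per sample and per channel, independent of the rest of the batch (a minimal sketch):

    import torch
    from cvnets.layers.normalization.instance_norm import InstanceNorm2d

    inorm = InstanceNorm2d(num_features=3)
    x = torch.randn(4, 3, 32, 32)
    y = inorm(x)
    assert y.shape == x.shape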
- class cvnets.layers.normalization.instance_norm.InstanceNorm1d(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: InstanceNorm1d
Applies Instance Normalization over a 2D or 3D input tensor
- Parameters:
num_features (int) – \(C\) from an expected input of size \((N, C)\) or \((N, C, L)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
momentum (Optional, float) – Value used for the running_mean and running_var computation. Default: 0.1
affine (bool) – If True, use learnable affine parameters. Default: True
track_running_stats (bool) – If True, tracks running mean and variance. Default: True
- Shape:
Input: \((N, C)\) or \((N, C, L)\) where \(N\) is the batch size, \(C\) is the number of input channels, and \(L\) is the sequence length
Output: same shape as the input
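Example: a minimal sketch for sequence inputs:

    import torch
    from cvnets.layers.normalization.instance_norm import InstanceNorm1d

    inorm = InstanceNorm1d(num_features=16)

    # (N, C, L): each channel of each sample is normalized independently
    y = inorm(torch.randn(2, 16, 50))
    assert y.shape == (2, 16, 50)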
cvnets.layers.normalization.layer_norm module
- class cvnets.layers.normalization.layer_norm.LayerNorm(normalized_shape: int | List[int] | Size, eps: float | None = 1e-05, elementwise_affine: bool | None = True, *args, **kwargs)[source]
Bases: LayerNorm
Applies Layer Normalization over an input tensor
- Parameters:
normalized_shape (int or list or torch.Size) –
input shape from an expected input of size
\[[* \times \text{normalized\_shape}[0] \times \text{normalized\_shape}[1] \times \ldots \times \text{normalized\_shape}[-1]]\]
If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that specific size.
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine (bool) – If True, use learnable affine parameters. Default: True
- Shape:
Input: \((N, *)\) where \(N\) is the batch size
Output: same shape as the input
- __init__(normalized_shape: int | List[int] | Size, eps: float | None = 1e-05, elementwise_affine: bool | None = True, *args, **kwargs)[source]
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor) Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
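Example: the common single-integer form of normalized_shape, which normalizes over the last dimension (a minimal sketch; per the parameter description, a list of trailing dimension sizes is also accepted):

    import torch
    from cvnets.layers.normalization.layer_norm import LayerNorm

    x = torch.randn(2, 10, 64)  # e.g., (batch, tokens, embedding_dim)

    # A single integer normalizes over the last dimension, which must
    # have exactly that size
    ln = LayerNorm(normalized_shape=64)
    y = ln(x)
    assert y.shape == x.shape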
- class cvnets.layers.normalization.layer_norm.LayerNorm2D_NCHW(num_features: int, eps: float | None = 1e-05, elementwise_affine: bool | None = True, *args, **kwargs)[source]
Bases: GroupNorm
Applies Layer Normalization over a 4D input tensor
- Parameters:
num_features (int) – \(C\) from an expected input of size \((N, C, H, W)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine (bool) – If True, use learnable affine parameters. Default: True
- Shape:
Input: \((N, C, H, W)\) where \(N\) is the batch size, \(C\) is the number of input channels, \(H\) is the input height, and \(W\) is the input width
Output: same shape as the input
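Because the class is built on GroupNorm, it consumes channel-first (NCHW) tensors directly, avoiding the permutes that torch.nn.LayerNorm would require (a minimal sketch):

    import torch
    from cvnets.layers.normalization.layer_norm import LayerNorm2D_NCHW

    ln2d = LayerNorm2D_NCHW(num_features=32)
    x = torch.randn(2, 32, 7, 7)  # channel-first feature map
    y = ln2d(x)
    assert y.shape == x.shape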
- class cvnets.layers.normalization.layer_norm.LayerNormFP32(normalized_shape: int | List[int] | Size, eps: float | None = 1e-05, elementwise_affine: bool | None = True, *args, **kwargs)[source]
Bases: LayerNorm
Applies Layer Normalization over an input tensor with FP32 precision
- __init__(normalized_shape: int | List[int] | Size, eps: float | None = 1e-05, elementwise_affine: bool | None = True, *args, **kwargs)[source]
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor) Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
cvnets.layers.normalization.sync_batch_norm module
- class cvnets.layers.normalization.sync_batch_norm.SyncBatchNorm(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: SyncBatchNorm
Applies Synchronized Batch Normalization over the input tensor
- Parameters:
num_features (int) – \(C\) from an expected input of size \((N, C, *)\)
eps (Optional, float) – Value added to the denominator for numerical stability. Default: 1e-5
momentum (Optional, float) – Value used for the running_mean and running_var computation. Default: 0.1
affine (bool) – If True, use learnable affine parameters. Default: True
track_running_stats (bool) – If True, tracks running mean and variance. Default: True
- Shape:
Input: \((N, C, *)\) where \(N\) is the batch size, \(C\) is the number of input channels, and \(*\) is the remaining input dimensions
Output: same shape as the input
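Synchronized batch normalization averages batch statistics across all processes in a distributed job, so it is only meaningful under distributed data parallel training. A hedged sketch (process-group setup via a launcher such as torchrun is omitted, and synchronization requires CUDA tensors):

    import torch
    import torch.distributed as dist
    from cvnets.layers.normalization.sync_batch_norm import SyncBatchNorm

    sync_bn = SyncBatchNorm(num_features=64)

    # Only exercise the layer once a process group has been initialized,
    # e.g., via dist.init_process_group("nccl", ...) under torchrun
    if dist.is_available() and dist.is_initialized():
        x = torch.randn(8, 64, 14, 14, device="cuda")
        y = sync_bn.cuda()(x)  # statistics are averaged across processes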
- class cvnets.layers.normalization.sync_batch_norm.SyncBatchNormFP32(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs)[source]
Bases: SyncBatchNorm
Applies Synchronized Batch Normalization over the input tensor in FP32 precision
- __init__(num_features: int, eps: float | None = 1e-05, momentum: float | None = 0.1, affine: bool | None = True, track_running_stats: bool | None = True, *args, **kwargs) None [source]
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor, *args, **kwargs) Tensor [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Module contents
- cvnets.layers.normalization.build_normalization_layer(opts: Namespace, num_features: int, norm_type: str | None = None, num_groups: int | None = None, momentum: float | None = None) Module [source]
Helper function to build the normalization layer. The function can be used in either of the following ways:
Scenario 1: Set the default normalization layer using command-line arguments. This is useful when the same normalization layer is used for the entire network (e.g., ResNet).
Scenario 2: The network uses different normalization layers. In that case, we can override the default normalization layer by specifying its name via the norm_type argument, as in the sketch below.
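A hypothetical usage sketch covering both scenarios; the dotted option keys and the norm_type strings below are illustrative assumptions, since the actual names are defined by cvnets' argument parser and layer registry:

    from argparse import Namespace
    from cvnets.layers.normalization import build_normalization_layer

    opts = Namespace()
    # Hypothetical option keys; consult the cvnets option parser for the
    # real names
    setattr(opts, "model.normalization.name", "batch_norm")
    setattr(opts, "model.normalization.groups", 1)
    setattr(opts, "model.normalization.momentum", 0.1)

    # Scenario 1: build the default normalization layer from the options
    norm = build_normalization_layer(opts, num_features=64)

    # Scenario 2: override the default for a specific block
    gn = build_normalization_layer(
        opts, num_features=64, norm_type="group_norm", num_groups=8
    )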