normalization – Normalization Layers¶
Extended Normalization Layers¶
class neuralnet_pytorch.layers.BatchNorm1d(input_shape, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, activation=None, no_scale=False, **kwargs)[source]¶
Performs batch normalization on 1D signals.
Parameters:
- input_shape – shape of the input tensor. If an integer is passed, it is treated as the size of each input sample.
- eps – a value added to the denominator for numerical stability. Default: 1e-5.
- momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1.
- affine – a boolean value that, when set to True, gives this module learnable affine parameters. Default: True.
- track_running_stats – a boolean value that, when set to True, makes this module track the running mean and variance; when set to False, the module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True.
- activation – non-linear function to activate the linear result. It accepts any callable function as well as a recognizable str. A list of possible str is in function.
- no_scale (bool) – when set to True, the trainable scale parameter is disabled. Default: False.
- kwargs – extra keyword arguments to pass to activation.
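As a rough sketch of what such a layer computes in training mode (per-feature batch statistics, eps inside the square root, an optional affine scale and shift), here is the batch-norm formula in plain NumPy. The function name and signature are illustrative, not the neuralnet_pytorch API:

```python
import numpy as np

def batch_norm_1d(x, eps=1e-5, gamma=None, beta=None):
    # x: (batch, features). Statistics are taken over the batch axis,
    # giving one mean/variance per feature, as batch normalization prescribes.
    mean = x.mean(axis=0)
    var = x.var(axis=0)              # biased variance, as used in the forward pass
    y = (x - mean) / np.sqrt(var + eps)
    if gamma is not None:            # affine=True: learnable scale and shift
        y = gamma * y + beta
    return y

x = np.random.randn(8, 4) * 3.0 + 2.0
y = batch_norm_1d(x)                 # per-feature mean ~0, variance ~1
```

In the real layer, `momentum` additionally blends each batch's statistics into running_mean/running_var, which are then used instead of batch statistics at eval time.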
class neuralnet_pytorch.layers.BatchNorm2d(input_shape, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, activation=None, no_scale=False, **kwargs)[source]¶
Performs batch normalization on 2D signals.
Parameters:
- input_shape – shape of the 4D input image. If a single integer is passed, it is treated as the number of input channels and other sizes are unknown.
- eps – a value added to the denominator for numerical stability. Default: 1e-5.
- momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1.
- affine – a boolean value that, when set to True, gives this module learnable affine parameters. Default: True.
- track_running_stats – a boolean value that, when set to True, makes this module track the running mean and variance; when set to False, the module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True.
- activation – non-linear function to activate the linear result. It accepts any callable function as well as a recognizable str. A list of possible str is in function.
- no_scale (bool) – when set to True, the trainable scale parameter is disabled. Default: False.
- kwargs – extra keyword arguments to pass to activation.
class neuralnet_pytorch.layers.LayerNorm(input_shape, eps=1e-05, elementwise_affine=True, activation=None, **kwargs)[source]¶
Performs layer normalization on input tensor.
Parameters:
- input_shape – input shape from an expected input of size
\[[\text{input_shape}[0] \times \text{input_shape}[1] \times \ldots \times \text{input_shape}[-1]]\]
If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that specific size.
- eps – a value added to the denominator for numerical stability. Default: 1e-5.
- elementwise_affine – a boolean value that, when set to True, gives this module learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
- activation – non-linear function to activate the linear result. It accepts any callable function as well as a recognizable str. A list of possible str is in function.
- kwargs – extra keyword arguments to pass to activation.
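The reduction described above can be sketched in NumPy: statistics are computed over the trailing dimensions given by input_shape, here just the last dimension, matching the single-integer case. This is an illustration of the formula, not the library's implementation:

```python
import numpy as np

def layer_norm_last(x, eps=1e-5):
    # Normalize over the last dimension only: one mean/variance per
    # (batch, position) slice -- the single-integer input_shape case.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(2, 5, 16)
y = layer_norm_last(x)   # each length-16 slice now has mean ~0, variance ~1
```

Unlike batch normalization, no statistic crosses sample boundaries, so the layer behaves identically in training and eval modes and needs no running averages.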
class neuralnet_pytorch.layers.InstanceNorm1d(input_shape, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False, activation=None, **kwargs)[source]¶
Performs instance normalization on 1D signals.
Parameters:
- input_shape – shape of the input tensor. If an integer is passed, it is treated as the size of each input sample.
- eps – a value added to the denominator for numerical stability. Default: 1e-5.
- momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1.
- affine – a boolean value that, when set to True, gives this module learnable affine parameters. Default: True.
- track_running_stats – a boolean value that, when set to True, makes this module track the running mean and variance; when set to False, the module does not track such statistics and always uses batch statistics in both training and eval modes. Default: False.
- activation – non-linear function to activate the linear result. It accepts any callable function as well as a recognizable str. A list of possible str is in function.
- kwargs – extra keyword arguments to pass to activation.
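Instance normalization differs from batch normalization only in which axes the statistics are reduced over: per sample and per channel, rather than across the batch. A minimal NumPy sketch (illustrative names, not the library API):

```python
import numpy as np

def instance_norm_1d(x, eps=1e-5):
    # x: (batch, channels, length). Each (sample, channel) signal is
    # normalized by its own mean and variance over the length axis,
    # so no statistic is shared across the batch.
    mean = x.mean(axis=2, keepdims=True)
    var = x.var(axis=2, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(4, 3, 32)
y = instance_norm_1d(x)   # every (sample, channel) slice has mean ~0, variance ~1
```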
class neuralnet_pytorch.layers.InstanceNorm2d(input_shape, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False, activation=None, **kwargs)[source]¶
Performs instance normalization on 2D signals.
Parameters:
- input_shape – shape of the 4D input image. If a single integer is passed, it is treated as the number of input channels and other sizes are unknown.
- eps – a value added to the denominator for numerical stability. Default: 1e-5.
- momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1.
- affine – a boolean value that, when set to True, gives this module learnable affine parameters. Default: True.
- track_running_stats – a boolean value that, when set to True, makes this module track the running mean and variance; when set to False, the module does not track such statistics and always uses batch statistics in both training and eval modes. Default: False.
- activation – non-linear function to activate the linear result. It accepts any callable function as well as a recognizable str. A list of possible str is in function.
- kwargs – extra keyword arguments to pass to activation.
class neuralnet_pytorch.layers.GroupNorm(input_shape, num_groups, eps=1e-05, affine=True, activation=None, **kwargs)[source]¶
Performs group normalization on 2D signals.
Parameters:
- input_shape – shape of the 4D input image. If a single integer is passed, it is treated as the number of input channels and other sizes are unknown.
- num_groups (int) – number of groups to separate the channels into.
- eps – a value added to the denominator for numerical stability. Default: 1e-5.
- affine – a boolean value that, when set to True, gives this module learnable affine parameters. Default: True.
- activation – non-linear function to activate the linear result. It accepts any callable function as well as a recognizable str. A list of possible str is in function.
- kwargs – extra keyword arguments to pass to activation.
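Group normalization splits the channels into num_groups groups and normalizes each group per sample, over that group's channels and spatial positions. A NumPy sketch of this reduction (illustrative, not the library's code):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    # x: (batch, channels, height, width); channels must divide evenly
    # into num_groups. Statistics are computed per sample, per group.
    n, c, h, w = x.shape
    assert c % num_groups == 0
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

x = np.random.randn(2, 6, 4, 4)
y = group_norm(x, num_groups=3)
```

The two extremes are instructive: num_groups=1 normalizes each sample over all of (C, H, W), while num_groups=C reduces to instance normalization.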
Custom Normalization Layers¶
class neuralnet_pytorch.layers.FeatureNorm1d(input_shape, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, activation=None, no_scale=False, **kwargs)[source]¶
Performs batch normalization over the last dimension of the input.
Parameters:
- input_shape – shape of the input tensor. If an integer is passed, it is treated as the size of each input sample.
- eps – a value added to the denominator for numerical stability. Default: 1e-5.
- momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1.
- affine – a boolean value that, when set to True, gives this module learnable affine parameters. Default: True.
- track_running_stats – a boolean value that, when set to True, makes this module track the running mean and variance; when set to False, the module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True.
- activation – non-linear function to activate the linear result. It accepts any callable function as well as a recognizable str. A list of possible str is in function.
- no_scale (bool) – when set to True, the trainable scale parameter is disabled. Default: False.
- kwargs – extra keyword arguments to pass to activation.
class neuralnet_pytorch.layers.AdaIN(module, dim=(2, 3))[source]¶
The original Adaptive Instance Normalization from https://arxiv.org/abs/1703.06868.
\(Y_1 = \text{module}(X_1)\)
\(Y_2 = \text{module}(X_2)\)
\(Y = \sigma_{Y_2} * (Y_1 - \mu_{Y_1}) / \sigma_{Y_1} + \mu_{Y_2}\)
Parameters:
- module – a torch module which generates target feature maps.
- dim – dimension to reduce in the target feature maps. Default: (2, 3).
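The formula above can be sketched directly on two feature maps: the target map Y1 is stripped of its own channel-wise statistics over dim and re-dressed with those of the style map Y2. An illustrative NumPy version, with the module applications omitted:

```python
import numpy as np

def adain(y1, y2, dim=(2, 3), eps=1e-5):
    # y1: target feature maps, y2: style feature maps, both (N, C, H, W).
    mu1 = y1.mean(axis=dim, keepdims=True)
    mu2 = y2.mean(axis=dim, keepdims=True)
    sd1 = y1.std(axis=dim, keepdims=True)
    sd2 = y2.std(axis=dim, keepdims=True)
    # eps keeps the division stable; the formula itself divides by sigma_{Y_1}.
    return sd2 * (y1 - mu1) / (sd1 + eps) + mu2

y1 = np.random.randn(2, 3, 8, 8) * 2.0 + 1.0
y2 = np.random.randn(2, 3, 8, 8) * 5.0 - 2.0
out = adain(y1, y2)   # out carries y2's channel-wise mean and (almost) std
```

The same transfer of statistics underlies MultiModuleAdaIN and MultiInputAdaIN below; they differ only in whether one or two modules, and one or two inputs, produce Y1 and Y2.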
class neuralnet_pytorch.layers.MultiModuleAdaIN(module1, module2, dim1=(2, 3), dim2=(2, 3))[source]¶
A modified Adaptive Instance Normalization from https://arxiv.org/abs/1703.06868.
\(Y_1 = \text{module1}(X)\)
\(Y_2 = \text{module2}(X)\)
\(Y = \sigma_{Y_2} * (Y_1 - \mu_{Y_1}) / \sigma_{Y_1} + \mu_{Y_2}\)
Parameters:
- module1 – a torch module which generates target feature maps.
- module2 – a torch module which generates style feature maps.
- dim1 – dimension to reduce in the target feature maps. Default: (2, 3).
- dim2 – dimension to reduce in the style feature maps. Default: (2, 3).
class neuralnet_pytorch.layers.MultiInputAdaIN(module1, module2, dim1=(2, 3), dim2=(2, 3))[source]¶
A modified Adaptive Instance Normalization from https://arxiv.org/abs/1703.06868.
\(Y_1 = \text{module1}(X_1)\)
\(Y_2 = \text{module2}(X_2)\)
\(Y = \sigma_{Y_2} * (Y_1 - \mu_{Y_1}) / \sigma_{Y_1} + \mu_{Y_2}\)
Parameters:
- module1 – a torch module which generates target feature maps.
- module2 – a torch module which generates style feature maps.
- dim1 – dimension to reduce in the target feature maps. Default: (2, 3).
- dim2 – dimension to reduce in the style feature maps. Default: (2, 3).