ezmsg.learn.model.mlp_old#

Classes

class MLP(in_channels, hidden_channels, norm_layer=None, activation_layer=<class 'torch.nn.modules.activation.ReLU'>, inplace=None, bias=True, dropout=0.0)[source]#

Bases: Sequential

__init__(in_channels, hidden_channels, norm_layer=None, activation_layer=<class 'torch.nn.modules.activation.ReLU'>, inplace=None, bias=True, dropout=0.0)[source]#

Copied from the torchvision MLP implementation (torchvision.ops.MLP).

Parameters:
  • in_channels (int) – Number of input channels

  • hidden_channels (list[int]) – List of the hidden channel dimensions; the last entry sets the output dimension.

  • norm_layer (Module | None) – Norm layer that will be stacked on top of each linear layer. If None, this layer won’t be used.

  • activation_layer (Module | None) – Activation function that will be stacked on top of the normalization layer (if not None), otherwise on top of the linear layer. If None, this layer won’t be used.

  • inplace (bool | None) – Parameter for the activation layer, which can optionally do the operation in-place. Default is None, which uses the respective default values of the activation_layer and Dropout layer.

  • bias (bool) – Whether to use bias in the linear layer.

  • dropout (float) – The probability for the dropout layer.
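
A minimal usage sketch (the layer sizes, input shape, and choice of norm/activation layers below are illustrative, not prescribed by the API): the model maps 8 input features through a 32-unit hidden layer to a 16-dimensional output.

import torch
from ezmsg.learn.model.mlp_old import MLP

# 8 input features -> 32-unit hidden layer -> 16-dimensional output.
# The last entry of hidden_channels is the output dimension.
model = MLP(
    in_channels=8,
    hidden_channels=[32, 16],
    norm_layer=torch.nn.LayerNorm,   # stacked after each hidden linear layer
    activation_layer=torch.nn.ReLU,  # stacked after the norm layer
    dropout=0.1,                     # dropout follows every layer, including the last
)

x = torch.randn(4, 8)  # batch of 4 samples, 8 features each
y = model(x)
print(y.shape)         # torch.Size([4, 16])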
