lumin.nn.models.layers package¶
Submodules¶
lumin.nn.models.layers.activations module¶
lumin.nn.models.layers.activations.lookup_act(act)[source]¶
Map activation name to class.

- Parameters
  act (str) – string representation of activation function
- Return type
  Any
- Returns
  Class implementing requested activation function
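A usage sketch; that 'relu' is among the recognised activation names is an assumption:

>>> from lumin.nn.models.layers.activations import lookup_act
>>> relu = lookup_act('relu')  # maps the name to the layer implementing ReLU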
class lumin.nn.models.layers.activations.Swish(inplace=False)[source]¶
Bases: torch.nn.modules.module.Module

Non-trainable Swish activation function (https://arxiv.org/abs/1710.05941).

- Parameters
  inplace – whether to apply activation inplace

Examples:

>>> swish = Swish()
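A quick check of the expected behaviour (a sketch; assumes the standard definition swish(x) = x * sigmoid(x) from the referenced paper):

>>> import torch
>>> swish = Swish()
>>> x = torch.randn(4)
>>> torch.allclose(swish(x), x * torch.sigmoid(x))  # assumed definition
True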
lumin.nn.models.layers.batchnorms module¶
class lumin.nn.models.layers.batchnorms.LCBatchNorm1d(bn)[source]¶
Bases: torch.nn.modules.module.Module

Wrapper class for 1D batchnorm to make it run over (Batch x length x channel) data, for use in NNs designed to be broadcast across matrix data.

- Parameters
  bn (BatchNorm1d) – base 1D batchnorm module to call

forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

- Return type
  Tensor
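A usage sketch, assuming the (Batch x length x channel) layout described above and a shape-preserving output:

>>> import torch
>>> from torch import nn
>>> from lumin.nn.models.layers.batchnorms import LCBatchNorm1d
>>> bn = LCBatchNorm1d(nn.BatchNorm1d(8))  # base batchnorm over 8 channels
>>> x = torch.randn(4, 10, 8)  # batch of 4, length 10, 8 channels
>>> bn(x).shape  # call the module, not .forward(), so hooks run
torch.Size([4, 10, 8])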
class lumin.nn.models.layers.batchnorms.RunningBatchNorm1d(nf, mom=0.1, n_warmup=20, eps=1e-05)[source]¶
Bases: torch.nn.modules.module.Module

1D running batchnorm implementation from fastai (https://github.com/fastai/course-v3), distributed under the Apache 2.0 licence. Modifications: adaptation to 1D & 3D, add eps in mom1 calculation, type hinting, docs.

- Parameters
  nf (int) – number of features/channels
  mom (float) – momentum (fraction to add to running averages)
  n_warmup (int) – number of warmup iterations (during which variance is clamped)
  eps (float) – epsilon to prevent division by zero

forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

- Return type
  Tensor
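A construction sketch; the flat (batch, nf) input layout used here is an assumption:

>>> import torch
>>> from lumin.nn.models.layers.batchnorms import RunningBatchNorm1d
>>> rbn = RunningBatchNorm1d(nf=8, mom=0.1, n_warmup=20, eps=1e-05)
>>> x = torch.randn(32, 8)  # assumed (batch, nf) layout
>>> y = rbn(x)  # variance is clamped during the first n_warmup batches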
class lumin.nn.models.layers.batchnorms.RunningBatchNorm2d(nf, mom=0.1, n_warmup=20, eps=1e-05)[source]¶
Bases: lumin.nn.models.layers.batchnorms.RunningBatchNorm1d

2D running batchnorm implementation from fastai (https://github.com/fastai/course-v3), distributed under the Apache 2.0 licence. Modifications: add eps in mom1 calculation, type hinting, docs.

- Parameters
  nf (int) – number of features/channels
  mom (float) – momentum (fraction to add to running averages)
  eps (float) – epsilon to prevent division by zero

forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

- Return type
  Tensor
class lumin.nn.models.layers.batchnorms.RunningBatchNorm3d(nf, mom=0.1, n_warmup=20, eps=1e-05)[source]¶
Bases: lumin.nn.models.layers.batchnorms.RunningBatchNorm2d

3D running batchnorm implementation from fastai (https://github.com/fastai/course-v3), distributed under the Apache 2.0 licence. Modifications: adaptation to 3D, add eps in mom1 calculation, type hinting, docs.

- Parameters
  nf (int) – number of features/channels
  mom (float) – momentum (fraction to add to running averages)
  eps (float) – epsilon to prevent division by zero
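The 2D and 3D variants differ from the 1D version only in the dimensionality they expect. A construction sketch; the conv-style (batch, nf, H, W) and (batch, nf, D, H, W) layouts are assumptions:

>>> import torch
>>> from lumin.nn.models.layers.batchnorms import RunningBatchNorm2d, RunningBatchNorm3d
>>> rbn2d = RunningBatchNorm2d(nf=16)
>>> y2 = rbn2d(torch.randn(8, 16, 28, 28))  # assumed (batch, nf, H, W)
>>> rbn3d = RunningBatchNorm3d(nf=16)
>>> y3 = rbn3d(torch.randn(8, 16, 4, 28, 28))  # assumed (batch, nf, D, H, W)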
lumin.nn.models.layers.mish module¶
This file contains code modified from https://github.com/digantamisra98/Mish, which is made available under the following MIT Licence:

MIT License
Copyright (c) 2019 Diganta Misra
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The Apache Licence 2.0, under which the majority of the rest of LUMIN is distributed, does not apply to the code within this file.
lumin.nn.models.layers.self_attention module¶
class lumin.nn.models.layers.self_attention.SelfAttention(n_fpv, n_a, do=0, bn=False, act='relu', lookup_init=<function lookup_normal_init>, lookup_act=<function lookup_act>, bn_class=<class 'torch.nn.modules.batchnorm.BatchNorm1d'>)[source]¶
Bases: torch.nn.modules.module.Module

Class for applying self attention (Vaswani et al. 2017, https://arxiv.org/abs/1706.03762) to features per vertex.

- Parameters
  n_fpv (int) – number of features per vertex to expect
  n_a (int) – width of self attention representation (paper recommends n_fpv//4)
  do (float) – dropout rate to be applied to hidden layers in the NNs
  bn (bool) – whether batch normalisation should be applied to hidden layers in the NNs
  act (str) – activation function to apply to hidden layers in the NNs
  lookup_init (Callable[[str, Optional[int], Optional[int]], Callable[[Tensor], None]]) – function taking choice of activation function, number of inputs, and number of outputs, and returning a function to initialise layer weights
  lookup_act (Callable[[str], Any]) – function taking choice of activation function and returning an activation function layer
  bn_class (Callable[[int], Module]) – class to use for BatchNorm; default is LCBatchNorm1d
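A construction sketch; the (batch, vertices, features-per-vertex) input layout is an assumption based on the per-vertex description above:

>>> import torch
>>> from lumin.nn.models.layers.self_attention import SelfAttention
>>> n_fpv = 16
>>> sa = SelfAttention(n_fpv=n_fpv, n_a=n_fpv // 4)  # n_a = n_fpv//4 per the paper's recommendation
>>> x = torch.randn(4, 10, n_fpv)  # assumed: 4 events, 10 vertices, n_fpv features each
>>> out = sa(x)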