
lumin.nn.models.layers package

Submodules

lumin.nn.models.layers.activations module

lumin.nn.models.layers.activations.lookup_act(act)[source]

Map activation name to class

Parameters

act (str) – string representation of activation function

Return type

Any

Returns

Class implementing requested activation function
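
A minimal usage sketch (illustrative only; 'swish' is assumed to be one of the accepted activation names in the installed version):

>>> from lumin.nn.models.layers.activations import lookup_act
>>> act = lookup_act('swish')  # resolves the string name to the Swish implementation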

class lumin.nn.models.layers.activations.Swish(inplace=False)[source]

Bases: torch.nn.modules.module.Module

Non-trainable Swish activation function (https://arxiv.org/abs/1710.05941)

Parameters

inplace – whether to apply activation inplace

Examples

>>> swish = Swish()

forward(x)[source]

Pass tensor through Swish function

Parameters

x (Tensor) – incoming tensor

Return type

Tensor

Returns

Resulting tensor
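
By way of illustration, a short sketch of passing a batch through the layer (shapes are arbitrary):

>>> import torch
>>> from lumin.nn.models.layers.activations import Swish
>>> swish = Swish()
>>> x = torch.randn(32, 8)  # arbitrary batch of 8 features
>>> y = swish(x)            # output has the same shape as x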

lumin.nn.models.layers.batchnorms module

class lumin.nn.models.layers.batchnorms.LCBatchNorm1d(bn)[source]

Bases: torch.nn.modules.module.Module

Wrapper class for 1D batchnorm to make it run over (Batch x length x channel) data for use in NNs designed to be broadcast across matrix data.

Parameters

bn (BatchNorm1d) – base 1D batchnorm module to call

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Return type

Tensor
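
An illustrative sketch (shapes are assumptions) of wrapping a standard BatchNorm1d so it can be applied to batch x length x channel data:

>>> import torch
>>> from torch import nn
>>> from lumin.nn.models.layers.batchnorms import LCBatchNorm1d
>>> bn = LCBatchNorm1d(nn.BatchNorm1d(8))  # 8 channels
>>> x = torch.randn(32, 10, 8)             # batch x length x channel
>>> y = bn(x)                              # output keeps the same shape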

class lumin.nn.models.layers.batchnorms.RunningBatchNorm1d(nf, mom=0.1, n_warmup=20, eps=1e-05)[source]

Bases: torch.nn.modules.module.Module

1D Running batchnorm implementation from fastai (https://github.com/fastai/course-v3), distributed under the Apache 2.0 licence. Modifications: adaptation to 1D & 3D, addition of eps in the mom1 calculation, type hinting, docs

Parameters
  • nf (int) – number of features/channels

  • mom (float) – momentum (fraction to add to running averages)

  • n_warmup (int) – number of warmup iterations (during which variance is clamped)

  • eps (float) – epsilon to prevent division by zero
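
An illustrative sketch, assuming the layer is applied to flat batch x features data:

>>> import torch
>>> from lumin.nn.models.layers.batchnorms import RunningBatchNorm1d
>>> rbn = RunningBatchNorm1d(nf=16)
>>> y = rbn(torch.randn(64, 16))  # running statistics are updated while training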

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Return type

Tensor

update_stats(x)[source]

Update the running averages of the batch statistics using the incoming batch of data.

Return type

None

class lumin.nn.models.layers.batchnorms.RunningBatchNorm2d(nf, mom=0.1, n_warmup=20, eps=1e-05)[source]

Bases: lumin.nn.models.layers.batchnorms.RunningBatchNorm1d

2D Running batchnorm implementation from fastai (https://github.com/fastai/course-v3), distributed under the Apache 2.0 licence. Modifications: addition of eps in the mom1 calculation, type hinting, docs

Parameters
  • nf (int) – number of features/channels

  • mom (float) – momentum (fraction to add to running averages)

  • eps (float) – epsilon to prevent division by zero
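
An illustrative sketch, assuming image-like batch x channels x height x width data:

>>> import torch
>>> from lumin.nn.models.layers.batchnorms import RunningBatchNorm2d
>>> rbn = RunningBatchNorm2d(nf=3)
>>> y = rbn(torch.randn(8, 3, 32, 32))  # output keeps the same shape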

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Return type

Tensor

class lumin.nn.models.layers.batchnorms.RunningBatchNorm3d(nf, mom=0.1, n_warmup=20, eps=1e-05)[source]

Bases: lumin.nn.models.layers.batchnorms.RunningBatchNorm2d

3D Running batchnorm implementation from fastai (https://github.com/fastai/course-v3), distributed under the Apache 2.0 licence. Modifications: adaptation to 3D, addition of eps in the mom1 calculation, type hinting, docs

Parameters
  • nf (int) – number of features/channels

  • mom (float) – momentum (fraction to add to running averages)

  • eps (float) – epsilon to prevent division by zero

lumin.nn.models.layers.mish module

This file contains code modified from https://github.com/digantamisra98/Mish, which is made available under the following MIT Licence:

MIT License

Copyright (c) 2019 Diganta Misra

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The Apache Licence 2.0, under which the majority of the rest of LUMIN is distributed, does not apply to the code within this file.

class lumin.nn.models.layers.mish.Mish[source]

Bases: torch.nn.modules.module.Module

Applies the mish function element-wise: mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))

Shape:

  • Input: (N, *) where * means any number of additional dimensions

  • Output: (N, *), same shape as the input

Examples

>>> m = Mish()
>>> input = torch.randn(2)
>>> output = m(input)

forward(input)[source]

Forward pass of the function.

lumin.nn.models.layers.self_attention module

class lumin.nn.models.layers.self_attention.SelfAttention(n_fpv, n_a, do=0, bn=False, act='relu', lookup_init=<function lookup_normal_init>, lookup_act=<function lookup_act>, bn_class=<class 'torch.nn.modules.batchnorm.BatchNorm1d'>)[source]

Bases: torch.nn.modules.module.Module

Class for applying self attention (Vaswani et al. 2017 (https://arxiv.org/abs/1706.03762)) to features per vertex.

Parameters
  • n_fpv (int) – number of features per vertex to expect

  • n_a (int) – width of self attention representation (paper recommends n_fpv//4)

  • do (float) – dropout rate to be applied to hidden layers in the NNs

  • bn (bool) – whether batch normalisation should be applied to hidden layers in the NNs

  • act (str) – activation function to apply to hidden layers in the NNs

  • lookup_init (Callable[[str, Optional[int], Optional[int]], Callable[[Tensor], None]]) – function taking choice of activation function, number of inputs, and number of outputs, and returning a function to initialise layer weights.

  • lookup_act (Callable[[str], Any]) – function taking choice of activation function and returning an activation function layer

  • bn_class (Callable[[int], Module]) – class to use for BatchNorm, default is LCBatchNorm1d

forward(x)[source]

Augments features per vertex

Arguments:

x: incoming data (batch x vertices x features)

Return type

Tensor

Returns

augmented features (batch x vertices x new features)
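
An illustrative sketch (argument values and shapes are assumptions) of applying self attention to a batch of vertices:

>>> import torch
>>> from lumin.nn.models.layers.self_attention import SelfAttention
>>> sa = SelfAttention(n_fpv=16, n_a=4)  # n_a ~ n_fpv//4, as recommended
>>> x = torch.randn(32, 10, 16)          # batch x vertices x features per vertex
>>> y = sa(x)                            # batch x vertices x new features; see get_out_size()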

get_out_size()[source]

Get size of output.

Return type

int

Module contents
