
lumin.nn.losses package

Submodules

lumin.nn.losses.advanced_losses module

class lumin.nn.losses.advanced_losses.WeightedFractionalMSE(weight=None)[source]

Bases: torch.nn.modules.loss.MSELoss

Class for computing the Mean Fractional Squared-Error loss (<Delta^2/true>) with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.

Parameters

weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, to be used with the data that will be passed when computing the loss

Examples::
>>> loss = WeightedFractionalMSE()
>>>
>>> loss = WeightedFractionalMSE(weights)
forward(input, target)[source]

Evaluate loss for given predictions

Parameters
  • input (Tensor) – prediction tensor

  • target (Tensor) – target tensor

Return type

Tensor

Returns

(weighted) loss
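
To make the semantics concrete, here is a minimal, self-contained usage sketch. The tensor values are illustrative, and the manual formula is an assumption based on the description above (weights acting as a per-sample scaling inside the mean of Delta^2/true):

import torch
from lumin.nn.losses.advanced_losses import WeightedFractionalMSE

preds   = torch.tensor([[1.1], [2.2], [2.7]])
targets = torch.tensor([[1.0], [2.0], [3.0]])
weights = torch.tensor([[1.0], [0.5], [2.0]])

# Weights are fixed at initialisation; forward only takes (input, target)
loss = WeightedFractionalMSE(weights)
value = loss(preds, targets)

# Assumed behaviour: the weighted mean of (pred - true)^2 / true
manual = torch.mean(weights * (preds - targets) ** 2 / targets)
print(value, manual)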

class lumin.nn.losses.advanced_losses.WeightedBinnedHuber(perc, bins, mom=0.1, weight=None)[source]

Bases: torch.nn.modules.loss.MSELoss

Class for computing the Huberised Mean Squared-Error loss (<Delta^2>) with optional weights per prediction. Losses are soft-clamped with a Huber-like term above an adaptive percentile in bins of the target. The thresholds used to transition from MSE to MAE per bin are initialised using the first batch of data as the value of the specified percentile in each bin; subsequently, the thresholds evolve according to T <- (1-mom)*T + mom*T_batch, where T_batch are the percentiles computed on the current batch and the momentum mom lies in [0,1].

For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.

Parameters
  • perc (float) – quantile of data in each bin above which to use MAE rather than MSE

  • bins (Tensor) – tensor of edges for the binning of the target data

  • mom (float) – momentum for the running average of the thresholds

  • weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, to be used with the data that will be passed when computing the loss

Examples::
>>> loss = WeightedBinnedHuber(perc=0.68)
>>>
>>> loss = WeightedBinnedHuber(perc=0.68, weight=weights)
forward(input, target)[source]

Evaluate loss for given predictions

Parameters
  • input (Tensor) – prediction tensor

  • target (Tensor) – target tensor

Return type

Tensor

Returns

(weighted) loss
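
The adaptive thresholding amounts to an exponential moving average per target bin. The following standalone sketch illustrates the update rule described above; it is not LUMIN's internal implementation, and the names (update_thresholds, errors) are hypothetical:

import torch

def update_thresholds(t, errors, targets, bin_edges, perc, mom=0.1):
    """Per-bin EMA of the perc-th percentile: T <- (1-mom)*T + mom*T_batch."""
    for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        mask = (targets >= lo) & (targets < hi)
        if mask.sum() == 0:
            continue  # keep the running value when a bin receives no data
        t_batch = torch.quantile(errors[mask], perc)  # percentile on this batch
        t[i] = (1 - mom) * t[i] + mom * t_batch
    return t

Errors below a bin's threshold contribute quadratically (MSE), while errors above it are soft-clamped to a linear, MAE-like penalty, reducing the influence of outliers in each region of the target.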

class lumin.nn.losses.advanced_losses.WeightedFractionalBinnedHuber(perc, bins, mom=0.1, weight=None)[source]

Bases: lumin.nn.losses.advanced_losses.WeightedBinnedHuber

Class for computing the Huberised Mean Fractional Squared-Error loss (<Delta^2/true>) with optional weights per prediction. Losses are soft-clamped with a Huber-like term above an adaptive percentile in bins of the target. The thresholds used to transition from MSE to MAE per bin are initialised using the first batch of data as the value of the specified percentile in each bin; subsequently, the thresholds evolve according to T <- (1-mom)*T + mom*T_batch, where T_batch are the percentiles computed on the current batch and the momentum mom lies in [0,1].

For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.

Parameters
  • perc (float) – quantile of data in each bin above which to use MAE rather than MSE

  • bins (Tensor) – tensor of edges for the binning of the target data

  • mom (float) – momentum for the running average of the thresholds

  • weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, to be used with the data that will be passed when computing the loss

forward(input, target)[source]

Evaluate loss for given predictions

Parameters
  • input (Tensor) – prediction tensor

  • target (Tensor) – target tensor

Return type

Tensor

Returns

(weighted) loss

lumin.nn.losses.basic_weighted module

class lumin.nn.losses.basic_weighted.WeightedMSE(weight=None)[source]

Bases: torch.nn.modules.loss.MSELoss

Class for computing Mean Squared-Error loss with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.

Parameters

weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, to be used with the data that will be passed when computing the loss

Examples::
>>> loss = WeightedMSE()
>>>
>>> loss = WeightedMSE(weights)
forward(input, target)[source]

Evaluate loss for given predictions

Parameters
  • input (Tensor) – prediction tensor

  • target (Tensor) – target tensor

Return type

Tensor

Returns

(weighted) loss
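
As a sanity check on the weighting convention, the weighted loss should agree with a manually weighted mean of squared errors. A sketch, assuming the weights scale each sample's squared error inside the mean:

import torch
from lumin.nn.losses.basic_weighted import WeightedMSE

preds   = torch.tensor([[0.9], [2.4]])
targets = torch.tensor([[1.0], [2.0]])
weights = torch.tensor([[2.0], [1.0]])

loss = WeightedMSE(weights)
manual = torch.mean(weights * (preds - targets) ** 2)
print(loss(preds, targets), manual)  # expected to agree under the assumption above

WeightedMAE below follows the same pattern with absolute rather than squared errors.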

class lumin.nn.losses.basic_weighted.WeightedMAE(weight=None)[source]

Bases: torch.nn.modules.loss.L1Loss

Class for computing Mean Absolute-Error loss with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.

Parameters

weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, to be used with the data that will be passed when computing the loss

Examples::
>>> loss = WeightedMAE()
>>>
>>> loss = WeightedMAE(weights)
forward(input, target)[source]

Evaluate loss for given predictions

Parameters
  • input (Tensor) – prediction tensor

  • target (Tensor) – target tensor

Return type

Tensor

Returns

(weighted) loss

class lumin.nn.losses.basic_weighted.WeightedCCE(weight=None)[source]

Bases: torch.nn.modules.loss.NLLLoss

Class for computing Categorical Cross-Entropy loss with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.

Parameters

weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, to be used with the data that will be passed when computing the loss

Examples::
>>> loss = WeightedCCE()
>>>
>>> loss = WeightedCCE(weights)
forward(input, target)[source]

Evaluate loss for given predictions

Parameters
  • input (Tensor) – prediction tensor

  • target (Tensor) – target tensor

Return type

Tensor

Returns

(weighted) loss
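
Since WeightedCCE derives from NLLLoss, the prediction tensor is expected to hold log-probabilities (e.g. the output of log_softmax) rather than raw logits, with targets given as class indices. A minimal sketch:

import torch
import torch.nn.functional as F
from lumin.nn.losses.basic_weighted import WeightedCCE

logits  = torch.randn(4, 3)              # batch of 4 events, 3 classes
preds   = F.log_softmax(logits, dim=-1)  # NLLLoss-style log-probabilities
targets = torch.tensor([0, 2, 1, 0])     # class indices

loss = WeightedCCE()
print(loss(preds, targets))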

lumin.nn.losses.hep_losses module

class lumin.nn.losses.hep_losses.SignificanceLoss(weight, sig_wgt, bkg_wgt, func)[source]

Bases: torch.nn.modules.module.Module

General class for implementing significance-based loss functions, e.g. the Asimov loss (https://arxiv.org/abs/1806.00322). For compatibility with basic PyTorch losses, event weights are passed during initialisation rather than when computing the loss.

Parameters
  • weight (Tensor) – sample weights as a PyTorch Tensor, to be used with the data that will be passed when computing the loss

  • sig_wgt (float) – total weight of signal events

  • bkg_wgt (float) – total weight of background events

  • func (Callable[[Tensor, Tensor], Tensor]) – callable which returns the significance based on the signal and background weights

Examples::
>>> loss = SignificanceLoss(weight, sig_wgt=sig_weight,
...                         bkg_wgt=bkg_weight, func=calc_ams_torch)
>>>
>>> loss = SignificanceLoss(weight, sig_wgt=sig_weight,
...                         bkg_wgt=bkg_weight,
...                         func=partial(calc_ams_torch, br=10))
forward(input, target)[source]

Evaluate loss for given predictions

Parameters
  • input (Tensor) – prediction tensor

  • target (Tensor) – target tensor

Return type

Tensor

Returns

(weighted) loss
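
A construction-and-evaluation sketch based on the examples above. The import path for calc_ams_torch and the total class weights are illustrative assumptions; in practice sig_wgt and bkg_wgt come from the analysis:

import torch
from functools import partial
from lumin.evaluation.ams import calc_ams_torch  # assumed location of calc_ams_torch
from lumin.nn.losses.hep_losses import SignificanceLoss

# Illustrative batch: network outputs in [0,1], binary targets (1 = signal)
preds   = torch.rand(8, 1)
targets = torch.tensor([[1.], [0.], [1.], [0.], [1.], [0.], [0.], [1.]])
weights = torch.rand(8, 1)  # per-event weights, fixed at initialisation

loss = SignificanceLoss(weights, sig_wgt=50.0, bkg_wgt=1000.0,
                        func=partial(calc_ams_torch, br=10))
print(loss(preds, targets))

Since a significance is a quantity to maximise, the loss is constructed so that minimising it corresponds to maximising the output of func.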

Module contents
