lumin.nn.losses package¶
Submodules¶
lumin.nn.losses.advanced_losses module¶
class lumin.nn.losses.advanced_losses.WeightedFractionalMSE(weight=None)[source]¶
Bases: torch.nn.modules.loss.MSELoss
Class for computing the Mean fractional Squared-Error loss (<Delta^2/true>) with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.
- Parameters
weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, applied to the data passed when computing the loss
- Examples::
>>> loss = WeightedFractionalMSE()
>>>
>>> loss = WeightedFractionalMSE(weights)
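As a rough sketch of the quantity described above (an assumption based on the <Delta^2/true> definition, not lumin's exact implementation), the fractional MSE can be written in plain PyTorch:

```python
import torch

def weighted_fractional_mse(pred, true, weight=None):
    """Sketch of <Delta^2/true>: squared error divided by the target,
    optionally scaled by per-sample weights (mean reduction assumed)."""
    err = (pred - true).pow(2) / true
    if weight is not None:
        err = err * weight
    return err.mean()

pred = torch.tensor([2.0, 4.0])
true = torch.tensor([1.0, 2.0])
loss = weighted_fractional_mse(pred, true)  # (1/1 + 4/2) / 2 = 1.5
```

Dividing by the target makes large-magnitude targets contribute relative, rather than absolute, errors.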
class lumin.nn.losses.advanced_losses.WeightedBinnedHuber(perc, bins, mom=0.1, weight=None)[source]¶
Bases: torch.nn.modules.loss.MSELoss
Class for computing the Huberised Mean Squared-Error loss (<Delta^2>) with optional weights per prediction. Losses are soft-clamped with a Huber-like term above an adaptive percentile in bins of the target. The thresholds used to transition from MSE to MAE per bin are initialised using the first batch of data as the value of the specified percentile in each bin; subsequently, the thresholds evolve according to T <- (1-mom)*T + mom*T_batch, where T_batch are the percentiles computed on the current batch and the momentum mom lies in [0,1].
For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.
- Parameters
perc (float) – quantile of the data in each bin above which to use MAE rather than MSE
bins (Tensor) – tensor of edges for the binning of the target data
mom – momentum for the running average of the thresholds
weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, applied to the data passed when computing the loss
- Examples::
>>> loss = WeightedBinnedHuber(perc=0.68)
>>>
>>> loss = WeightedBinnedHuber(perc=0.68, weight=weights)
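The threshold update rule above can be illustrated with a short sketch (a hypothetical helper, not the class's actual code): the perc-quantile of the residuals in each target bin is blended into the running thresholds with momentum mom.

```python
import torch

def update_thresholds(T, resids, bin_idx, perc, mom=0.1):
    """Running update T <- (1-mom)*T + mom*T_batch per target bin,
    where T_batch is the perc-quantile of residuals in that bin."""
    T_new = T.clone()
    for b in range(len(T)):
        vals = resids[bin_idx == b]
        if vals.numel() > 0:  # leave the threshold unchanged for empty bins
            t_batch = torch.quantile(vals, perc)
            T_new[b] = (1 - mom) * T[b] + mom * t_batch
    return T_new

T = torch.tensor([1.0, 1.0])
resids = torch.tensor([0.0, 2.0, 0.0, 4.0])
bin_idx = torch.tensor([0, 0, 1, 1])
T = update_thresholds(T, resids, bin_idx, perc=1.0)  # -> [1.1, 1.3]
```

With a small mom the thresholds track slow drifts in the residual distribution while staying stable against batch-to-batch noise.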
class lumin.nn.losses.advanced_losses.WeightedFractionalBinnedHuber(perc, bins, mom=0.1, weight=None)[source]¶
Bases: lumin.nn.losses.advanced_losses.WeightedBinnedHuber
Class for computing the Huberised Mean fractional Squared-Error loss (<Delta^2/true>) with optional weights per prediction. Losses are soft-clamped with a Huber-like term above an adaptive percentile in bins of the target. The thresholds used to transition from MSE to MAE per bin are initialised using the first batch of data as the value of the specified percentile in each bin; subsequently, the thresholds evolve according to T <- (1-mom)*T + mom*T_batch, where T_batch are the percentiles computed on the current batch and the momentum mom lies in [0,1].
For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.
- Parameters
perc (float) – quantile of the data in each bin above which to use MAE rather than MSE
bins (Tensor) – tensor of edges for the binning of the target data
mom – momentum for the running average of the thresholds
weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, applied to the data passed when computing the loss
lumin.nn.losses.basic_weighted module¶
class lumin.nn.losses.basic_weighted.WeightedMSE(weight=None)[source]¶
Bases: torch.nn.modules.loss.MSELoss
Class for computing Mean Squared-Error loss with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.
- Parameters
weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, applied to the data passed when computing the loss
- Examples::
>>> loss = WeightedMSE()
>>>
>>> loss = WeightedMSE(weights)
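WeightedMSE behaves like MSELoss with an extra per-sample weight; a hedged plain-PyTorch sketch of that quantity (mean reduction assumed, not the class's actual code):

```python
import torch

def weighted_mse(pred, true, weight=None):
    """Element-wise squared error, optionally scaled per sample."""
    err = (pred - true).pow(2)
    if weight is not None:
        err = err * weight
    return err.mean()

pred = torch.tensor([1.0, 3.0])
true = torch.tensor([0.0, 1.0])
# unweighted: (1 + 4) / 2 = 2.5; with weight [1, 0.5]: (1 + 2) / 2 = 1.5
```

Without weights this matches `torch.nn.MSELoss` exactly; the weights simply rescale each sample's contribution before the reduction.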
class lumin.nn.losses.basic_weighted.WeightedMAE(weight=None)[source]¶
Bases: torch.nn.modules.loss.L1Loss
Class for computing Mean Absolute-Error loss with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.
- Parameters
weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, applied to the data passed when computing the loss
- Examples::
>>> loss = WeightedMAE()
>>>
>>> loss = WeightedMAE(weights)
class lumin.nn.losses.basic_weighted.WeightedCCE(weight=None)[source]¶
Bases: torch.nn.modules.loss.NLLLoss
Class for computing Categorical Cross-Entropy loss with optional weights per prediction. For compatibility with basic PyTorch losses, weights are passed during initialisation rather than when computing the loss.
- Parameters
weight (Optional[Tensor]) – sample weights as a PyTorch Tensor, applied to the data passed when computing the loss
- Examples::
>>> loss = WeightedCCE()
>>>
>>> loss = WeightedCCE(weights)
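Since WeightedCCE subclasses NLLLoss, its inputs are expected to be log-probabilities (e.g. the output of `log_softmax`), not raw logits; a short reminder of that convention in plain PyTorch:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.0], [0.0, 2.0]])
targets = torch.tensor([0, 1])

# NLLLoss-style losses consume log-probabilities, so apply log_softmax first
log_probs = F.log_softmax(logits, dim=-1)
loss = F.nll_loss(log_probs, targets)

# log_softmax + nll_loss is equivalent to cross-entropy on the raw logits
```

Passing raw logits or plain probabilities to an NLLLoss-based class silently produces wrong loss values, so ensure the model's final layer emits log-probabilities.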
lumin.nn.losses.hep_losses module¶
class lumin.nn.losses.hep_losses.SignificanceLoss(weight, sig_wgt, bkg_wgt, func)[source]¶
Bases: torch.nn.modules.module.Module
General class for implementing significance-based loss functions, e.g. the Asimov loss (https://arxiv.org/abs/1806.00322). For compatibility with basic PyTorch losses, event weights are passed during initialisation rather than when computing the loss.
- Parameters
weight (Tensor) – sample weights as a PyTorch Tensor, applied to the data passed when computing the loss
sig_wgt – total weight of signal events
bkg_wgt – total weight of background events
func – callable which returns a float significance based on signal and background weights
- Examples::
>>> loss = SignificanceLoss(weight, sig_wgt=sig_wgt,
...                         bkg_wgt=bkg_wgt, func=calc_ams_torch)
>>>
>>> loss = SignificanceLoss(weight, sig_wgt=sig_wgt,
...                         bkg_wgt=bkg_wgt,
...                         func=partial(calc_ams_torch, br=10))
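The internals of `calc_ams_torch` are not shown here; as an illustrative sketch, the Approximate Median Significance from the paper linked above (with regularisation term br) can be written as follows. Treat this as an assumption about the formula, not lumin's exact implementation.

```python
import torch

def ams(s, b, br=0.0):
    """Approximate Median Significance:
    AMS = sqrt(2*((s+b+br)*ln(1 + s/(b+br)) - s))
    where s and b are the total signal and background weights."""
    return torch.sqrt(2 * ((s + b + br) * torch.log(1 + s / (b + br)) - s))

# For b >> s, AMS approaches the familiar s/sqrt(b)
s = torch.tensor(100.0, dtype=torch.float64)
b = torch.tensor(10000.0, dtype=torch.float64)
sig = ams(s, b)  # ~0.998, close to s/sqrt(b) = 1.0
```

A nonzero br regularises the significance when the background estimate is small, preventing the loss from diverging for near-empty selections.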