
lumin.nn.callbacks package

Submodules

lumin.nn.callbacks.callback module

class lumin.nn.callbacks.callback.Callback(model=None, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.abs_callback.AbsCallback

Base callback class from which other callbacks should inherit.

Parameters
  • model (Optional[AbsModel]) – model to refer to during training

  • plot_settings (PlotSettings) – PlotSettings class

set_model(model)[source]

Sets the callback’s model in order to allow the callback to access and adjust model parameters

Parameters

model (AbsModel) – model to refer to during training

Return type

None

set_plot_settings(plot_settings)[source]

Sets the plot settings for any plots produced by the callback

Parameters

plot_settings (PlotSettings) – PlotSettings class

Return type

None
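For orientation, a user-defined callback subclasses Callback and overrides the relevant hooks (the hook names on_train_begin and on_epoch_begin appear throughout this module); a minimal sketch, with a hypothetical class name and illustrative behaviour:

>>> from lumin.nn.callbacks.callback import Callback
>>>
>>> class EpochCounter(Callback):  # hypothetical illustrative callback
...     def on_train_begin(self, **kargs) -> None:
...         self.epoch = 0  # reset state for a new training
...     def on_epoch_begin(self, **kargs) -> None:
...         self.epoch += 1
...         print(f'Beginning epoch {self.epoch}')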

lumin.nn.callbacks.cyclic_callbacks module

class lumin.nn.callbacks.cyclic_callbacks.AbsCyclicCallback(interp, param_range, cycle_mult=1, decrease_param=False, scale=1, model=None, nb=None, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.callback.Callback

Abstract class for callbacks affecting lr or mom

Parameters
  • interp (str) – string representation of interpolation function. Either ‘linear’ or ‘cosine’.

  • param_range (Tuple[float, float]) – minimum and maximum values for parameter

  • cycle_mult (int) – multiplicative factor for adjusting the cycle length after each cycle. E.g. cycle_mult=1 keeps the same cycle length, cycle_mult=2 doubles the cycle length after each cycle.

  • decrease_param (bool) – whether to begin by decreasing the parameter, otherwise begin by increasing it

  • scale (int) – multiplicative factor for setting the initial number of epochs per cycle. E.g. scale=1 means 1 epoch per cycle, scale=5 means 5 epochs per cycle.

  • model (Optional[AbsModel]) – model to refer to during training

  • nb (Optional[int]) – number of minibatches (iterations) to expect per epoch

  • plot_settings (PlotSettings) – PlotSettings class

on_batch_begin(**kargs)[source]

Computes the new value for the optimiser parameter and returns it

Return type

float

Returns

new value for optimiser parameter
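For intuition, the interpolated value partway through a cycle can be computed along the following lines. This is a sketch of the general approach under the parameters documented above, not the exact internal implementation; x stands for the fraction of the cycle completed:

>>> import numpy as np
>>>
>>> def interp_param(x, param_range, interp='cosine', decrease_param=False):
...     # x in [0, 1]: fraction of the current cycle completed
...     if decrease_param: x = 1 - x
...     lo, hi = param_range
...     if interp == 'cosine':
...         return lo + (hi - lo) * (1 - np.cos(np.pi * x)) / 2
...     return lo + (hi - lo) * x  # 'linear'
>>>
>>> interp_param(0.5, (2e-4, 2e-3), interp='linear')  # mid-cycle value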

on_batch_end(**kargs)[source]

Increments the callback’s progress through the cycle

Return type

None

on_epoch_begin(**kargs)[source]

Ensures the cycle_end flag is false when the epoch starts

Return type

None

plot()[source]

Plots the history of the parameter evolution as a function of iterations

Return type

None

set_nb(nb)[source]

Sets the callback’s internal number of iterations per cycle equal to nb*scale

Parameters

nb (int) – number of minibatches per epoch

Return type

None

class lumin.nn.callbacks.cyclic_callbacks.CycleLR(lr_range, interp='cosine', cycle_mult=1, decrease_param='auto', scale=1, model=None, nb=None, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.cyclic_callbacks.AbsCyclicCallback

Callback to cycle the learning rate during training according to either cosine interpolation for SGDR (https://arxiv.org/abs/1608.03983) or linear interpolation for Smith cycling (https://arxiv.org/abs/1506.01186).

Parameters
  • lr_range (Tuple[float, float]) – tuple of initial and final LRs

  • interp (str) – ‘cosine’ or ‘linear’ interpolation

  • cycle_mult (int) – Multiplicative constant for altering the cycle length after each complete cycle

  • decrease_param (Union[str, bool]) – whether to increase or decrease the LR (effectively reverses lr_range order), ‘auto’ selects according to interp

  • scale (int) – Multiplicative constant for altering the length of a cycle. 1 corresponds to one cycle = one (sub-)epoch

  • model (Optional[AbsModel]) – Model to alter, alternatively call set_model().

  • nb (Optional[int]) – Number of batches in a (sub-)epoch

  • plot_settings (PlotSettings) – PlotSettings class to control figure appearance

Examples::
>>> cosine_lr = CycleLR(lr_range=(0, 2e-3), cycle_mult=2, scale=1,
...                     interp='cosine', nb=100)
>>>
>>> cyclical_lr = CycleLR(lr_range=(2e-4, 2e-3), cycle_mult=1, scale=5,
...                       interp='linear', nb=100)

on_batch_begin(**kargs)[source]

Computes the new lr and assigns it to the optimiser

Return type

None

class lumin.nn.callbacks.cyclic_callbacks.CycleMom(mom_range, interp='cosine', cycle_mult=1, decrease_param='auto', scale=1, model=None, nb=None, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.cyclic_callbacks.AbsCyclicCallback

Callback to cycle momentum (beta 1) during training according to either cosine interpolation for SGDR (https://arxiv.org/abs/1608.03983) or linear interpolation for Smith cycling (https://arxiv.org/abs/1506.01186). By default momentum is set to evolve in the opposite direction to the learning rate, à la https://arxiv.org/abs/1803.09820.

Parameters
  • mom_range (Tuple[float, float]) – tuple of initial and final momenta

  • interp (str) – ‘cosine’ or ‘linear’ interpolation

  • cycle_mult (int) – Multiplicative constant for altering the cycle length after each complete cycle

  • decrease_param (Union[str, bool]) – whether to increase or decrease the momentum (effectively reverses mom_range order), ‘auto’ selects according to interp

  • scale (int) – Multiplicative constant for altering the length of a cycle. 1 corresponds to one cycle = one (sub-)epoch

  • model (Optional[AbsModel]) – Model to alter, alternatively call set_model()

  • nb (Optional[int]) – Number of batches in a (sub-)epoch

  • plot_settings (PlotSettings) – PlotSettings class to control figure appearance

Examples::
>>> cyclical_mom = CycleMom(mom_range=(0.85, 0.95), cycle_mult=1,
...                         scale=5, interp='linear', nb=100)
on_batch_begin(**kargs)[source]

Computes the new momentum and assigns it to the optimiser

Return type

None

class lumin.nn.callbacks.cyclic_callbacks.OneCycle(lengths, lr_range, mom_range=(0.85, 0.95), interp='cosine', model=None, nb=None, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.cyclic_callbacks.AbsCyclicCallback

Callback implementing Smith 1-cycle evolution for lr and momentum (beta_1) (https://arxiv.org/abs/1803.09820). Default interpolation uses a fastai-style cosine function. Automatically triggers early stopping on cycle completion.

Parameters
  • lengths (Tuple[int, int]) – tuple of number of (sub-)epochs in first and second stages of cycle

  • lr_range (List[float]) – tuple of initial and final LRs

  • mom_range (Tuple[float, float]) – tuple of initial and final momenta

  • interp (str) – ‘cosine’ or ‘linear’ interpolation

  • model (Optional[AbsModel]) – Model to alter, alternatively call set_model()

  • nb (Optional[int]) – Number of batches in a (sub-)epoch

  • plot_settings (PlotSettings) – PlotSettings class to control figure appearance

Examples::
>>> onecycle = OneCycle(lengths=(15, 30), lr_range=[1e-4, 1e-2],
...                     mom_range=(0.85, 0.95), interp='cosine', nb=100)
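For intuition, the 1-cycle LR schedule ramps from the lower to the upper bound over the first lengths[0] (sub-)epochs, then anneals back down over the following lengths[1], with momentum moving in the opposite direction. A rough sketch of the LR trajectory under cosine interpolation (illustrative only, not the exact fastai-style schedule):

>>> import numpy as np
>>>
>>> def one_cycle_lr(i, nb=100, lengths=(15, 30), lr_range=(1e-4, 1e-2)):
...     # i: iteration index; nb: iterations per (sub-)epoch
...     n1, n2 = lengths[0] * nb, lengths[1] * nb
...     x = i / n1 if i < n1 else 1 - (i - n1) / n2  # up, then back down
...     lo, hi = lr_range
...     return lo + (hi - lo) * (1 - np.cos(np.pi * x)) / 2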
on_batch_begin(**kargs)[source]

Computes the new lr and momentum and assigns them to the optimiser

Return type

None

plot()[source]

Plots the history of the lr and momentum evolution as a function of iterations

lumin.nn.callbacks.data_callbacks module

class lumin.nn.callbacks.data_callbacks.BinaryLabelSmooth(coefs=0, model=None)[source]

Bases: lumin.nn.callbacks.callback.Callback

Callback for applying label smoothing to binary classes, based on https://arxiv.org/abs/1512.00567. Applies smoothing during both training and inference.

Parameters
  • coefs (Union[float, Tuple[float, float]]) – smoothing coefficients: targets of 0 become coefs[0] and targets of 1 become 1-coefs[1]. If a single float is passed, coefs[0]=coefs[1].

  • model (Optional[AbsModel]) – not used, only for compatibility

Examples::
>>> lbl_smooth = BinaryLabelSmooth(0.1)
>>>
>>> lbl_smooth = BinaryLabelSmooth((0.1, 0.02))
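Concretely, the smoothing transform described above maps hard 0/1 targets just inside the unit interval; a minimal NumPy sketch:

>>> import numpy as np
>>>
>>> coefs = (0.1, 0.02)
>>> targets = np.array([0., 1., 1., 0.])
>>> smoothed = np.where(targets == 0, coefs[0], 1 - coefs[1])  # [0.1, 0.98, 0.98, 0.1]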
on_epoch_begin(by, **kargs)[source]

Apply smoothing at train-time

Return type

None

on_eval_begin(targets, **kargs)[source]

Apply smoothing at test-time

Return type

None

class lumin.nn.callbacks.data_callbacks.SequentialReweight(reweight_func, scale=0.1, model=None)[source]

Bases: lumin.nn.callbacks.callback.Callback

Caution

Experimental procedure

During ensemble training, sequentially reweights the training data in the last validation fold based on the prediction performance of the last trained model. Reweighting highlights, for the next model to be trained, data which are easier or more difficult to predict.

Parameters
  • reweight_func (Callable[[Tensor, Tensor], Tensor]) – callable function returning a tensor of same shape as targets, ideally quantifying model-prediction performance

  • scale (float) – multiplicative factor for rescaling returned tensor of reweight_func

  • model (Optional[AbsModel]) – Model to provide predictions, alternatively call set_model()

Examples::
>>> seq_reweight = SequentialReweight(
...     reweight_func=nn.BCELoss(reduction='none'), scale=0.1)
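Conceptually, each validation sample's weight is scaled up according to how poorly the last model predicted it. A hedged sketch of one plausible multiplicative update rule, under the assumption that harder samples should gain weight (the exact internal rule is not reproduced here):

>>> import torch
>>> from torch import nn
>>>
>>> reweight_func, scale = nn.BCELoss(reduction='none'), 0.1
>>> preds = torch.tensor([0.9, 0.2, 0.6])  # model predictions
>>> targs = torch.tensor([1.0, 1.0, 0.0])  # true labels
>>> weights = torch.ones(3)                # current sample weights
>>> weights *= 1 + scale * reweight_func(preds, targs)  # harder samples gain weight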
on_train_end(fy, val_id, **kargs)[source]

Reweights the validation fold once training is finished

Parameters
  • fy (FoldYielder) – FoldYielder providing the training and validation data

  • val_id – Fold index which was used for validation

Return type

None

class lumin.nn.callbacks.data_callbacks.SequentialReweightClasses(reweight_func, scale=0.1, model=None)[source]

Bases: lumin.nn.callbacks.data_callbacks.SequentialReweight

Caution

Experimental procedure

Version of SequentialReweight designed for classification, which renormalises class weights to the original weight-sum after reweighting. During ensemble training, sequentially reweights the training data in the last validation fold based on the prediction performance of the last trained model. Reweighting highlights, for the next model to be trained, data which are easier or more difficult to predict.

Parameters
  • reweight_func (Callable[[Tensor, Tensor], Tensor]) – callable function returning a tensor of same shape as targets, ideally quantifying model-prediction performance

  • scale (float) – multiplicative factor for rescaling returned tensor of reweight_func

  • model (Optional[AbsModel]) – Model to provide predictions, alternatively call set_model()

Examples::
>>> seq_reweight = SequentialReweightClasses(
...     reweight_func=nn.BCELoss(reduction='none'), scale=0.1)

class lumin.nn.callbacks.data_callbacks.BootstrapResample(n_folds, bag_each_time=False, reweight=True, model=None)[source]

Bases: lumin.nn.callbacks.callback.Callback

Callback for bootstrap sampling new training datasets from original training data during (ensemble) training.

Parameters
  • n_folds (int) – the number of folds present in training FoldYielder

  • bag_each_time (bool) – whether to sample a new set for each sub-epoch or to use the same sample each time

  • reweight (bool) – whether to reweight the sampled data to match the weight sum (per class) of the original data

  • model (Optional[AbsModel]) – not used, only for compatibility

Examples::
>>> bs_resample = BootstrapResample(n_folds=len(train_fy))
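The underlying operation is standard bootstrap sampling: draw N indices with replacement from the N original training points, then optionally rescale the weights so the (per-class) weight sum matches the original. A minimal NumPy sketch of that step with toy data (illustrative; internally the callback operates on the BatchYielder):

>>> import numpy as np
>>>
>>> rng = np.random.default_rng(42)
>>> inputs = np.arange(10, dtype=float).reshape(5, 2)  # toy training inputs
>>> weights = np.ones(5)
>>> idxs = rng.choice(len(inputs), size=len(inputs), replace=True)  # sample with replacement
>>> boot_inputs, boot_weights = inputs[idxs], weights[idxs].copy()
>>> boot_weights *= weights.sum() / boot_weights.sum()  # match original weight sum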
on_epoch_begin(by, **kargs)[source]

Resamples training data for new epoch

Parameters

by (BatchYielder) – BatchYielder providing data for the upcoming epoch

Return type

None

on_train_begin(**kargs)[source]

Resets internal parameters to prepare for a new training

Return type

None

class lumin.nn.callbacks.data_callbacks.FeatureSubsample(cont_feats, model=None)[source]

Bases: lumin.nn.callbacks.callback.Callback

Callback for training a model on a random sub-sample of the range of possible input features. Only sub-samples continuous features. The number of continuous inputs is inferred from the model. The associated Model will automatically mask its inputs during inference; simply provide inputs with the same number of columns as the training data.

Attention

This callback is now deprecated in favour of passing cont_subsample_rate and guaranteed_feats to ModelBuilder, as these offer greater functionality and are compatible with using a MultiBlock body. Will be removed in V0.5.

Caution

This callback is incompatible with using a MultiBlock body

Parameters
  • cont_feats (List[str]) – list of all continuous features in input data. Order must match.

  • model (Optional[AbsModel]) – Model being trained, alternatively call set_model()

Examples::
>>> feat_subsample = FeatureSubsample(cont_feats=['pT', 'eta', 'phi'])
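Internally this amounts to drawing a random subset of continuous-feature indices and masking the rest; a sketch (the subsample size shown is hypothetical, the real value is determined by the callback):

>>> import numpy as np
>>>
>>> cont_feats = ['pT', 'eta', 'phi']
>>> rng = np.random.default_rng(0)
>>> n_keep = 2  # hypothetical subsample size
>>> mask = np.sort(rng.choice(len(cont_feats), n_keep, replace=False))
>>> [cont_feats[i] for i in mask]  # features used for training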
on_train_begin(**kargs)[source]

Subsamples features for use in training and sets model’s input mask for inference

Return type

None

lumin.nn.callbacks.loss_callbacks module

class lumin.nn.callbacks.loss_callbacks.GradClip(clip, clip_norm=True, model=None)[source]

Bases: lumin.nn.callbacks.callback.Callback

Callback for clipping gradients by norm or value.

Parameters
  • clip (float) – value to clip at

  • clip_norm (bool) – whether to clip according to norm (torch.nn.utils.clip_grad_norm_) or value (torch.nn.utils.clip_grad_value_)

  • model (Optional[AbsModel]) – Model with parameters to clip gradients, alternatively call set_model()

Examples::
>>> grad_clip = GradClip(1e-5)
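In effect, on_backwards_end applies one of the two PyTorch utilities named above after each backwards pass; the equivalent manual calls, using a toy model for illustration:

>>> import torch
>>> from torch import nn
>>>
>>> model = nn.Linear(4, 1)  # toy model for illustration
>>> model(torch.randn(8, 4)).sum().backward()
>>> total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 1e-5)  # clip_norm=True
>>> # or, for clip_norm=False:
>>> # torch.nn.utils.clip_grad_value_(model.parameters(), 1e-5)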
on_backwards_end(**kargs)[source]

Clips gradients prior to parameter updates

Return type

None

lumin.nn.callbacks.model_callbacks module

class lumin.nn.callbacks.model_callbacks.SWA(start_epoch, renewal_period=-1, model=None, val_fold=None, cyclic_callback=None, update_on_cycle_end=None, verbose=False, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.model_callbacks.AbsModelCallback

Callback providing Stochastic Weight Averaging based on https://arxiv.org/abs/1803.05407. This adapted version allows the tracking of a pair of average models in order to avoid having to hardcode a specific start point for averaging:

  • Model average x0 will begin to be tracked start_epoch (sub-)epochs/cycles after training begins, and cycle_since_replacement is set to 1.

  • renewal_period (sub-)epochs/cycles later, a second average x1 will begin to be tracked.

  • At the next renewal period, the performance of x0 and x1 will be compared on data contained in val_fold.

    • If x0 is better than x1:
      • x1 is replaced by a copy of the current model

      • cycle_since_replacement is increased by 1

      • renewal_period is multiplied by cycle_since_replacement

    • Else:
      • x0 is replaced by x1

      • x1 is replaced by a copy of the current model

      • cycle_since_replacement is set to 1

      • renewal_period is set back to its original value

Additionally, will optionally (default True) lock in to any cyclical callback in order to only update at the end of a cycle.

Parameters
  • start_epoch (int) – (sub-)epoch/cycle to begin averaging

  • renewal_period (int) – how often to check the performance of the averages and renew tracking of the least performant

  • model (Optional[AbsModel]) – Model to provide parameters, alternatively call set_model()

  • val_fold (Optional[Dict[str, ndarray]]) – Dictionary containing inputs, targets, and weights (or None) as Numpy arrays

  • cyclic_callback (Optional[AbsCyclicCallback]) – Optional for any cyclical callback which is running

  • update_on_cycle_end (Optional[bool]) – whether to lock in to the cyclic callback and only update at the end of a cycle. Defaults to True if a cyclic callback is present.

  • verbose (bool) – Whether to print out update information for testing and operation confirmation

  • plot_settings (PlotSettings) – PlotSettings class to control figure appearance

Examples::
>>> swa = SWA(start_epoch=5, renewal_period=5)
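The core operation is a running average of parameter values; a minimal sketch of the update applied each time a new set of weights is folded into an average, where n is the number of models already averaged:

>>> from torch import nn
>>>
>>> def update_average(avg_params, new_params, n):
...     # avg <- (avg * n + new) / (n + 1), per parameter tensor
...     for avg, new in zip(avg_params, new_params):
...         avg.data.mul_(n).add_(new.data).div_(n + 1)
>>>
>>> swa_model, model = nn.Linear(2, 1), nn.Linear(2, 1)  # toy models
>>> update_average(swa_model.parameters(), model.parameters(), n=3)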
get_loss()[source]

Evaluates SWA model and returns loss

Return type

float

Returns

Loss on validation fold for oldest SWA average

on_epoch_begin(**kargs)[source]

Resets loss to prepare for new epoch

Return type

None

on_epoch_end(**kargs)[source]

Checks whether averages should be updated (or reset) and increments counters

Return type

None

on_train_begin(**kargs)[source]

Initialises model variables to begin tracking new model averages

Return type

None

class lumin.nn.callbacks.model_callbacks.AbsModelCallback(model=None, val_fold=None, cyclic_callback=None, update_on_cycle_end=None, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.callback.Callback

Abstract class for callbacks which provide alternative models during training

Parameters
  • model (Optional[AbsModel]) – Model to provide parameters, alternatively call set_model()

  • val_fold (Optional[Dict[str, ndarray]]) – Dictionary containing inputs, targets, and weights (or None) as Numpy arrays

  • cyclic_callback (Optional[AbsCyclicCallback]) – Optional for any cyclical callback which is running

  • update_on_cycle_end (Optional[bool]) – whether to lock in to the cyclic callback and only update at the end of a cycle. Defaults to True if a cyclic callback is present.

  • plot_settings (PlotSettings) – PlotSettings class to control figure appearance

abstract get_loss()[source]
Return type

float

set_cyclic_callback(cyclic_callback)[source]

Sets the cyclical callback to lock into for updating new models

Return type

None

set_val_fold(val_fold)[source]

Sets the validation fold used for evaluating new models

Return type

None

lumin.nn.callbacks.opt_callbacks module

class lumin.nn.callbacks.opt_callbacks.LRFinder(nb, lr_bounds=[1e-07, 10], model=None, plot_settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]

Bases: lumin.nn.callbacks.callback.Callback

Callback class for Smith learning-rate range test (https://arxiv.org/abs/1803.09820)

Parameters
  • nb (int) – number of batches in a (sub-)epoch

  • lr_bounds (Tuple[float, float]) – tuple of initial and final LR

  • model (Optional[AbsModel]) – Model to alter, alternatively call set_model()

  • plot_settings (PlotSettings) – PlotSettings class to control figure appearance
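The range test increases the LR from lr_bounds[0] to lr_bounds[1] over the nb batches while recording the loss at each step. A common choice, sketched here, is geometric spacing; whether the internal schedule is geometric or linear is an implementation detail:

>>> import numpy as np
>>>
>>> nb, lr_bounds = 100, (1e-7, 10)
>>> lrs = lr_bounds[0] * (lr_bounds[1] / lr_bounds[0]) ** (np.arange(nb) / (nb - 1))
>>> lrs[0], lrs[-1]  # ~ (1e-07, 10.0)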

get_df()[source]

Returns a DataFrame of LRs and losses

Return type

DataFrame

on_batch_end(loss, **kargs)[source]

Records loss and increments LR

Parameters

loss (float) – training loss for most recent batch

Return type

None

on_train_begin(**kargs)[source]

Prepares variables and optimiser for new training

Return type

None

plot(n_skip=0, n_max=None, lim_y=None)[source]

Plot the loss as a function of the LR.

Parameters
  • n_skip (int) – Number of initial iterations to skip in plotting

  • n_max (Optional[int]) – Maximum iteration number to plot

  • lim_y (Optional[Tuple[float, float]]) – y-range for plotting

Return type

None

plot_lr()[source]

Plot the LR as a function of iterations.

Return type

None

Module contents
