lumin.nn.interpretation package

Submodules

lumin.nn.interpretation.features module
lumin.nn.interpretation.features.get_nn_feat_importance(model, fy, bs=None, eval_metric=None, pb_parent=None, plot=True, savename=None, settings=<lumin.plotting.plot_settings.PlotSettings object>)

Compute the permutation importance of the features used by a Model on the provided data, using either the loss or an EvalMetric to quantify performance. Returns the bootstrapped mean importance from a sample constructed by computing the importance for each fold in fy.

- Parameters
  - model (AbsModel) – Model to use to evaluate feature importance
  - fy (FoldYielder) – FoldYielder interfacing to the data used to train the model
  - bs (Optional[int]) – if set, will evaluate the model in batches of data, rather than all at once
  - eval_metric (Optional[EvalMetric]) – optional EvalMetric to use to quantify performance in place of the loss
  - pb_parent (Optional[ConsoleMasterBar]) – not used if calling the method directly
  - plot (bool) – whether to plot the resulting feature importances
  - savename (Optional[str]) – optional name of file to which to save the plot of feature importances
  - settings (PlotSettings) – PlotSettings class to control figure appearance
- Return type
  DataFrame
- Returns
  Pandas DataFrame containing the mean importance and associated uncertainty for each feature
- Examples:
  >>> fi = get_nn_feat_importance(model, train_fy)
  >>>
  >>> fi = get_nn_feat_importance(model, train_fy, savename='feat_import')
  >>>
  >>> fi = get_nn_feat_importance(model, train_fy,
  ...                             eval_metric=AMS(n_total=100000))
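To illustrate what permutation importance measures, here is a minimal, library-agnostic sketch: a feature's importance is the mean increase in loss when that feature's column is randomly shuffled. The `predict` and `loss` callables, the array layout, and the `n_repeats` parameter are illustrative assumptions for this sketch, not the LUMIN implementation (which additionally bootstraps over folds).

```python
import numpy as np

def permutation_importance(predict, loss, X, y, n_repeats=5, seed=None):
    """Return (mean, std) of the loss increase per shuffled feature column.

    Hypothetical helper for illustration only; not part of the LUMIN API.
    """
    rng = np.random.default_rng(seed)
    base = loss(y, predict(X))  # reference loss on unshuffled data
    imps = np.empty((X.shape[1], n_repeats))
    for j in range(X.shape[1]):
        for r in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            imps[j, r] = loss(y, predict(Xp)) - base
    return imps.mean(axis=1), imps.std(axis=1)

# Toy check: only feature 0 drives the target, so only it should matter
X = np.random.default_rng(0).normal(size=(500, 3))
y = 2.0 * X[:, 0]
predict = lambda X: 2.0 * X[:, 0]          # stand-in "model"
mse = lambda y, p: np.mean((y - p) ** 2)   # stand-in loss
mean_imp, std_imp = permutation_importance(predict, mse, X, y, seed=1)
```

An important feature yields a large positive loss increase when shuffled; an unused feature yields an increase near zero.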
lumin.nn.interpretation.features.get_ensemble_feat_importance(ensemble, fy, bs=None, eval_metric=None, savename=None, settings=<lumin.plotting.plot_settings.PlotSettings object>)

Compute the permutation importance of the features used by an Ensemble on the provided data, using either the loss or an EvalMetric to quantify performance. Returns the bootstrapped mean importance from a sample constructed by computing the importance for each Model in the ensemble.

- Parameters
  - ensemble (AbsEnsemble) – Ensemble to use to evaluate feature importance
  - fy (FoldYielder) – FoldYielder interfacing to the data used to train the models in the ensemble
  - bs (Optional[int]) – if set, will evaluate the models in batches of data, rather than all at once
  - eval_metric (Optional[EvalMetric]) – optional EvalMetric to use to quantify performance in place of the loss
  - savename (Optional[str]) – optional name of file to which to save the plot of feature importances
  - settings (PlotSettings) – PlotSettings class to control figure appearance
- Return type
  DataFrame
- Returns
  Pandas DataFrame containing the mean importance and associated uncertainty for each feature
- Examples:
  >>> fi = get_ensemble_feat_importance(ensemble, train_fy)
  >>>
  >>> fi = get_ensemble_feat_importance(ensemble, train_fy,
  ...                                   savename='feat_import')
  >>>
  >>> fi = get_ensemble_feat_importance(ensemble, train_fy,
  ...                                   eval_metric=AMS(n_total=100000))
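The ensemble version aggregates a per-model importance sample into a mean and an uncertainty. The sketch below shows one way such a bootstrap aggregation can work; the `per_model_imps` array and the resampling scheme are illustrative assumptions, not the LUMIN implementation.

```python
import numpy as np

# Hypothetical per-model importances: one row per Model in the ensemble,
# one column per feature (stand-in data, not produced by the LUMIN API)
per_model_imps = np.array([
    [0.9,  0.10, 0.00],   # model 0
    [1.1,  0.20, 0.00],   # model 1
    [1.0,  0.15, 0.05],   # model 2
])

rng = np.random.default_rng(0)
n_boot = 200
n_models = len(per_model_imps)

# Resample models with replacement and average their importances,
# giving a bootstrap distribution of the ensemble-level importance
boot_means = np.array([
    per_model_imps[rng.integers(0, n_models, n_models)].mean(axis=0)
    for _ in range(n_boot)
])
mean_imp = boot_means.mean(axis=0)  # bootstrapped mean importance per feature
unc_imp = boot_means.std(axis=0)    # associated uncertainty per feature
```

The resulting mean and uncertainty correspond to the two quantities reported per feature in the returned DataFrame.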