lumin.nn.interpretation package¶
Submodules¶
lumin.nn.interpretation.features module¶
-
lumin.nn.interpretation.features.get_nn_feat_importance(model, fy, eval_metric=None, pb_parent=None, plot=True, savename=None, settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]¶
Compute permutation importance of features used by a Model on provided data, using either the loss or an EvalMetric to quantify performance. Returns the bootstrapped mean importance from a sample constructed by computing the importance for each fold in fy.
- Parameters
model (AbsModel) – Model to use to evaluate feature importance
fy (FoldYielder) – FoldYielder interfacing to data used to train the model
eval_metric (Optional[EvalMetric]) – Optional EvalMetric to use to quantify performance in place of the loss
pb_parent (Optional[ConsoleMasterBar]) – Not used if calling the method directly
plot (bool) – Whether to plot the resulting feature importances
savename (Optional[str]) – Optional name of file to which to save the plot of feature importances
settings (PlotSettings) – PlotSettings class to control figure appearance
- Return type
DataFrame
- Returns
Pandas DataFrame containing mean importance and associated uncertainty for each feature
- Examples::
>>> fi = get_nn_feat_importance(model, train_fy)
>>>
>>> fi = get_nn_feat_importance(model, train_fy, savename='feat_import')
>>>
>>> fi = get_nn_feat_importance(model, train_fy,
...                             eval_metric=AMS(n_total=100000))
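The permutation-importance procedure described above can be sketched generically, without lumin: shuffle one feature column at a time and record the resulting increase in loss relative to the unshuffled baseline. The names below (`permutation_importance`, `predict`, `loss_fn`) and the toy linear data are hypothetical stand-ins for illustration, not lumin API.

```python
import numpy as np

def permutation_importance(predict, x, y, loss_fn, seed=None):
    """Estimate per-feature importance by shuffling one column at a time
    and measuring the resulting increase in loss."""
    rng = np.random.default_rng(seed)
    base = loss_fn(predict(x), y)  # reference loss on unshuffled data
    imps = []
    for i in range(x.shape[1]):
        x_perm = x.copy()
        # Break the feature-target relationship for column i only
        x_perm[:, i] = rng.permutation(x_perm[:, i])
        imps.append(loss_fn(predict(x_perm), y) - base)
    return np.array(imps)

# Toy check: a "model" that only uses feature 0
x = np.random.default_rng(0).normal(size=(1000, 3))
y = 2 * x[:, 0]
predict = lambda a: 2 * a[:, 0]
mse = lambda p, t: float(np.mean((p - t) ** 2))
imp = permutation_importance(predict, x, y, mse, seed=0)
# imp[0] is large; imp[1] and imp[2] are exactly zero, since
# shuffling an unused feature leaves the predictions unchanged.
```

lumin's version additionally repeats this per fold of the FoldYielder and bootstraps the mean, yielding an uncertainty alongside each importance.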
-
lumin.nn.interpretation.features.get_ensemble_feat_importance(ensemble, fy, eval_metric=None, savename=None, settings=<lumin.plotting.plot_settings.PlotSettings object>)[source]¶
Compute permutation importance of features used by an Ensemble on provided data, using either the loss or an EvalMetric to quantify performance. Returns the bootstrapped mean importance from a sample constructed by computing the importance for each Model in the ensemble.
- Parameters
ensemble (AbsEnsemble) – Ensemble to use to evaluate feature importance
fy (FoldYielder) – FoldYielder interfacing to data used to train the models in the ensemble
eval_metric (Optional[EvalMetric]) – Optional EvalMetric to use to quantify performance in place of the loss
savename (Optional[str]) – Optional name of file to which to save the plot of feature importances
settings (PlotSettings) – PlotSettings class to control figure appearance
- Return type
DataFrame
- Returns
Pandas DataFrame containing mean importance and associated uncertainty for each feature
- Examples::
>>> fi = get_ensemble_feat_importance(ensemble, train_fy)
>>>
>>> fi = get_ensemble_feat_importance(ensemble, train_fy,
...                                   savename='feat_import')
>>>
>>> fi = get_ensemble_feat_importance(ensemble, train_fy,
...                                   eval_metric=AMS(n_total=100000))
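The aggregation step (a mean importance plus an associated uncertainty per feature, as in the returned DataFrame) can be sketched as follows. The function name, column names, and the per-model importance values here are assumptions for illustration; lumin's actual column layout may differ.

```python
import numpy as np
import pandas as pd

def aggregate_importances(per_model_imps, feat_names):
    """Combine per-model permutation importances into a mean and an
    uncertainty (standard error of the mean) for each feature."""
    arr = np.asarray(per_model_imps)  # shape: (n_models, n_feats)
    return pd.DataFrame({
        'Feature': feat_names,
        'Importance': arr.mean(axis=0),  # mean over ensemble members
        'Uncertainty': arr.std(axis=0, ddof=1) / np.sqrt(len(arr)),
    })

# Three hypothetical models, two features
imps = [[0.9, 0.1], [1.1, 0.2], [1.0, 0.15]]
df = aggregate_importances(imps, ['feat_a', 'feat_b'])
```

Averaging over ensemble members, rather than folds, is what distinguishes this sample from the per-fold one used by get_nn_feat_importance.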