The package `afs` contains several methods for alternative feature selection.
This document provides:
- Steps for setting up the package.
- A short overview of the (feature-selection) functionality.
- A demo of the functionality.
- Guidelines for developers who want to modify or extend the code base.
If you use this package for a scientific publication, please cite our paper:

```bibtex
@article{bach2024alternative,
  title={Alternative feature selection with user control},
  author={Bach, Jakob and B{\"o}hm, Klemens},
  journal={International Journal of Data Science and Analytics},
  year={2024},
  doi={10.1007/s41060-024-00527-8}
}
```
You can directly install this package from GitHub:

```bash
python -m pip install git+https://github.com/Jakob-Bach/Alternative-Feature-Selection.git#subdirectory=afs_package
```
If you already have the source code for the package (i.e., the directory in which this README resides) as a local directory on your computer (e.g., after cloning the project), you can also perform a local install:

```bash
python -m pip install afs_package/
```
`afs.py` contains six feature-selection methods as classes:

- `FCBFSelector`: (adapted version of) FCBF, a multivariate filter method
- `GreedyWrapperSelector`: a wrapper method (by default, using a decision tree as prediction model)
- `ManualUnivariateQualitySelector`: a univariate filter method where you can enter each feature's utility directly (instead of computing it from a dataset)
- `MISelector`: a univariate filter method based on mutual information
- `ModelImportanceSelector`: a univariate filter method using feature importances from a prediction model (by default, a decision tree)
- `MRMRSelector`: mRMR, a multivariate filter method
Additionally, there are the following abstract superclasses:

- `AlternativeFeatureSelector`: highest superclass; defines solver, constraints for alternatives, and sequential/simultaneous search
- `LinearQualityFeatureSelector`: superclass for feature-selection methods with a linear objective
- `WhiteBoxFeatureSelector`: superclass for feature-selection methods with a white-box objective, i.e., optimizing purely with a solver rather than using the solver in an algorithmic search routine
All feature-selection methods support sequential and simultaneous search for alternatives, as demonstrated next.
Running alternative feature selection requires only three steps:

1. Create the feature selector (our code contains six different ones).
2. Set the dataset (`set_data()`):
   - Four parameters: the feature part and the prediction target are passed separately, as are the training data and the test data (train-test split).
   - Data types: `DataFrame` (feature parts) and `Series` (targets) from `pandas`.
3. Run the search for alternatives:
   - The method name (`search_sequentially()` / `search_simultaneously()`) determines whether a sequential or a simultaneous search is run. `LinearQualityFeatureSelector`s (like "MI" and model-based importance) also support the heuristic procedures `search_greedy_replacement()` and `search_greedy_balancing()`, which are described in the appendix of the arXiv paper.
   - `k` determines the number of features to be selected.
   - `num_alternatives` determines ... you can guess what.
   - `tau_abs` determines by how many features the feature sets should differ. You can also provide a relative value (from the interval `[0,1]`) via `tau` and change the dissimilarity `d_name` to `'jaccard'` (default is `'dice'`).
   - `objective_agg` switches between min-aggregation and sum-aggregation in simultaneous search. It has no effect in sequential search (which optimizes only one feature set at a time, so there is no need to aggregate feature-set quality over feature sets).
```python
import afs
import sklearn.datasets
import sklearn.model_selection

dataset = sklearn.datasets.load_iris(as_frame=True)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(
    dataset['data'], dataset['target'], train_size=0.8, random_state=25)
feature_selector = afs.MISelector()
feature_selector.set_data(X_train=X_train, X_test=X_test, y_train=y_train, y_test=y_test)
search_result = feature_selector.search_sequentially(k=3, num_alternatives=5, tau_abs=1)
print(search_result.drop(columns='optimization_time').round(2))
```
The search result is a `DataFrame` containing the indices of the selected features (which can be used to subset the columns of `X`, as sketched below), objective values on the training set and the test set, the optimization status, and the optimization time:
```
  selected_idxs  train_objective  test_objective  optimization_status
0     [0, 2, 3]             0.91            0.89                    0
1     [1, 2, 3]             0.83            0.78                    0
2     [0, 1, 3]             0.64            0.65                    0
3     [0, 1, 2]             0.62            0.68                    0
4            []              NaN             NaN                    2
5            []              NaN             NaN                    2
```
The search procedure ran out of features here, as the `iris` dataset only has four features.
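For instance, here is a minimal sketch of using the first row's `selected_idxs` to subset the feature columns (assuming, as in the demo above, that the indices refer to column positions in `X_train`):

```python
first_selection = search_result['selected_idxs'].iloc[0]  # e.g., [0, 2, 3]
X_train_selected = X_train.iloc[:, first_selection]  # keep only the selected feature columns
```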
The optimization statuses are:

- 0: `Optimal` (optimal solution found)
- 1: `Feasible` (a valid solution was found before the timeout, but it may not be optimal)
- 2: `Infeasible` (there is no valid solution)
- 6: `Not solved` (no valid solution was found before the timeout, but there may be one)
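Simultaneous search uses the same parameters. The following sketch reuses the feature selector from above; treating `'min'` as the `objective_agg` value for min-aggregation is an assumption, so check the method's documentation:

```python
# Search for an original feature set plus two alternatives at once, maximizing the
# minimum quality over the three feature sets (min-aggregation).
search_result = feature_selector.search_simultaneously(k=3, num_alternatives=2, tau_abs=1,
                                                       objective_agg='min')
print(search_result.drop(columns='optimization_time').round(2))
```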
If you don't want to provide a dataset but rather use manually defined univariate feature qualities (which result in the same optimization problem as "MI" and model importance), you can do so as well:
```python
import afs

feature_selector = afs.ManualQualityUnivariateSelector()
feature_selector.set_data(q_train=[1, 2, 3, 7, 8, 9])
search_result = feature_selector.search_sequentially(k=3, num_alternatives=3, tau_abs=2)
print(search_result.drop(columns='optimization_time').round(2))
```
`AlternativeFeatureSelector` is the topmost abstract superclass.
It contains code for solver handling, the dissimilarity-based definition of alternatives, and the two search procedures, i.e., sequential as well as simultaneous (sum-aggregation and min-aggregation).
To define a new feature-selection method, you should create a subclass of `AlternativeFeatureSelector`.
In particular, you need to define how to solve the optimization problem of alternative feature selection by overriding the abstract method `select_and_evaluate()`.
To this end, you may want to define the optimization problem (objective function, which expresses feature-set quality, and maybe further constraints) by overriding `initialize_solver()`.
You should also call the original implementation of this method via `super().initialize_solver()` so as not to override general initialization steps (solver configuration, cardinality constraints).
The sequential and simultaneous search procedures for alternatives implemented in `AlternativeFeatureSelector` basically add further constraints (for alternatives) to the optimization problem and call `select_and_evaluate()`.
Thus, if the latter method is implemented properly, you do not need to override the search procedures, as they should work as-is in new subclasses as well.
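As a rough structural sketch (the class name is made up, and the exact signatures of the overridden methods are assumptions, so check the abstract method definitions in `afs.py`):

```python
import afs


class MyCustomSelector(afs.AlternativeFeatureSelector):
    def initialize_solver(self):
        super().initialize_solver()  # keep general initialization (solver configuration, cardinality constraints)
        # ... add an objective expressing feature-set quality and maybe further constraints here ...

    def select_and_evaluate(self):
        # ... solve the optimization problem of alternative feature selection and return
        # the selected feature set(s) with their evaluation ...
        raise NotImplementedError('This is only a sketch.')
```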
There are further abstract superclasses extracting commonalities between feature-selection methods:

- `WhiteBoxFeatureSelector` is a good starting point if you want to optimize your objective with a solver (rather than only using the solver to check constraints while optimizing a black-box objective separately, like Greedy Wrapper does). When creating a subclass, you need to define the white-box objective by overriding the abstract method `create_objectives()` (define objectives separately for the training set and the test set, as they may use different constants for feature qualities). `select_and_evaluate()` and `initialize_solver()` need not be overridden in your subclass anymore.
- `LinearQualityFeatureSelector` is a good starting point if your objective is a plain sum of feature qualities. When creating a subclass, you need to provide these qualities by overriding the abstract method `compute_qualities()`. `select_and_evaluate()` and `initialize_solver()` need not be overridden in your subclass anymore.
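For instance, a hypothetical `LinearQualityFeatureSelector` subclass whose feature qualities are absolute Pearson correlations with the target might look roughly like this (the signature of `compute_qualities()` is an assumption, so check its definition in `afs.py`):

```python
import afs


class AbsoluteCorrelationSelector(afs.LinearQualityFeatureSelector):
    def compute_qualities(self, X, y):
        # One non-negative quality per feature: absolute Pearson correlation with the target.
        return X.corrwith(y).abs().tolist()
```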