A quick tour through UBayFS¶
Introduction¶
The UBayFS package implements the framework proposed in Jenul et al. (2022). UBayFS is an ensemble feature selection technique embedded in a Bayesian statistical framework. The method combines two sources of information: evidence extracted from the data via ensemble feature selection, and knowledge provided by the user. The user can control the feature selection by assigning prior weights to features and by penalizing specific feature combinations. In particular, the user can define a maximum number of selected features and must-link constraints (features must be selected together) or cannot-link constraints (features must not be selected together). A parameter \(\rho\) regulates the shape of a penalty term accounting for side constraints, where feature sets that violate constraints lead to a lower target value.
In this notebook, we use the Breast Cancer Wisconsin dataset for demonstration. The dataset consists of 569 samples and 30 features and describes a classification problem, where the aim is to distinguish between malignant and benign cancer based on image data. The features are derived from 10 image characteristics, where each characteristic is represented by three features (summary statistics) in the dataset. For instance, the characteristic radius is represented by the features radius mean, radius error (standard error), and radius worst.
Requirements and dependencies¶
numpy>=1.23.5
pandas>=1.5.3
scikit-learn>=1.2.2
scipy>=1.10.0
random
sklearn-features>=1.1.0
mrmr>=0.2.6
pygad>=3.0.1
math
To run UBayFS in Python we must import the classes UBaymodel and UBayconstraint.
[1]:
import pandas as pd
import numpy as np
import sys
sys.path.append("../../src/UBayFS")
from UBaymodel import UBaymodel
from UBayconstraint import UBayconstraint
[2]:
data = pd.read_csv("./data/data.csv")
labels = pd.read_csv("./data/labels.csv").replace(("M","B"),(0,1)).astype(int)
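The CSV files contain the Breast Cancer Wisconsin data described above. If they are not available locally, a copy of the same dataset ships with scikit-learn; the following optional sketch loads it from there (note that scikit-learn formats feature names differently, e.g. “mean radius” instead of mean.radius, and the variable names data_alt and labels_alt are only illustrative).
# optional: load the Breast Cancer Wisconsin dataset directly from scikit-learn
from sklearn.datasets import load_breast_cancer
bc = load_breast_cancer(as_frame=True)
data_alt = bc.data                          # 569 x 30 feature matrix (DataFrame)
labels_alt = bc.target.to_frame("label")    # 0 = malignant, 1 = benign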
Background¶
This section summarizes the core parts of UBayFS, where a central part is Bayes’ Theorem for two random variables \(\boldsymbol{\theta}\) and \(\boldsymbol{y}\):
\[p(\boldsymbol{\theta}|\boldsymbol{y}) \propto p(\boldsymbol{y}|\boldsymbol{\theta})\cdot p(\boldsymbol{\theta}),\]
where \(\boldsymbol{\theta}\) represents an importance parameter of single features and \(\boldsymbol{y}\) collects evidence about \(\boldsymbol{\theta}\) from an ensemble of elementary feature selectors. In the following, the concept will be outlined.
Ensemble feature selection as likelihood¶
The first step in UBayFS is to build an ensemble of \(M\) elementary feature selectors. Each elementary feature selector \(m=1,\dots,M\) selects features, denoted by a binary membership vector \(\boldsymbol{\delta}^{(m)} \in \{0,1\}^N\), based on a randomly selected training dataset, where \(N\) denotes the total number of features in the dataset. In the binary membership vector \(\boldsymbol{\delta}^{(m)}\), a component \(\delta_i^{(m)}=1\) indicates that feature \(i\in\{1,\dots,N\}\) is selected, and \(\delta_i^{(m)}=0\) otherwise. Statistically, we interpret the result from each elementary feature selector as a realization from a multinomial distribution with parameters \(\boldsymbol{\theta}\) and \(l\), where \(\boldsymbol{\theta}\in[0,1]^N\) defines the success probabilities of sampling each feature in an individual feature selection and \(l\) corresponds to the number of features selected in \(\boldsymbol{\delta}^{(m)}\). Therefore, the joint probability density of the observed data \(\boldsymbol{y} = \sum\limits_{m=1}^{M}\boldsymbol{\delta}^{(m)}\in\{0,\dots,M\}^N\), i.e., the likelihood function, has the form
\[p(\boldsymbol{y}|\boldsymbol{\theta}) = \prod\limits_{m=1}^{M} f_{\text{mult}}(\boldsymbol{\delta}^{(m)};\boldsymbol{\theta},l),\]
where \(f_{\text{mult}}\) is the probability density function of the multinomial distribution.
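As a small illustration of the data structure behind the likelihood (a toy sketch with made-up selections, independent of the package), the vector \(\boldsymbol{y}\) simply counts how often each feature was selected across the \(M\) elementary feature selectors:
import numpy as np
# toy example: M = 3 elementary feature selectors over N = 5 features,
# each selecting l = 2 features (binary membership vectors delta^(m))
deltas = np.array([[1, 0, 1, 0, 0],
                   [1, 0, 0, 1, 0],
                   [1, 0, 1, 0, 0]])
# y collects the evidence: y_i counts how often feature i was selected
y = deltas.sum(axis=0)   # array([3, 0, 2, 1, 0]), entries in {0, ..., M}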
Expert knowledge as prior¶
UBayFS includes two types of expert knowledge: prior feature weights and feature set constraints.
Prior feature weights¶
To introduce expert knowledge about the importance of features, the user may define a vector \(\boldsymbol{\alpha} = (\alpha_1,\dots,\alpha_N)\), \(\alpha_i>0\) for all \(i=1,\dots,N\), assigning a weight to each feature. High weights indicate that a feature is important. By default, if all features are equally important or no prior weighting is used, \(\boldsymbol{\alpha}\) is set to the 1-vector of length \(N\). With the weighting in place, we assume the a-priori feature importance parameter \(\boldsymbol{\theta}\) follows a Dirichlet distribution [@R:DirichletReg]
\[\boldsymbol{\theta} \sim \text{Dirichlet}(\boldsymbol{\alpha}),\]
where the probability density function of the Dirichlet distribution is given as
\[f(\boldsymbol{\theta}|\boldsymbol{\alpha}) = \frac{1}{\text{B}(\boldsymbol{\alpha})} \prod\limits_{n=1}^{N} \theta_n^{\alpha_n - 1},\]
where \(\text{B}(.)\) denotes the multivariate Beta function. Generalizations of the Dirichlet distribution [@wong:gdirichlet,@hankin:hyperdirichlet] are also implemented in UBayFS.
Since the Dirichlet distribution is the conjugate prior with respect to the multinomial likelihood, the posterior density is given as
\[f(\boldsymbol{\theta}|\boldsymbol{y}) \propto \prod\limits_{n=1}^{N} \theta_n^{\alpha_n + y_n - 1},\]
with
\[\alpha_n^{\circ} = \alpha_n + y_n,\quad n=1,\dots,N,\]
representing the posterior parameter vector \(\boldsymbol{\alpha}^\circ\).
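Due to conjugacy, the posterior parameters are thus obtained by simply adding the ensemble counts to the prior weights. A minimal numeric sketch (continuing the toy counts from above):
import numpy as np
alpha = np.ones(5)                           # uniform prior weights
y = np.array([3, 0, 2, 1, 0])                # ensemble counts from M = 3 selectors
alpha_post = alpha + y                       # posterior Dirichlet parameters
theta_mean = alpha_post / alpha_post.sum()   # posterior expected feature importances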
Feature set constraints¶
In addition to the prior weighting of features, the UBayFS user can also add different types of constraints to the feature selection:
max-size constraint: Maximum number of features that shall be selected.
must-link constraint: For a pair of features, either both or none is selected (defined as pairwise constraints, one for each pair of features).
cannot-link constraint: Used if a pair of features must not be selected jointly.
All constraints can be defined block-wise between feature blocks (instead of individual features). Constraints are represented as a system of linear inequalities \(\boldsymbol{A}\boldsymbol{\delta}-\boldsymbol{b}\leq \boldsymbol{0}\), where \(\boldsymbol{A}\in\mathbb{R}^{K\times N}\) and \(\boldsymbol{b}\in\mathbb{R}^K\), and \(K\) denotes the total number of constraints. For constraint \(k \in \{1,\dots,K\}\), a feature set \(\boldsymbol{\delta}\) is admissible only if \(\left(\boldsymbol{a}^{(k)}\right)^T\boldsymbol{\delta} - b^{(k)} \leq 0\), leading to the inadmissibility function (penalty term)
\[\kappa_{k,\rho}(\boldsymbol{\delta}) = \left\{\begin{array}{ll} 0 & \text{if}~\left(\boldsymbol{a}^{(k)}\right)^T\boldsymbol{\delta}\leq b^{(k)},\\ 1 & \text{if}~\left(\boldsymbol{a}^{(k)}\right)^T\boldsymbol{\delta} > b^{(k)} \wedge \rho=\infty,\\ \frac{1-\xi_{k,\rho}}{1+\xi_{k,\rho}} & \text{otherwise},\end{array}\right.\]
where \(\rho\in\mathbb{R}^+ \cup \{\infty\}\) denotes a relaxation parameter and \(\xi_{k,\rho} = \exp\left(-\rho \left(\left( \boldsymbol{a}^{(k)}\right)^T\boldsymbol{\delta} - b^{(k)}\right)\right)\) defines the exponential term of a logistic function. To handle \(K\) different constraints for one feature selection problem, the joint inadmissibility function is given as
\[\kappa(\boldsymbol{\delta}) = 1 - \prod\limits_{k=1}^{K}\left(1-\kappa_{k,\rho}(\boldsymbol{\delta})\right),\]
which originates from the idea that \(\kappa = 1\) (maximum penalization) if at least one \(\kappa_{k,\rho}=1\), while \(\kappa=0\) (no penalization) if all \(\kappa_{k,\rho}=0\).
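The following sketch (plain numpy, independent of the package, and using the piecewise form of \(\kappa_{k,\rho}\) given above) shows how a max-size and a cannot-link constraint translate into rows of \(\boldsymbol{A}\) and \(\boldsymbol{b}\), and how the penalty reacts to a violation:
import numpy as np
N = 5
# max-size: select at most 2 features    ->  1^T delta - 2 <= 0
# cannot-link between features 0 and 2   ->  delta_0 + delta_2 - 1 <= 0
A = np.array([[1, 1, 1, 1, 1],
              [1, 0, 1, 0, 0]], dtype=float)
b = np.array([2, 1], dtype=float)
rho = np.array([np.inf, 1.0])

def kappa_single(a, b_k, rho_k, delta):
    # penalty for one constraint: 0 if satisfied, 1 if violated and rho = inf,
    # a logistic-shaped value in (0, 1) otherwise
    z = a @ delta - b_k
    if z <= 0:
        return 0.0
    if np.isinf(rho_k):
        return 1.0
    xi = np.exp(-rho_k * z)
    return (1 - xi) / (1 + xi)

delta = np.array([1, 0, 1, 0, 0])   # violates the cannot-link constraint
kappas = [kappa_single(A[k], b[k], rho[k], delta) for k in range(len(b))]
kappa_joint = 1 - np.prod([1.0 - kk for kk in kappas])   # joint inadmissibility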
To obtain an optimal feature set \(\boldsymbol{\delta}^\star\), we use a target function \(U(\boldsymbol{\delta}, \boldsymbol{\theta})\) which represents a posterior expected utility of feature sets \(\boldsymbol{\delta}\) given the posterior feature importance parameter \(\boldsymbol{\theta}\), regularized by the inadmissibility function \(\kappa(.)\).
Since an exact optimization is computationally intractable due to the discrete search space and the non-linear function \(\kappa\), we use a genetic algorithm to find an appropriate feature set. In detail, the genetic algorithm is initialized via a greedy algorithm and, in each iteration, recombines candidate feature sets and evaluates them with respect to a fitness function.
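Conceptually, the fitness of a candidate feature set trades its cumulated posterior importance against the constraint penalty. A simplified, illustrative sketch of such a regularized score is given below (the exact target function and the pygad-based genetic algorithm are handled internally by UBayFS; lam is only an illustrative penalty weight):
import numpy as np
def regularized_utility(delta, theta, kappa_value, lam=1.0):
    # summed importance of the selected features minus a penalty proportional
    # to the joint inadmissibility kappa(delta)
    return delta @ theta - lam * kappa_value

theta = np.array([4, 1, 3, 2, 1]) / 11   # posterior expected importances from the sketch above
delta = np.array([1, 0, 1, 0, 0])
score = regularized_utility(delta, theta, kappa_value=0.46)   # 0.46: joint inadmissibility of delta from above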
Application of UBayFS¶
Ensemble Training¶
The class UBaymodel() initializes the UBayFS model and trains an ensemble of elementary feature selectors. The training dataset and target are provided via the arguments data and target. Although the UBayFS concept permits unsupervised, multiclass, or regression setups, the current implementation supports binary classification and regression targets only. While M defines the ensemble size (number of elementary feature selectors), the type of the elementary feature selectors is set via method. The mRMR feature selector is implemented as a baseline and can be called directly with “mrmr”. In general, the method argument accepts any self-implemented feature selection function with the arguments X (numpy array describing the data), y (numpy array describing the target), and n (the number of features that shall be selected). The function must return the indices of the selected features. Some examples are shown below.
Each elementary feature selector is trained on a random subset comprising tt_split \(\cdot 100\) percent of the training data. Using the argument prior_model, the user specifies whether the standard Dirichlet distribution or a generalized variant should be used as prior model. Furthermore, the number of features selected by each elementary feature selector can be controlled by the parameter nr_features.
The class UBayconstraint provides an easy way to define side constraints for the model. The generated object can be added to the UBaymodel by assigning it to the argument constraints, which is None by default.
For the standard UBayFS initialization, all prior feature weights are set to 0.01, and only the required max_size constraint is included. Ensemble counts indicate how often a feature was selected across the elementary feature selections.
[3]:
model = UBaymodel(data = data,
                  target = labels,
                  feat_names = data.columns,
                  M = 100,
                  tt_split = 0.75,
                  nr_features = 10,
                  method = ["mrmr"],
                  weights = [0.01],
                  l = 1,
                  constraints = UBayconstraint(rho=np.array([1]),
                                               constraint_types=["max_size"],
                                               constraint_vars=[3],
                                               num_elements=data.shape[1]),
                  optim_method = "GA",
                  popsize = 100,
                  maxiter = 100,
                  random_state = 0)
[4]:
np.shape(model.getWeights())
[4]:
(30,)
[5]:
pd.DataFrame(np.stack([np.sum(model.ensemble_matrix, axis=0), model.getWeights()], axis=0),
columns = model.feat_names, index = ["ensemble count", "prior weight"])
[5]:
 | mean.radius | mean.texture | mean.perimeter | mean.area | mean.smoothness | mean.compactness | mean.concavity | mean.concave.points | mean.symmetry | mean.fractal.dimension | ... | worst.radius | worst.texture | worst.perimeter | worst.area | worst.smoothness | worst.compactness | worst.concavity | worst.concave.points | worst.symmetry | worst.fractal.dimension
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ensemble count | 100 | 0 | 100 | 100 | 0 | 0 | 100 | 100 | 0 | 0 | ... | 100 | 0 | 100 | 100 | 0 | 0 | 98 | 100 | 0 | 0 |
prior weight | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | ... | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
2 rows × 30 columns
In addition to mrmr, we add a function decision_tree(), which selects features based on decision tree importances.
[6]:
from sklearn.tree import DecisionTreeClassifier
def decision_tree(X,y,n):
    # fit a decision tree and rank the features by their (unnormalized) importances
    clf = DecisionTreeClassifier()
    clf.fit(X, y)
    feat_importance = clf.tree_.compute_feature_importances(normalize=False)
    # indices of the n most important features
    return np.flip(np.argsort(feat_importance))[:n]
[7]:
model = UBaymodel(data = data,
                  target = labels,
                  feat_names = data.columns,
                  M = 100,
                  tt_split = 0.75,
                  nr_features = 10,
                  method = ["mrmr", decision_tree],
                  weights = [0.01],
                  l = 1,
                  constraints = UBayconstraint(rho=np.array([1]),
                                               constraint_types=["max_size"],
                                               constraint_vars=[3],
                                               num_elements=data.shape[1]),
                  optim_method = "GA",
                  popsize = 100,
                  maxiter = 100,
                  random_state = 0)
Examples of further feature selection methods are:
[8]:
# mrmr (already implemented as a baseline - this code just illustrates the concept of using custom functions)
import mrmr
def mrmr_fs(X,y,n):
    # mrmr_classif returns the n selected column indices of the DataFrame
    ranks = mrmr.mrmr_classif(pd.DataFrame(X), y, n, show_progress=False)
    return ranks

# recursive feature elimination with logistic regression classifier
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
def rfe(X,y,n):
    clf = LogisticRegression()
    cv = StratifiedKFold(5)
    rfecv = RFECV(
        estimator=clf,
        step=1,
        cv=cv,
        scoring="accuracy",
        min_features_to_select=n,
    )
    rfecv.fit(X, y)
    # note: RFECV keeps the cross-validated optimum and may return more than n features
    return np.where(rfecv.ranking_==1)[0]

# RENT feature selection with default setup - n is not used in this method
from RENT import RENT
def RENT_fs(X,y,n):
    my_C_params = [0.1, 1]
    my_l1_ratios = [0.5, 0.9]
    model = RENT.RENT_Classification(data=pd.DataFrame(X),
                                     target=y,
                                     C=my_C_params,
                                     l1_ratios=my_l1_ratios)
    model.train()
    selected_features = model.select_features()
    return selected_features
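A further option (a sketch based on scikit-learn's univariate ANOVA F-test, not part of the examples shipped with the package) follows the same interface:
# univariate filter based on ANOVA F-scores
import numpy as np
from sklearn.feature_selection import f_classif
def anova_fs(X,y,n):
    f_scores, _ = f_classif(X, np.ravel(y))
    # indices of the n features with the highest F-score
    return np.flip(np.argsort(f_scores))[:n]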
User knowledge¶
Using the function setWeights(), the user is able to change the feature weights from the standard initialization to desired values. In our example, we assign equal weights to features originating from the same image characteristic. Weights can be specified on an arbitrary scale. As it is difficult to specify prior weights in real-life applications, we suggest defining them on a normalized scale.
[9]:
weights = np.tile(np.array([10,15,20,16,15,10,12,17,21,14]),3)
strength = 1
weights = weights * strength / np.sum(weights)
model.setWeights(weights)
In addition to prior weights, feature set constraints may be specified. Internally, constraints are implemented via the class UBayconstraint. The input rho corresponds to the relaxation parameter of the inadmissibility function. Further, constraint_types is a list defining the types of all constraints. With constraint_vars, the user specifies the details of each constraint: for max-size, the number of features to select is provided, while for must-link and cannot-link, the list of feature indices to be linked is provided. Each list entry corresponds to one constraint in constraint_types. For block constraints, information about the block structure is provided either with block_list or block_matrix; if both arguments are None, feature-wise constraints are generated.
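For illustration, a block-wise max-size constraint could hypothetically be specified as sketched below, assuming that block_list expects one list of feature indices per block and that constraint_vars is then interpreted on block level (ten blocks, one per image characteristic, of which at most three may be selected); please consult the UBayconstraint documentation for the exact semantics.
# hypothetical block-wise max-size constraint: at most 3 of the 10 image
# characteristics, where each block links the mean, error, and worst statistic
blocks = [[i, i + 10, i + 20] for i in range(10)]
block_constraint = UBayconstraint(rho=np.array([1]),
                                  constraint_types=["max_size"],
                                  constraint_vars=[3],
                                  num_elements=data.shape[1],
                                  block_list=blocks)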
Applying get_constraints() shows that the matrix A has nine rows representing the four constraints. While the max-size and each cannot-link constraint can be expressed in a single inequality, must-link is a pairwise constraint. Specifically, a must-link constraint between \(n\) features produces \(\frac{n!}{(n-2)!}\) elementary constraints; hence, six inequalities represent the must-link constraint between three features. The UBaymodel function setConstraints() integrates the constraints into the UBayFS model. With the argument append=True, the constraints are added to the existing constraints; otherwise, the existing constraints are overwritten. In this case, we define a new max_size constraint and overwrite the old one.
[10]:
constraints = UBayconstraint(rho=np.array([np.Inf, 0.1, 1, 1]),
                             constraint_types=["max_size", "must_link", "cannot_link", "cannot_link"],
                             constraint_vars=[10, [0,10,20], [0,9], [19,22,23]],
                             num_elements=data.shape[1])
[11]:
model.setConstraints(constraints, append=False)
[12]:
constraints.get_constraints()
[12]:
{'A': array([[ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., -1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., -1., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[-1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., -1., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[-1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., -1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0.,
0., 0., 0., 0.]]),
'b': array([10., 0., 0., 0., 0., 0., 0., 1., 1.]),
'rho': array([inf, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 1. , 1. ]),
'block_matrix': array([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])}
Optimization and evaluation¶
A genetic algorithm, described by [@givens:compstat], searches for the optimal feature set in the UBayFS framework. Using setOptim(), we update the parameters of the genetic algorithm: popsize indicates the number of candidate feature sets created in each iteration, and maxiter is the number of iterations.
[13]:
model.setOptim(optim_method="GA",
               popsize=100,
               maxiter=200)
At this point, we have initialized prior weights, constraints, and the optimization procedure. We can now train the UBayFS model using the function train(), which relies on a genetic algorithm.
[14]:
result = model.train()
[15]:
result[1]
[15]:
['mean.radius',
'mean.area',
'mean.concave.points',
'radius.error',
'area.error',
'worst.radius',
'worst.area',
'worst.concavity',
'worst.concave.points',
'worst.fractal.dimension']
After training the model, we receive a feature selection result. The final feature set and its additional properties can be evaluated with evaluateFS():
[16]:
model.evaluateFS(result[0].values[:,0])
[16]:
{'cardinality': 10,
'total utility': 0.567,
'posterior feature utility': 0.567,
'admissibility': 1.0,
'number of violated constraints': 0,
'average feature correlation': 0.643}
The output contains the following information:
cardinality: number of selected features
total utility: value of the target function used for optimization
posterior feature utility: cumulated importances of the selected features before subtracting the constraint penalty
admissibility: equals 1 if all constraints are fulfilled, otherwise at least one constraint is violated
number of violated constraints: number of violated constraints
average feature correlation: average pairwise correlation among the selected features
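A comparable correlation value can be computed by hand from the selected features (a sketch reusing data and result from above; the package-internal computation of 0.643 may differ in detail, e.g. regarding absolute values):
import numpy as np
# average pairwise correlation among the selected features
selected = data[result[1]]
corr = selected.corr().abs().values
avg_corr = corr[np.triu_indices_from(corr, k=1)].mean()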