Review
J Biomed Inform. 2018 Sep;85:189-203.
doi: 10.1016/j.jbi.2018.07.014. Epub 2018 Jul 18.

Relief-based Feature Selection: Introduction and Review

Free PMC article

Ryan J Urbanowicz et al. J Biomed Inform.

Abstract

Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.

Keywords: Epistasis; Feature interaction; Feature selection; Feature weighting; Filter; ReliefF.

Figures

Figure 1:
Typical stages of a data mining analysis pipeline. Feature selection is starred as it is the focus of this review. The dotted line indicates how model performance can be fed back into feature processing, iteratively removing irrelevant features or seeking to construct relevant ones.
Figure 2:
Relief updating W[A] for a given target instance when it is compared to its nearest miss and nearest hit. In this example, features are discrete with possible values of X, Y, or Z, and the endpoint is binary with a value of 0 or 1. Notice that when the value of a feature differs, the corresponding feature weight increases by 1/m for the nearest miss, and decreases by 1/m for the nearest hit.
Figure 3:
Illustrations of RBA neighbor selection and/or instance weighting schemes. Methods with a red/yellow gradient adopt an instance weighting scheme, while other methods identify instances as ‘near’ or ‘far’, which then contribute fully to feature weight updates. These illustrations are conceptual and are not drawn to scale.
Figure 4:
Illustrations of the basic concepts behind key iterative and efficiency approaches, including TuRF, Iterative Relief/I-RELIEF, and VLSReliefF. Features are represented as squares, where darker shading indicates a lower feature weight/score.
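The weight update illustrated in Figure 2 can be written as a short program. The following is a minimal sketch of the original Relief algorithm, assuming discrete features compared by Hamming distance and a binary endpoint; the NumPy-based `relief` function and its signature are illustrative, not the authors' reference implementation.

```python
import numpy as np

def relief(X, y, m, rng=None):
    """Minimal sketch of the original Relief algorithm (illustrative).

    For each of m randomly sampled target instances, find its nearest
    hit (same class) and nearest miss (other class). For every feature
    whose value differs from the target's, add 1/m to that feature's
    weight for the nearest miss and subtract 1/m for the nearest hit.
    Assumes discrete features (Hamming distance) and a binary endpoint.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(m):
        i = rng.integers(n)
        # Hamming distance from the target instance to all instances.
        dist = (X != X[i]).sum(axis=1).astype(float)
        dist[i] = np.inf  # exclude the target instance itself
        same = y == y[i]
        hit = np.where(same, dist, np.inf).argmin()   # nearest hit
        miss = np.where(~same, dist, np.inf).argmin() # nearest miss
        # Differing values: +1/m for the miss, -1/m for the hit.
        w += ((X[i] != X[miss]).astype(float)
              - (X[i] != X[hit]).astype(float)) / m
    return w
```

On a toy dataset where one feature determines the class and another is irrelevant, the informative feature accumulates a positive weight and the irrelevant one a negative weight, matching the interpretation of Relief weights discussed in the abstract.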

