A two-stage feature selection algorithm based on redundancy and relevance

College

College of Computer Studies

Department/Unit

Software Technology

Document Type

Conference Proceeding

Source Title

Proceedings of the International Joint Conference on Neural Networks

Publication Date

10-10-2018

Abstract

Resulting from technological advancements, it is now possible to regularly collect large volumes of data and to use these data for different applications. However, this results in very large numbers of samples as well as features. Dealing with high-volume, high-dimensional data is a major challenge for machine learning algorithms, particularly in terms of memory requirements and model training time. Fortunately, many of the features in collected data are usually correlated, and some can even be completely irrelevant for specific classification or pattern recognition tasks. Given the nature of high-dimensional data, the large feature set can be reduced by removing redundant and irrelevant features. This paper proposes a two-stage feature selection algorithm based on feature redundancy and feature relevance. The proposed algorithm employs a hybrid model that combines filter and wrapper schemes to select the optimal feature subset. Five datasets from different domains are used to test the performance of the proposed algorithm with three well-known machine learning algorithms, namely k-Nearest Neighbor, Decision Trees, and Multilayer Perceptrons. Despite the reduced number of features, the classification performance of the selected feature subsets is on par with, or even significantly higher than, that of the original feature set. Compared with other state-of-the-art feature selection algorithms, the proposed method achieved higher classification accuracy with even fewer features. © 2018 IEEE.
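The two-stage hybrid described above (a filter stage that removes redundant and irrelevant features, followed by a wrapper stage that searches for the best subset) can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the correlation-based redundancy threshold, the greedy forward search, and the leave-one-out 1-NN evaluator are all illustrative assumptions.

```python
import numpy as np

def filter_stage(X, y, redundancy_thresh=0.9):
    """Stage 1 (filter, illustrative): rank features by absolute correlation
    with the label (relevance), then greedily keep a feature only if it is
    not highly correlated with any already-kept feature (redundancy)."""
    n_features = X.shape[1]
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    )
    order = np.argsort(-relevance)  # most relevant first
    kept = []
    for j in order:
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_thresh
            for k in kept
        )
        if not redundant:
            kept.append(int(j))
    return sorted(kept)

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier,
    used here as the wrapper's evaluation criterion."""
    n = len(y)
    correct = 0
    for i in range(n):
        d = np.sum((X - X[i]) ** 2, axis=1)
        d[i] = np.inf  # exclude the query sample itself
        correct += int(y[np.argmin(d)] == y[i])
    return correct / n

def wrapper_stage(X, y, candidates):
    """Stage 2 (wrapper, illustrative): greedy forward selection over the
    filtered candidates, adding a feature only if it improves accuracy."""
    selected, best = [], 0.0
    improved = True
    while improved:
        improved = False
        for j in [c for c in candidates if c not in selected]:
            acc = loo_1nn_accuracy(X[:, selected + [j]], y)
            if acc > best:
                best, selected = acc, selected + [j]
                improved = True
                break
    return selected, best
```

On a toy dataset with one informative feature, one near-duplicate of it, and one noise feature, the filter stage discards the duplicate as redundant, and the wrapper stage then picks the small subset that maximizes 1-NN accuracy.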

Digital Object Identifier (DOI)

10.1109/IJCNN.2018.8489072

Keywords

Learning classifier systems; Big data; Document clustering; Nearest neighbor analysis (Statistics)
