Abstract:
Feature selection is a crucial step in pre-processing data for machine learning. Its aim is to reduce the feature space, speed up the learning process, and improve the performance of classification algorithms while avoiding overfitting. Various statistical methods, such as Information Gain (IG), the Chi-squared test (Chi2), the Improved Gini Index (IGI), etc., have proved effective at finding the most representative attributes in text corpora, while requiring less execution time than methods based on information theory.
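For reference, a standard definition of Information Gain for a discrete attribute X and class C, which coincides with their mutual information, is:

IG(X) = H(C) - H(C \mid X) = \sum_{x}\sum_{c} p(x, c)\,\log \frac{p(x, c)}{p(x)\, p(c)}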
However, these methods can generate a large number of redundant attributes, which can adversely affect the performance of classification algorithms. In this work, we aim to eliminate this redundancy by measuring the correlation between attributes that have similar or close IG scores. This correlation can be assessed with the mutual information between attributes. Attributes that are strongly related to the target variable (the class) and weakly correlated with the other attributes are thus considered the most informative.
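As an illustrative sketch only, not the exact procedure of this work, the relevance-redundancy idea can be realized as a greedy filter: rank attributes by their mutual information with the class, then keep a candidate only if its mutual information with every already-kept attribute stays below a threshold. The function name, the threshold value, and the random data below are assumptions for demonstration; discrete (binary term-presence) features are assumed.

import numpy as np
from sklearn.metrics import mutual_info_score

def select_non_redundant(X, y, k, redundancy_threshold=0.5):
    # Relevance: mutual information between each discrete attribute and the class.
    relevance = [mutual_info_score(X[:, j], y) for j in range(X.shape[1])]
    order = np.argsort(relevance)[::-1]  # strongest relation to the class first
    selected = []
    for j in order:
        # Redundancy: keep attribute j only if its mutual information with
        # every already-selected attribute is below the threshold.
        if all(mutual_info_score(X[:, j], X[:, s]) < redundancy_threshold
               for s in selected):
            selected.append(j)
        if len(selected) == k:
            break
    return selected

# Hypothetical usage on a binary term-presence matrix (200 documents, 30 terms).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 30))
y = rng.integers(0, 2, size=200)
print(select_non_redundant(X, y, k=10))

The threshold-based redundancy test is one simple design choice; alternatives such as mRMR-style score differences follow the same relevance-minus-redundancy principle.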