Please use this identifier to cite or link to this item: http://dspace.univ-guelma.dz/jspui/handle/123456789/15004
Title: Selection and Elimination of Redundant Attributes for the Classification of Large Text Corpora (Sélection et élimination des attributs redondants pour la classification des gros corpus textuels)
Authors: Khaled Khodja, Anfel
Keywords: selection, feature, mutual information, correlation, redundancy, classification, text.
Issue Date: 2023
Publisher: University of Guelma
Abstract: Feature selection is a crucial pre-processing step in machine learning. Its aim is to reduce the feature space, speed up the learning process, and improve the performance of classification algorithms while avoiding overfitting. Statistical methods such as Information Gain (IG), the Chi-squared test (Chi2), and the Improved Gini Index (IGI) have proved effective at finding the most representative attributes in text corpora, with lower execution time than methods based on information theory. However, these methods can generate many redundant attributes, which can degrade the performance of classification algorithms. In this work, we aim to eliminate this redundancy by measuring the correlation between attributes that have similar or close IG scores. Correlation is assessed using the mutual information between attributes: attributes that are strongly related to the target variable (the class) and weakly correlated with the other attributes are considered the most informative.
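The approach described in the abstract can be illustrated with a minimal sketch: rank attributes by their mutual information with the class (a proxy for the IG score), then greedily keep only attributes whose mutual information with every already-selected attribute stays below a redundancy threshold. The function name, the toy term-occurrence matrix, and the threshold value below are illustrative assumptions, not the thesis's actual implementation or data.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def select_nonredundant(X, y, k, redundancy_threshold=0.5):
    """Illustrative sketch: rank features by MI with the class, then
    greedily skip any feature too strongly correlated (high MI) with a
    feature already selected."""
    n_features = X.shape[1]
    # Relevance of each feature: mutual information with the class label.
    relevance = [mutual_info_score(X[:, j], y) for j in range(n_features)]
    order = np.argsort(relevance)[::-1]  # most relevant first
    selected = []
    for j in order:
        # Keep j only if it is weakly correlated with all selected features.
        if all(mutual_info_score(X[:, j], X[:, s]) < redundancy_threshold
               for s in selected):
            selected.append(j)
        if len(selected) == k:
            break
    return selected

# Toy binary term-occurrence matrix: column 1 duplicates column 0,
# so at most one of the two should survive the redundancy filter.
X = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 0, 1],
              [0, 0, 0],
              [1, 1, 0],
              [0, 0, 1]])
y = np.array([1, 1, 0, 0, 1, 0])
print(select_nonredundant(X, y, k=2))
```

On this toy data the duplicated column is filtered out, so the result contains column 2 plus exactly one of columns 0 and 1.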
URI: http://dspace.univ-guelma.dz/jspui/handle/123456789/15004
Appears in Collections: Master

Files in This Item:
File: KHALED KHODJA_ANFAL_F5.pdf
Size: 8,32 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.