Nonnegative graph embedding induced unsupervised feature selection

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 282, p. 127664
Main Authors: Mi, Yong; Chen, Hongmei; Yuan, Zhong; Luo, Chuan; Horng, Shi-Jinn; Li, Tianrui
Format: Journal Article
Language: English
Published: Elsevier Ltd, 05.07.2025
ISSN: 0957-4174
Description
Summary: Recently, many unsupervised feature selection (UFS) methods have been developed because of their effectiveness in selecting valuable features that improve and accelerate subsequent learning tasks. However, most existing UFS methods suffer from three drawbacks: (1) they usually ignore the nonnegative attribute of features when conducting feature selection, which inevitably loses part of the information; (2) most adopt a separate strategy that first ranks all features and then selects the top k, which introduces an additional parameter and often yields suboptimal results; (3) most incur high computational cost. To tackle these shortcomings, we present a novel UFS method, Nonnegative Graph Embedding Induced Unsupervised Feature Selection, which considers nonnegative feature attributes and selects an informative feature subset in one step. Specifically, the raw data are projected into a low-dimensional subspace in which the learned low-dimensional representation retains the nonnegative attribute. Then, a novel scheme is designed to preserve the local geometric structure of the original data, and the ℓ2,0 norm is introduced to guide feature selection without separate ranking and selection steps. Finally, we design a highly efficient solution strategy with low computational complexity, and experiments on real-life datasets verify its efficiency and superiority over state-of-the-art UFS methods.
Highlights:
• A learning scheme is designed to preserve local structure and nonnegative attributes.
• The ℓ2,0 norm is adopted to avoid introducing additional parameters and selection steps.
• A highly efficient strategy with low computational complexity is designed.
DOI: 10.1016/j.eswa.2025.127664
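The following is a minimal, hypothetical Python sketch of the ideas summarized in the abstract above, not the authors' actual formulation. It combines three ingredients the abstract mentions: a k-NN graph Laplacian standing in for local-structure preservation, a nonnegativity constraint on the projection, and an ℓ2,0-style projection that keeps only k nonzero rows, so the retained rows directly index the selected features. The function names, toy data, step size, and the simple projected-gradient loop are all assumptions introduced here for illustration.

```python
import numpy as np

def l20_projection(W, k):
    """Keep the k rows of W with the largest l2 norms and zero the rest,
    i.e. project onto {W : ||W||_{2,0} <= k}; nonzero rows index selected features."""
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.sort(np.argsort(row_norms)[-k:])
    W_proj = np.zeros_like(W)
    W_proj[keep] = W[keep]
    return W_proj, keep

def knn_graph_laplacian(X, n_neighbors=5):
    """Build a symmetric k-NN similarity graph and its unnormalized Laplacian L = D - S,
    used here only to illustrate preserving the local geometric structure of the data."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    S = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(dist[i])[1:n_neighbors + 1]          # skip the point itself
        S[i, nn] = np.exp(-dist[i, nn] ** 2)
    S = np.maximum(S, S.T)                                    # symmetrize
    return np.diag(S.sum(axis=1)) - S

# Toy usage: alternate a projected-gradient step (nonnegativity) with the l2,0 projection.
# The actual method couples further terms (e.g. reconstruction); this loop is only a sketch.
rng = np.random.default_rng(0)
X = rng.random((50, 20))                                      # 50 samples, 20 features
L = knn_graph_laplacian(X)
d, r, k = X.shape[1], 3, 5                                    # features, embedding dim, selected count
W = rng.random((d, r))                                        # low-dimensional representation is XW
for _ in range(30):
    grad = X.T @ L @ X @ W                                    # gradient of trace(W^T X^T L X W)
    W = np.maximum(W - 1e-3 * grad, 0.0)                      # keep the projection nonnegative
    W, selected = l20_projection(W, k)
    W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-12     # avoid collapse to the zero matrix
print("selected feature indices:", selected)
```

The ℓ2,0 projection here illustrates the point the abstract emphasizes: the number of retained features is fixed by the constraint itself, so no extra ranking parameter or post-hoc selection step is needed.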