Nonnegative graph embedding induced unsupervised feature selection

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 282, p. 127664
Main Authors: Mi, Yong; Chen, Hongmei; Yuan, Zhong; Luo, Chuan; Horng, Shi-Jinn; Li, Tianrui
Format: Journal Article
Language: English
Published: Elsevier Ltd, 05.07.2025
ISSN:0957-4174
Description
Summary: Recently, many unsupervised feature selection (UFS) methods have been developed owing to their effectiveness in selecting valuable features that improve and accelerate subsequent learning tasks. However, most existing UFS methods suffer from three drawbacks: (1) they usually ignore the nonnegativity of features when conducting feature selection, which inevitably loses partial information; (2) most adopt a separate strategy that first ranks all features and then selects the top k, which introduces an additional parameter and often yields suboptimal results; (3) most incur high computational cost. To tackle these shortcomings, we present a novel UFS method, Nonnegative Graph Embedding Induced Unsupervised Feature Selection, which accounts for the nonnegativity of features and selects an informative feature subset in a single step. Specifically, the raw data are projected into a low-dimensional subspace in which the learned low-dimensional representation remains nonnegative. Then, a novel scheme is designed to preserve the local geometric structure of the original data, and the ℓ2,0 norm is introduced to guide feature selection without separate ranking and selection steps. Finally, we design an efficient solution strategy with low computational complexity, and experiments on real-life datasets verify its efficiency and superiority over state-of-the-art UFS methods.
• A learning scheme is designed to preserve local structure and nonnegative attributes.
• The ℓ2,0 norm is adopted to avoid introducing additional parameters and selection steps.
• A high-efficiency strategy with low computational complexity is designed.
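The ℓ2,0-norm mechanism the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' optimization algorithm (which the abstract only outlines); it shows the standard projection onto the constraint ||W||_{2,0} ≤ k, which zeroes all but the k rows of the projection matrix W with largest ℓ2 norm. The indices of the surviving rows directly give the selected feature subset, so no separate ranking-then-selection stage is needed. The matrix W here is a hypothetical placeholder for a learned nonnegative projection.

```python
import numpy as np

def l20_projection(W, k):
    """Project W onto the l2,0 ball: keep only the k rows with the
    largest l2 norms and zero out the rest. Returns the row-sparse
    matrix and the sorted indices of the retained (selected) features."""
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(row_norms)[-k:]          # indices of k largest-norm rows
    W_sparse = np.zeros_like(W)
    W_sparse[keep] = W[keep]                   # nonnegativity of W is preserved
    return W_sparse, np.sort(keep)

# Toy illustration: d = 10 features projected to a c = 3 dimensional subspace.
rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(10, 3)))           # hypothetical nonnegative projection
W_sparse, selected = l20_projection(W, k=4)    # feature subset read off directly
```

In the full method such a projection step would be interleaved with updates that keep the low-dimensional representation nonnegative and preserve the local graph structure; the sketch only isolates how the ℓ2,0 constraint removes the extra "rank then pick k" stage.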
DOI:10.1016/j.eswa.2025.127664