Image classification with parallel KPCA‐PCA network

Bibliographic Details
Published in: Computational Intelligence Vol. 38; No. 2; pp. 397-415
Main authors: Yang, Feng; Ma, Zheng; Xie, Mei
Format: Journal Article
Language: English
Published: Hoboken: Blackwell Publishing Ltd, 01.04.2022
ISSN: 0824-7935, 1467-8640
Online access: Full text
Description
Abstract: Principal component analysis (PCA) is widely used in computer vision for object detection. In this article, we take advantage of the PCA and kernel principal component analysis (KPCA) algorithms to construct a deep learning model named the parallel KPCA‐PCA network (PK‐PCANet). In the proposed model, both the PCA and the KPCA algorithms are used to compute the filters applied in the subsequent convolution layers. The features extracted by PCANet and KPCANet are fused through a parallel feature-fusion strategy. To reduce the dimensionality of the learned features, a compressed-sensing algorithm is incorporated into the proposed network. By combining the advantages of the deep network and compressed sensing, the proposed PK‐PCANet model achieves improvements on several visual recognition tasks. Extensive experiments are performed on face recognition, handwritten-digit recognition, and object classification, and the results on image classification benchmarks such as Extended Yale B, AR, MNIST, CIFAR‐10, and VOC 2007 validate the effectiveness of the proposed PK‐PCANet.
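The abstract gives only a high-level description, but the PCANet family it builds on is known to learn its convolution filters as the leading principal components of mean-removed image patches. The following minimal Python sketch illustrates that filter-learning step together with a compressed-sensing-style random projection for feature reduction; the patch size, the Gaussian measurement matrix, and all function names here are illustrative assumptions, not the authors' implementation.

import numpy as np

def learn_pca_filters(images, patch_size=7, num_filters=8):
    # Collect mean-removed patches from every training image
    # (the usual PCANet convention; exact sampling is assumed).
    k = patch_size
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel().astype(np.float64)
                patches.append(p - p.mean())
    X = np.stack(patches, axis=1)                  # shape (k*k, num_patches)
    # The leading eigenvectors of the patch covariance become the filters.
    eigvals, eigvecs = np.linalg.eigh(X @ X.T / X.shape[1])
    order = np.argsort(eigvals)[::-1][:num_filters]
    return eigvecs[:, order].T.reshape(num_filters, k, k)

def random_projection(features, out_dim, seed=0):
    # Compressed-sensing-style reduction: project the feature vector
    # onto a random Gaussian measurement matrix (a generic stand-in
    # for whatever sensing matrix the paper actually uses).
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((out_dim, features.shape[0])) / np.sqrt(out_dim)
    return A @ features

# Toy usage on random "images".
imgs = [np.random.default_rng(i).random((32, 32)) for i in range(4)]
filters = learn_pca_filters(imgs, patch_size=5, num_filters=4)
print(filters.shape)                               # (4, 5, 5)

Replacing the linear eigendecomposition with kernel PCA on the same patch matrix would give the parallel KPCA branch whose features the model fuses with the PCA features.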
DOI: 10.1111/coin.12503