Efficient convolutional neural networks on Raspberry Pi for image classification

Detailed bibliography
Published in: Journal of real-time image processing, Volume 20, Issue 2, p. 21
Main authors: Ju, Rui-Yang; Lin, Ting-Yu; Jian, Jia-Hao; Chiang, Jen-Shiun
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.04.2023
ISSN: 1861-8200, 1861-8219
Description
Summary: With the strong performance of deep learning in computer vision (CV), convolutional neural network (CNN) architectures have become the main backbones of image recognition tasks. With the widespread use of mobile devices, neural network models designed for platforms with low computing power are gradually receiving attention. However, due to limited computing power, deep learning algorithms are usually not practical on mobile devices. This paper proposes TripleNet, a lightweight convolutional neural network that can run easily on a Raspberry Pi. Adopting the concept of block connections from ThreshNet, the proposed model compresses and accelerates the network, reduces the number of parameters, and shortens the inference time per image while maintaining accuracy. Image classification experiments with TripleNet and other state-of-the-art (SOTA) neural networks are performed on the CIFAR-10 and SVHN datasets on a Raspberry Pi. The experimental results show that, compared with GhostNet, MobileNet, ThreshNet, EfficientNet, and HarDNet, the per-image inference time of TripleNet is shortened by 15%, 16%, 17%, 24%, and 30%, respectively. The code for this work is available at https://github.com/RuiyangJu/TripleNet .
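The record does not document how the per-image inference times were measured beyond the datasets and hardware, but a CPU benchmark of the kind the summary describes might look like the following minimal sketch. Since TripleNet's Python API is not specified here, torchvision's mobilenet_v2 stands in as a placeholder model; the normalization constants and sample size are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a per-image inference-time benchmark on CIFAR-10,
# of the kind described in the summary. Runs on CPU, as a Raspberry Pi would.
# mobilenet_v2 is a stand-in; TripleNet itself is not packaged for import here.
import time

import torch
import torchvision
import torchvision.transforms as transforms

device = torch.device("cpu")  # Raspberry Pi has no CUDA GPU

transform = transforms.Compose([
    transforms.ToTensor(),
    # Commonly used CIFAR-10 channel statistics (assumed, not from the paper)
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2470, 0.2435, 0.2616)),
])
testset = torchvision.datasets.CIFAR10(root="./data", train=False,
                                       download=True, transform=transform)
loader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False)

# Placeholder lightweight CNN with 10 output classes for CIFAR-10
model = torchvision.models.mobilenet_v2(num_classes=10).to(device).eval()

total, n = 0.0, 0
with torch.no_grad():
    for images, _ in loader:
        start = time.perf_counter()
        model(images.to(device))
        total += time.perf_counter() - start
        n += 1
        if n == 1000:  # time a fixed sample of images
            break

print(f"mean inference time per image: {1000 * total / n:.2f} ms")
```

Timing with batch_size=1 reflects the per-image latency the summary reports, rather than throughput under batching, which would favor larger models.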
DOI: 10.1007/s11554-023-01271-1