IIRNet: A lightweight deep neural network using intensely inverted residuals for image recognition
| Published in: | Image and vision computing Vol. 92; p. 103819 |
|---|---|
| Main Authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V, 01.12.2019 |
| ISSN: | 0262-8856, 1872-8138 |
| Summary: | Deep neural networks have achieved great success in many pattern recognition tasks. However, large model size and high computational cost limit their application in resource-limited systems. In this paper, we design a lightweight and efficient convolutional neural network architecture by directly training a compact network for image recognition. To achieve a good balance among classification accuracy, model size, and computational complexity, we propose a lightweight convolutional neural network architecture named IIRNet for resource-limited systems. The new architecture is built from Intensely Inverted Residual blocks (IIR blocks) to decrease the redundancy of the convolutional blocks. By utilizing two new operations, intensely inverted residuals and multi-scale low-redundancy convolutions, the IIR block greatly reduces model size and computational cost while matching the classification accuracy of state-of-the-art networks. Experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets demonstrate the superior trade-off of IIRNet among classification accuracy, computational complexity, and model size, compared to mainstream compact network architectures. |
|---|---|
| Highlights: | • A lightweight and efficient convolutional neural network architecture is constructed. • Intensely inverted residual and multi-scale low-redundancy convolutions reduce model size and complexity. • The proposed network achieves classification accuracy comparable to mainstream compact network architectures. • Balanced performance is obtained on three challenging datasets. |
| DOI: | 10.1016/j.imavis.2019.10.005 |
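The summary describes IIR blocks as a variant of inverted residuals. This record does not specify the paper's exact IIR block, so as background only, here is a minimal NumPy sketch of a standard inverted residual block (MobileNetV2-style: 1x1 expansion, 3x3 depthwise convolution, 1x1 linear projection, and a skip connection); all function and variable names are illustrative, not from the paper.

```python
import numpy as np

def pointwise_conv(x, w):
    """1x1 convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def depthwise_conv3x3(x, w):
    """Per-channel 3x3 convolution with zero padding: w is (C, 3, 3)."""
    c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j][:, None, None] * xp[:, i:i + h, j:j + wd]
    return out

def relu6(x):
    """Activation commonly used in mobile architectures: min(max(x, 0), 6)."""
    return np.minimum(np.maximum(x, 0), 6)

def inverted_residual(x, w_expand, w_dw, w_project):
    """Expand (1x1) -> depthwise 3x3 -> project (1x1), plus a skip connection.

    The projection is linear (no activation), the "linear bottleneck" of
    MobileNetV2; input and output channel counts must match for the skip.
    """
    h = relu6(pointwise_conv(x, w_expand))    # expand to a wider feature space
    h = relu6(depthwise_conv3x3(h, w_dw))     # cheap spatial filtering per channel
    h = pointwise_conv(h, w_project)          # project back, no activation
    return x + h                              # residual add

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8, 8))              # 4 channels, 8x8 feature map
    w_expand = 0.1 * rng.standard_normal((16, 4))   # expansion factor 4
    w_dw = 0.1 * rng.standard_normal((16, 3, 3))
    w_project = 0.1 * rng.standard_normal((4, 16))
    y = inverted_residual(x, w_expand, w_dw, w_project)
    print(y.shape)  # output shape matches input: (4, 8, 8)
```

The "intensely inverted" and multi-scale low-redundancy operations of IIRNet modify this baseline to further cut parameters and FLOPs; the sketch above only illustrates the inverted-residual structure the paper builds on.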