SlimDL: Deploying ultra-light deep learning model on sweeping robots

Detailed bibliography
Published in: Engineering Applications of Artificial Intelligence, Volume 149, Article 110415
Main authors: Sun, Xudong; Wang, Yu; Liu, Zhanglin; Gao, Shaoxuan; He, Wenbo; Tong, Chao
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2025
Topics: Model adaptation; Model compression; Object detection
ISSN: 0952-1976
Online access: Get full text
Abstract Advanced object detection methods have yielded impressive progress in recent years. However, the computational constraints of edge mobile devices present significant deployment challenges for state-of-the-art algorithms. We propose a deep learning deployment framework with two stages: model adaptation and compression. Our method enhances “You Only Look Once version 5” (YOLOv5) with lightweight modules, which improves detection performance while reducing computational load. Additionally, we present a pruning algorithm employing adaptive batch normalization and iterative pruning. Our evaluation on the “Microsoft Common Objects in Context” (MSCOCO) dataset and a custom SweepRobot dataset demonstrates that our method consistently outperforms state-of-the-art approaches. On the SweepRobot dataset, our method doubled YOLOv5’s detection speed on the sweeping robot from 15.69 frames per second (FPS) to 30.77 FPS, maintaining 97.3% of the original performance at 20% of the computational cost. Even on Graphics Processing Unit (GPU)-equipped devices, our method achieved 1.8% and 2.8% higher Average Precision compared to direct scaling and to pruning with the original pruning algorithm, respectively.
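Note on the pruning stage described above: the following is a minimal sketch, assuming a PyTorch-style detector and a standard (images, labels) data loader, of the general idea of iterative channel pruning combined with adaptive batch-normalization recalibration. All function and parameter names (iterative_prune, recalibrate_bn, ratio_per_step, etc.) are illustrative assumptions and do not come from the paper.

import torch
import torch.nn as nn


def recalibrate_bn(model: nn.Module, loader, num_batches: int = 50) -> None:
    # Re-estimate BatchNorm running statistics on a small calibration set
    # (the "adaptive batch normalization" idea mentioned in the abstract).
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
    model.train()  # BN layers update running stats only in train mode
    with torch.no_grad():
        for i, (images, _) in enumerate(loader):
            if i >= num_batches:
                break
            model(images)
    model.eval()


def bn_channel_importance(bn: nn.BatchNorm2d) -> torch.Tensor:
    # Absolute BN scale factor as a simple per-channel importance score.
    return bn.weight.detach().abs()


def iterative_prune(model: nn.Module, loader, steps: int = 5,
                    ratio_per_step: float = 0.1) -> nn.Module:
    # Each step silences a cumulatively larger fraction of the least
    # important channels, then recalibrates BN statistics. A real pipeline
    # would physically remove the channels and fine-tune between steps.
    for step in range(1, steps + 1):
        target_ratio = ratio_per_step * step
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d) and m.affine:
                scores = bn_channel_importance(m)
                k = max(1, int(target_ratio * scores.numel()))
                prune_idx = torch.argsort(scores)[:k]
                with torch.no_grad():
                    m.weight[prune_idx] = 0.0  # zero scale silences the channel
                    m.bias[prune_idx] = 0.0
        recalibrate_bn(model, loader)
    return model

In this sketch, pruning is only simulated by zeroing BN scales; the speedups reported in the abstract come from actually removing pruned channels from the network.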
ArticleNumber 110415
Author He, Wenbo
Sun, Xudong
Tong, Chao
Wang, Yu
Liu, Zhanglin
Gao, Shaoxuan
Author_xml – sequence: 1
  givenname: Xudong
  surname: Sun
  fullname: Sun, Xudong
  organization: School of Computer Science and Engineering, Beihang University, Beijing, China
– sequence: 2
  givenname: Yu
  surname: Wang
  fullname: Wang, Yu
  organization: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
– sequence: 3
  givenname: Zhanglin
  surname: Liu
  fullname: Liu, Zhanglin
  organization: Qfeeltech, Beijing, China
– sequence: 4
  givenname: Shaoxuan
  surname: Gao
  fullname: Gao, Shaoxuan
  organization: Qfeeltech, Beijing, China
– sequence: 5
  givenname: Wenbo
  surname: He
  fullname: He, Wenbo
  organization: Department of Computing and Software, McMaster University, Canada
– sequence: 6
  givenname: Chao
  orcidid: 0000-0003-4414-4965
  surname: Tong
  fullname: Tong, Chao
  email: tongchao@buaa.edu.cn
  organization: School of Computer Science and Engineering, Beihang University, Beijing, China
ContentType Journal Article
Copyright 2025 Elsevier Ltd
DOI 10.1016/j.engappai.2025.110415
Discipline Applied Sciences
Computer Science
ExternalDocumentID 10_1016_j_engappai_2025_110415
S0952197625004154
ISSN 0952-1976
IsPeerReviewed true
IsScholarly true
Keywords Model compression
Model adaptation
Object detection
Language English
ORCID 0000-0003-4414-4965
PublicationDate 2025-06-01
PublicationTitle Engineering applications of artificial intelligence
PublicationYear 2025
Publisher Elsevier Ltd
StartPage 110415
SubjectTerms Model adaptation
Model compression
Object detection
Title SlimDL: Deploying ultra-light deep learning model on sweeping robots
URI https://dx.doi.org/10.1016/j.engappai.2025.110415
Volume 149