Lightweight object detection algorithm for robots with improved YOLOv5

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 123, Article 106217
Main Authors: Liu, Gang; Hu, Yanxin; Chen, Zhiyu; Guo, Jianwei; Ni, Peng
Format: Journal Article
Language: English
Published: Elsevier Ltd, 1 August 2023
ISSN: 0952-1976; EISSN: 1873-6769
Abstract: Robot object detection is important for realising robot intelligence. Deep learning-based object detection algorithms are currently used for robotic object detection, but they face challenges in practical applications: robots frequently run on resource-constrained devices, so detection algorithms suffer from long computation times and unsatisfactory detection rates. To address these concerns, this paper proposes a lightweight object detection algorithm for robots based on an improved YOLOv5. To reduce the computation required for feature extraction and increase detection speed, the C3Ghost and GhostConv modules are introduced into the YOLOv5 backbone. The DWConv module is used in conjunction with the C3Ghost module in the YOLOv5 neck network to further reduce the number of model parameters while maintaining accuracy. The CA (Coordinate Attention) module is also introduced to improve the extraction of features from detected objects and suppress irrelevant features, thereby improving the algorithm's detection accuracy. To verify the performance of the method, we tested it on a self-built dataset (4561 robot images in total) and on the PascalVOC dataset. The results show that, compared with YOLOv5s on the self-built dataset, the algorithm achieves a 54% decrease in FLOPs and a 52.53% decrease in the number of model parameters with no decrease in mAP (0.5). The effectiveness and superiority of the algorithm are demonstrated through case studies and comparisons.
Highlights:
• Lightweight C3Ghost and GhostConv modules are introduced in the YOLOv5 backbone network to achieve model compression while maintaining detection accuracy and speed.
• C3Ghost and DWConv modules are introduced in the YOLOv5 neck network to further reduce model parameters and improve the speed of feature fusion.
• A CA (Coordinate Attention) module is introduced to enhance the extraction of relevant features and suppress irrelevant ones, improving the algorithm's detection accuracy.
• To demonstrate the algorithm's ability to solve real-world problems, we produced a dataset for the object detection task in the RoboMaster AI challenge hosted by DJI and experimentally verified that the proposed object detection algorithm is effective.
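The parameter savings behind the lightweight modules above can be illustrated with a back-of-the-envelope calculation. This sketch is not from the paper and the layer sizes are hypothetical; it only shows why depthwise-separable convolutions (the idea underlying DWConv, and exploited in a related way by GhostConv) need far fewer parameters than standard convolutions:

```python
# Rough parameter count for one convolutional layer.
# Standard conv: every output channel mixes all input channels with a k x k kernel.
# Depthwise-separable (DWConv-style): one k x k filter per input channel,
# then a 1x1 pointwise conv to mix channels.
def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # per-channel spatial filtering
    pointwise = c_in * c_out   # 1x1 channel mixing
    return depthwise + pointwise

c_in, c_out, k = 128, 256, 3   # hypothetical layer sizes
std = standard_conv_params(c_in, c_out, k)
dws = depthwise_separable_params(c_in, c_out, k)
print(std, dws, round(dws / std, 3))  # → 294912 33920 0.115
```

For this hypothetical layer the separable variant uses roughly 11% of the parameters of the standard convolution, which is the kind of reduction that makes the reported 52.53% overall parameter decrease plausible when such modules replace standard convolutions throughout the backbone and neck.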
Author details:
– Gang Liu (lg@ccut.edu.cn), School of Computer Science and Engineering, Changchun University of Technology, Changchun, 130102, China
– Yanxin Hu (2202103079@stu.ccut.edu.cn; ORCID: 0000-0002-0800-7560), School of Computer Science and Engineering, Changchun University of Technology, Changchun, 130102, China
– Zhiyu Chen (chenzhiyu@ccut.edu.cn; ORCID: 0000-0001-8654-2369), School of Computer Science and Engineering, Changchun University of Technology, Changchun, 130102, China
– Jianwei Guo (guojianwei@ccut.edu.cn; ORCID: 0000-0003-2658-8845), School of Computer Science and Engineering, Changchun University of Technology, Changchun, 130102, China
– Peng Ni (nipeng@ccut.edu.cn), School of Applied Technology, Changchun University of Technology, Changchun, 130102, China
Copyright: 2023 Elsevier Ltd
DOI: 10.1016/j.engappai.2023.106217
Discipline: Applied Sciences; Computer Science
Peer reviewed: Yes
Keywords: Deep learning; Attention mechanisms; GhostBottleneck; Robot object detection; YOLOv5
References Li, Du, Chen, Gong, Liu, Zhou, He (b22) 2021
Zhang, Cisse, Dauphin, Lopez-Paz (b50) 2017
Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., Hu, Q., 2020b. Supplementary material for ‘ECA-Net: Efficient channel attention for deep convolutional neural networks. In: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Seattle, WA, USA. pp. 13–19.
Chollet, F., 2017. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1251–1258.
Ren, He, Girshick, Sun (b37) 2015; 28
Krizhevsky, Sutskever, Hinton (b20) 2012; 25
Zhang, X., Zhou, X., Lin, M., Sun, J., 2018b. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6848–6856.
Redmon, J., Farhadi, A., 2017. YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7263–7271.
Liu, Anguelov, Erhan, Szegedy, Reed, Fu, Berg (b24) 2016
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.-C., 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4510–4520.
Liu, S., Qi, L., Qin, H., Shi, J., Jia, J., 2018. Path aggregation network for instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8759–8768.
Xu, Jia, Liu, Zhao, Sun (b48) 2020; 8
He, Zhang, Ren, Sun (b12) 2015; 37
Wang, C.-Y., Bochkovskiy, A., Liao, H.-Y.M., 2021. Scaled-yolov4: Scaling cross stage partial network. In: Proceedings of the IEEE/Cvf Conference on Computer Vision and Pattern Recognition. pp. 13029–13038.
Yue, Li, Shimizu, Kawamura, Meng (b49) 2022; 10
Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., et al., 2019. Searching for mobilenetv3. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1314–1324.
Kulshreshtha, Chandra, Randhawa, Tsaramirsis, Khadidos, Khadidos (b21) 2021; 10
Zhang, Li, Ren, Xu, Song, Liu (b52) 2019
Ramachandran, Zoph, Le (b33) 2017
Chen, Chen, Zhou (b4) 2021
.
Srivastava, Khari, Crespo, Chaudhary, Arora (b39) 2021
Rajagopal, Joshi, Ramachandran, Subhalakshmi, Khari, Jha, Shankar, You (b32) 2020; 8
Zhang, Guo, Wu, Tian, Tang, Guo (b51) 2022; 14
Redmon, Farhadi (b36) 2018
Woo, S., Park, J., Lee, J.-Y., Kweon, I.S., 2018. Cbam: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 3–19.
Adarsh, Rathi, Kumar (b1) 2020
Loshchilov, Hutter (b27) 2016
Wang, Cheng, Huang, Cai, Zhang, Yuan (b44) 2022; 199
Zhang, S., Wen, L., Bian, X., Lei, Z., Li, S.Z., 2018a. Single-shot refinement neural network for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4203–4212.
NVIDIA (b30) 2021
Zhang, Tan, Zhao, Liang, Liu, Zhong, Fan (b53) 2020
Cahyo, Utaminingrum (b3) 2022; 6
Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S., 2017. Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2117–2125.
Fu, Feng, Wu, Liu, Gao, Majeed, Al-Mallahi, Zhang, Li, Cui (b7) 2021; 22
Hou, Q., Zhou, D., Feng, J., 2021. Coordinate attention for efficient mobile network design. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13713–13722, doi:.
Bochkovskiy, Wang, Liao (b2) 2020
Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 779–788.
Pillai, Chaudhary, Khari, Crespo (b31) 2021
Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, Adam (b16) 2017
Girshick, R., 2015. Fast r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1440–1448.
Hu, J., Shen, L., Sun, G., 2018. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7132–7141.
Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., Xu, C., 2020. Ghostnet: More features from cheap operations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1580–1589.
Girshick, R., Donahue, J., Darrell, T., Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 580–587.
Ultralytics (b41) 2021
Tan, M., Pang, R., Le, Q.V., 2020. Efficientdet: Scalable and efficient object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10781–10790.
Wang, C.-Y., Liao, H.-Y.M., Wu, Y.-H., Chen, P.-Y., Hsieh, J.-W., Yeh, I.-H., 2020a. CSPNet: A new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. pp. 390–391.
Liu, Szirányi (b26) 2021; 21
Wang, Bochkovskiy, Liao (b43) 2022
Ge, Liu, Wang, Li, Sun (b8) 2021
Jiang, Ergu, Liu, Cai, Ma (b19) 2022; 199
Mohammadi Kazaj (b29) 2021
Ma, N., Zhang, X., Zheng, H.-T., Sun, J., 2018. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 116–131.
Jian, Mingrui, Xifeng (b18) 2020
Dai, Li, He, Sun (b6) 2016; 29
10.1016/j.engappai.2023.106217_b34
Liu (10.1016/j.engappai.2023.106217_b26) 2021; 21
10.1016/j.engappai.2023.106217_b35
Zhang (10.1016/j.engappai.2023.106217_b51) 2022; 14
Ren (10.1016/j.engappai.2023.106217_b37) 2015; 28
10.1016/j.engappai.2023.106217_b38
Howard (10.1016/j.engappai.2023.106217_b16) 2017
Chen (10.1016/j.engappai.2023.106217_b4) 2021
10.1016/j.engappai.2023.106217_b5
Bochkovskiy (10.1016/j.engappai.2023.106217_b2) 2020
Li (10.1016/j.engappai.2023.106217_b22) 2021
10.1016/j.engappai.2023.106217_b9
Krizhevsky (10.1016/j.engappai.2023.106217_b20) 2012; 25
10.1016/j.engappai.2023.106217_b23
Wang (10.1016/j.engappai.2023.106217_b43) 2022
10.1016/j.engappai.2023.106217_b25
10.1016/j.engappai.2023.106217_b28
Srivastava (10.1016/j.engappai.2023.106217_b39) 2021
Zhang (10.1016/j.engappai.2023.106217_b53) 2020
Wang (10.1016/j.engappai.2023.106217_b44) 2022; 199
Xu (10.1016/j.engappai.2023.106217_b48) 2020; 8
Loshchilov (10.1016/j.engappai.2023.106217_b27) 2016
Ge (10.1016/j.engappai.2023.106217_b8) 2021
10.1016/j.engappai.2023.106217_b11
10.1016/j.engappai.2023.106217_b55
10.1016/j.engappai.2023.106217_b14
10.1016/j.engappai.2023.106217_b13
NVIDIA (10.1016/j.engappai.2023.106217_b30) 2021
10.1016/j.engappai.2023.106217_b15
Zhang (10.1016/j.engappai.2023.106217_b50) 2017
Dai (10.1016/j.engappai.2023.106217_b6) 2016; 29
10.1016/j.engappai.2023.106217_b17
Ultralytics (10.1016/j.engappai.2023.106217_b41) 2021
Fu (10.1016/j.engappai.2023.106217_b7) 2021; 22
Adarsh (10.1016/j.engappai.2023.106217_b1) 2020
Rajagopal (10.1016/j.engappai.2023.106217_b32) 2020; 8
Yue (10.1016/j.engappai.2023.106217_b49) 2022; 10
Jian (10.1016/j.engappai.2023.106217_b18) 2020
Redmon (10.1016/j.engappai.2023.106217_b36) 2018
10.1016/j.engappai.2023.106217_b10
10.1016/j.engappai.2023.106217_b54
10.1016/j.engappai.2023.106217_b45
10.1016/j.engappai.2023.106217_b47
10.1016/j.engappai.2023.106217_b46
Kulshreshtha (10.1016/j.engappai.2023.106217_b21) 2021; 10
He (10.1016/j.engappai.2023.106217_b12) 2015; 37
Mohammadi Kazaj (10.1016/j.engappai.2023.106217_b29) 2021
Pillai (10.1016/j.engappai.2023.106217_b31) 2021
Zhang (10.1016/j.engappai.2023.106217_b52) 2019
Liu (10.1016/j.engappai.2023.106217_b24) 2016
Ramachandran (10.1016/j.engappai.2023.106217_b33) 2017
Jiang (10.1016/j.engappai.2023.106217_b19) 2022; 199
Cahyo (10.1016/j.engappai.2023.106217_b3) 2022; 6
10.1016/j.engappai.2023.106217_b40
10.1016/j.engappai.2023.106217_b42
References_xml – year: 2020
  ident: b2
  article-title: Yolov4: Optimal speed and accuracy of object detection
– reference: Wang, C.-Y., Bochkovskiy, A., Liao, H.-Y.M., 2021. Scaled-yolov4: Scaling cross stage partial network. In: Proceedings of the IEEE/Cvf Conference on Computer Vision and Pattern Recognition. pp. 13029–13038.
– year: 2017
  ident: b33
  article-title: Searching for activation functions
– reference: Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.-C., 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4510–4520.
– reference: Hou, Q., Zhou, D., Feng, J., 2021. Coordinate attention for efficient mobile network design. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13713–13722, doi:.
– year: 2020
  ident: b53
  article-title: A fast detection and grasping method for mobile manipulator based on improved faster R-CNN
  publication-title: Ind. Robot: Int. J. Robot. Res. Appl.
– reference: Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., et al., 2019. Searching for mobilenetv3. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1314–1324.
– year: 2021
  ident: b29
  article-title: yolov5-gradcam
– volume: 10
  year: 2021
  ident: b21
  article-title: OATCR: Outdoor autonomous trash-collecting robot design using YOLOv4-tiny
  publication-title: Electronics
– year: 2021
  ident: b39
  article-title: Concepts and Real-Time Applications of Deep Learning
– volume: 8
  start-page: 55289
  year: 2020
  end-page: 55299
  ident: b48
  article-title: Fast method of detecting tomatoes in a complex scene for picking robots
  publication-title: IEEE Access
– reference: Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S., 2017. Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2117–2125.
– year: 2021
  ident: b41
  article-title: YOLOv5
– volume: 37
  start-page: 1904
  year: 2015
  end-page: 1916
  ident: b12
  article-title: Spatial pyramid pooling in deep convolutional networks for visual recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– reference: Girshick, R., 2015. Fast r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1440–1448.
– start-page: 21
  year: 2016
  end-page: 37
  ident: b24
  article-title: Ssd: Single shot multibox detector
  publication-title: European Conference on Computer Vision
– reference: Ma, N., Zhang, X., Zheng, H.-T., Sun, J., 2018. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 116–131.
– year: 2016
  ident: b27
  article-title: Sgdr: Stochastic gradient descent with warm restarts
– start-page: 1
  year: 2021
  end-page: 12
  ident: b31
  article-title: Real-time image enhancement for an automatic automobile accident detection through CCTV using deep learning
  publication-title: Soft Comput.
– volume: 22
  start-page: 754
  year: 2021
  end-page: 776
  ident: b7
  article-title: Fast and accurate detection of kiwifruit in orchard using improved YOLOv3-tiny model
  publication-title: Precis. Agric.
– start-page: 487
  year: 2020
  end-page: 492
  ident: b18
  article-title: A fruit detection algorithm based on r-fcn in natural scene
  publication-title: 2020 Chinese Control and Decision Conference (CCDC)
– volume: 25
  year: 2012
  ident: b20
  article-title: Imagenet classification with deep convolutional neural networks
  publication-title: Adv. Neural Inf. Process. Syst.
– year: 2017
  ident: b16
  article-title: Mobilenets: Efficient convolutional neural networks for mobile vision applications
– year: 2021
  ident: b8
  article-title: Yolox: Exceeding yolo series in 2021
– reference: Chollet, F., 2017. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1251–1258.
– reference: Woo, S., Park, J., Lee, J.-Y., Kweon, I.S., 2018. Cbam: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 3–19.
– year: 2022
  ident: b43
  article-title: YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
– volume: 10
  start-page: 294
  year: 2022
  ident: b49
  article-title: YOLO-GD: a deep learning-based object detection algorithm for empty-dish recycling robots
  publication-title: Machines
– start-page: 11
  year: 2021
  end-page: 22
  ident: b4
  article-title: Object detection of basketball robot based on MobileNet-SSD
  publication-title: Intelligent Equipment, Robots, and Vehicles
– volume: 29
  year: 2016
  ident: b6
  article-title: R-fcn: Object detection via region-based fully convolutional networks
  publication-title: Adv. Neural Inf. Process. Syst.
– volume: 199
  start-page: 1066
  year: 2022
  end-page: 1073
  ident: b19
  article-title: A Review of Yolo algorithm developments
  publication-title: Procedia Comput. Sci.
– start-page: 272
  year: 2021
  end-page: 278
  ident: b22
  article-title: Design of multifunctional seedbed planting robot based on MobileNetV2-SSD
  publication-title: 2021 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA)
– start-page: 118
  year: 2019
  end-page: 122
  ident: b52
  article-title: An expression recognition method on robots based on mobilenet V2-SSD
  publication-title: 2019 6th International Conference on Systems and Informatics (ICSAI)
– volume: 28
  year: 2015
  ident: b37
  article-title: Faster r-cnn: Towards real-time object detection with region proposal networks
  publication-title: Adv. Neural Inf. Process. Syst.
– year: 2021
  ident: b30
  article-title: NVIDIA tensorrt
– volume: 199
  year: 2022
  ident: b44
  article-title: A deep learning approach incorporating YOLO v5 and attention mechanisms for field real-time detection of the invasive weed solanum rostratum dunal seedlings
  publication-title: Comput. Electron. Agric.
– reference: Wang, C.-Y., Liao, H.-Y.M., Wu, Y.-H., Chen, P.-Y., Hsieh, J.-W., Yeh, I.-H., 2020a. CSPNet: A new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. pp. 390–391.
– reference: Zhang, X., Zhou, X., Lin, M., Sun, J., 2018b. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6848–6856.
– reference: Tan, M., Pang, R., Le, Q.V., 2020. Efficientdet: Scalable and efficient object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10781–10790.
– year: 2017
  ident: b50
  article-title: mixup: Beyond empirical risk minimization
– reference: Liu, S., Qi, L., Qin, H., Shi, J., Jia, J., 2018. Path aggregation network for instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8759–8768.
– reference: Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., Xu, C., 2020. Ghostnet: More features from cheap operations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1580–1589.
– reference: Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 779–788.
– volume: 8
  start-page: 135383
  year: 2020
  end-page: 135393
  ident: b32
  article-title: A deep learning model based on multi-objective particle swarm optimization for scene classification in unmanned aerial vehicles
  publication-title: IEEE Access
– reference: Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., Hu, Q., 2020b. Supplementary material for ‘ECA-Net: Efficient channel attention for deep convolutional neural networks. In: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Seattle, WA, USA. pp. 13–19.
– volume: 6
  start-page: 117
  year: 2022
  end-page: 123
  ident: b3
  article-title: Autonomous robot system based on room nameplate recognition using YOLOv4 method on jetson nano 2GB
  publication-title: JOIV: Int. J. Inform. Vis.
– reference: .
– volume: 14
  start-page: 12274
  year: 2022
  ident: b51
  article-title: Real-time vehicle detection based on improved YOLO v5
  publication-title: Sustainability
– reference: He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778.
– reference: Redmon, J., Farhadi, A., 2017. YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7263–7271.
– reference: Girshick, R., Donahue, J., Darrell, T., Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 580–587.
– reference: Hu, J., Shen, L., Sun, G., 2018. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7132–7141.
– start-page: 687
  year: 2020
  end-page: 694
  ident: b1
  article-title: YOLO v3-Tiny: Object Detection and Recognition using one stage improved model
  publication-title: 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS)
– reference: Zhang, S., Wen, L., Bian, X., Lei, Z., Li, S.Z., 2018a. Single-shot refinement neural network for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4203–4212.
– year: 2018
  ident: b36
  article-title: YOLOv3: An incremental improvement
– volume: 29
  year: 2016
  ident: 10.1016/j.engappai.2023.106217_b6
  article-title: R-FCN: Object detection via region-based fully convolutional networks
  publication-title: Adv. Neural Inf. Process. Syst.
– year: 2017
  ident: 10.1016/j.engappai.2023.106217_b16
– year: 2016
  ident: 10.1016/j.engappai.2023.106217_b27
– year: 2021
  ident: 10.1016/j.engappai.2023.106217_b30
– start-page: 11
  year: 2021
  ident: 10.1016/j.engappai.2023.106217_b4
  article-title: Object detection of basketball robot based on MobileNet-SSD
– ident: 10.1016/j.engappai.2023.106217_b54
  doi: 10.1109/CVPR.2018.00442
– volume: 8
  start-page: 135383
  year: 2020
  ident: 10.1016/j.engappai.2023.106217_b32
  article-title: A deep learning model based on multi-objective particle swarm optimization for scene classification in unmanned aerial vehicles
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2020.3011502
– start-page: 1
  year: 2021
  ident: 10.1016/j.engappai.2023.106217_b31
  article-title: Real-time image enhancement for an automatic automobile accident detection through CCTV using deep learning
  publication-title: Soft Comput.
– ident: 10.1016/j.engappai.2023.106217_b14
  doi: 10.1109/CVPR46437.2021.01350
– start-page: 118
  year: 2019
  ident: 10.1016/j.engappai.2023.106217_b52
  article-title: An expression recognition method on robots based on MobileNet V2-SSD
– ident: 10.1016/j.engappai.2023.106217_b5
  doi: 10.1109/CVPR.2017.195
– start-page: 272
  year: 2021
  ident: 10.1016/j.engappai.2023.106217_b22
  article-title: Design of multifunctional seedbed planting robot based on MobileNetV2-SSD
– ident: 10.1016/j.engappai.2023.106217_b47
  doi: 10.1007/978-3-030-01234-2_1
– year: 2017
  ident: 10.1016/j.engappai.2023.106217_b33
– volume: 6
  start-page: 117
  issue: 1
  year: 2022
  ident: 10.1016/j.engappai.2023.106217_b3
  article-title: Autonomous robot system based on room nameplate recognition using YOLOv4 method on Jetson Nano 2GB
  publication-title: JOIV: Int. J. Inform. Vis.
  doi: 10.30630/joiv.6.1.785
– ident: 10.1016/j.engappai.2023.106217_b25
  doi: 10.1109/CVPR.2018.00913
– ident: 10.1016/j.engappai.2023.106217_b40
  doi: 10.1109/CVPR42600.2020.01079
– ident: 10.1016/j.engappai.2023.106217_b45
  doi: 10.1109/CVPRW50498.2020.00203
– start-page: 487
  year: 2020
  ident: 10.1016/j.engappai.2023.106217_b18
  article-title: A fruit detection algorithm based on R-FCN in natural scene
– ident: 10.1016/j.engappai.2023.106217_b34
  doi: 10.1109/CVPR.2016.91
– ident: 10.1016/j.engappai.2023.106217_b23
  doi: 10.1109/CVPR.2017.106
– year: 2021
  ident: 10.1016/j.engappai.2023.106217_b8
– volume: 8
  start-page: 55289
  year: 2020
  ident: 10.1016/j.engappai.2023.106217_b48
  article-title: Fast method of detecting tomatoes in a complex scene for picking robots
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2020.2981823
– volume: 37
  start-page: 1904
  issue: 9
  year: 2015
  ident: 10.1016/j.engappai.2023.106217_b12
  article-title: Spatial pyramid pooling in deep convolutional networks for visual recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2015.2389824
– year: 2017
  ident: 10.1016/j.engappai.2023.106217_b50
– ident: 10.1016/j.engappai.2023.106217_b42
  doi: 10.1109/CVPR46437.2021.01283
– ident: 10.1016/j.engappai.2023.106217_b11
  doi: 10.1109/CVPR42600.2020.00165
– volume: 10
  issue: 18
  year: 2021
  ident: 10.1016/j.engappai.2023.106217_b21
  article-title: OATCR: Outdoor autonomous trash-collecting robot design using YOLOv4-tiny
  publication-title: Electronics
  doi: 10.3390/electronics10182292
– volume: 10
  start-page: 294
  issue: 5
  year: 2022
  ident: 10.1016/j.engappai.2023.106217_b49
  article-title: YOLO-GD: a deep learning-based object detection algorithm for empty-dish recycling robots
  publication-title: Machines
  doi: 10.3390/machines10050294
– ident: 10.1016/j.engappai.2023.106217_b13
  doi: 10.1109/CVPR.2016.90
– year: 2020
  ident: 10.1016/j.engappai.2023.106217_b53
  article-title: A fast detection and grasping method for mobile manipulator based on improved Faster R-CNN
  publication-title: Ind. Robot: Int. J. Robot. Res. Appl.
  doi: 10.1108/IR-07-2019-0150
– volume: 21
  start-page: 2180
  issue: 6
  year: 2021
  ident: 10.1016/j.engappai.2023.106217_b26
  article-title: Real-time human detection and gesture recognition for on-board UAV rescue
  publication-title: Sensors
  doi: 10.3390/s21062180
– year: 2021
  ident: 10.1016/j.engappai.2023.106217_b29
– year: 2022
  ident: 10.1016/j.engappai.2023.106217_b43
– volume: 28
  year: 2015
  ident: 10.1016/j.engappai.2023.106217_b37
  article-title: Faster R-CNN: Towards real-time object detection with region proposal networks
  publication-title: Adv. Neural Inf. Process. Syst.
– ident: 10.1016/j.engappai.2023.106217_b9
  doi: 10.1109/ICCV.2015.169
– volume: 199
  start-page: 1066
  year: 2022
  ident: 10.1016/j.engappai.2023.106217_b19
  article-title: A review of YOLO algorithm developments
  publication-title: Procedia Comput. Sci.
  doi: 10.1016/j.procs.2022.01.135
– ident: 10.1016/j.engappai.2023.106217_b46
  doi: 10.1109/CVPR42600.2020.01155
– ident: 10.1016/j.engappai.2023.106217_b17
  doi: 10.1109/CVPR.2018.00745
– volume: 14
  start-page: 12274
  issue: 19
  year: 2022
  ident: 10.1016/j.engappai.2023.106217_b51
  article-title: Real-time vehicle detection based on improved YOLO v5
  publication-title: Sustainability
  doi: 10.3390/su141912274
– ident: 10.1016/j.engappai.2023.106217_b38
  doi: 10.1109/CVPR.2018.00474
– ident: 10.1016/j.engappai.2023.106217_b15
  doi: 10.1109/ICCV.2019.00140
– volume: 25
  year: 2012
  ident: 10.1016/j.engappai.2023.106217_b20
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Adv. Neural Inf. Process. Syst.
– start-page: 21
  year: 2016
  ident: 10.1016/j.engappai.2023.106217_b24
  article-title: SSD: Single shot multibox detector
– ident: 10.1016/j.engappai.2023.106217_b55
  doi: 10.1109/CVPR.2018.00716
– year: 2021
  ident: 10.1016/j.engappai.2023.106217_b39
– ident: 10.1016/j.engappai.2023.106217_b35
  doi: 10.1109/CVPR.2017.690
– volume: 199
  year: 2022
  ident: 10.1016/j.engappai.2023.106217_b44
  article-title: A deep learning approach incorporating YOLO v5 and attention mechanisms for field real-time detection of the invasive weed Solanum rostratum Dunal seedlings
  publication-title: Comput. Electron. Agric.
  doi: 10.1016/j.compag.2022.107194
– year: 2020
  ident: 10.1016/j.engappai.2023.106217_b2
– volume: 22
  start-page: 754
  issue: 3
  year: 2021
  ident: 10.1016/j.engappai.2023.106217_b7
  article-title: Fast and accurate detection of kiwifruit in orchard using improved YOLOv3-tiny model
  publication-title: Precis. Agric.
  doi: 10.1007/s11119-020-09754-y
– ident: 10.1016/j.engappai.2023.106217_b10
  doi: 10.1109/CVPR.2014.81
– year: 2021
  ident: 10.1016/j.engappai.2023.106217_b41
– ident: 10.1016/j.engappai.2023.106217_b28
  doi: 10.1007/978-3-030-01264-9_8
StartPage 106217
SubjectTerms Attention mechanisms
Deep learning
GhostBottleneck
Robot object detection
YOLOv5
Title Lightweight object detection algorithm for robots with improved YOLOv5
URI https://dx.doi.org/10.1016/j.engappai.2023.106217
Volume 123