MobileCount: An efficient encoder-decoder framework for real-time crowd counting
| Published in: | Neurocomputing (Amsterdam), Vol. 407, pp. 292–299 |
|---|---|
| Main Authors: | Wang, Peng; Gao, Chenyu; Wang, Yang; Li, Hui; Gao, Ye |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 24.09.2020 |
| Subjects: | Crowd counting; Fully convolutional networks; Knowledge distillation; Light-weight Neural Networks |
| ISSN: | 0925-2312, 1872-8286 |
| Online Access: | https://dx.doi.org/10.1016/j.neucom.2020.05.056 |
| Abstract | In this work, we propose a computation-efficient encoder-decoder architecture, named MobileCount, which is specifically designed for high-accuracy, real-time crowd counting on mobile or embedded devices with limited computation resources. For the encoder, MobileNetV2 is tailored to significantly reduce FLOPs at a small cost in accuracy: the resulting encoder consists of 4 bottleneck blocks preceded by a max-pooling layer of stride 2. The decoder is motivated by Light-weight RefineNet and further boosts counting performance with only a 10% increase in FLOPs. Compared with state-of-the-art methods, the proposed network achieves comparable counting performance with 1/10 of the FLOPs on a number of benchmarks. Finally, we propose a multi-layer knowledge distillation method to further boost the performance of MobileCount without increasing its FLOPs. |
|---|---|
| Author | Wang, Peng (School of Computer Science, Northwestern Polytechnical University, Xi’an, China); Gao, Chenyu (School of Software, Northwestern Polytechnical University, Xi’an, China); Wang, Yang (School of Computer Science, Northwestern Polytechnical University, Xi’an, China); Li, Hui (University of Adelaide, Australia; hui.li02@adelaide.edu.au); Gao, Ye (School of Computer Science, Northwestern Polytechnical University, Xi’an, China) |
| ContentType | Journal Article |
| Copyright | 2020 Elsevier B.V. |
| DOI | 10.1016/j.neucom.2020.05.056 |
| Discipline | Computer Science |
| EISSN | 1872-8286 |
| EndPage | 299 |
| ISICitedReferencesCount | 61 |
| ISSN | 0925-2312 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Crowd counting; Fully convolutional networks; Knowledge distillation; Light-weight Neural Networks |
| Language | English |
| PageCount | 8 |
| PublicationDate | 2020-09-24 |
| PublicationTitle | Neurocomputing (Amsterdam) |
| PublicationYear | 2020 |
| Publisher | Elsevier B.V |
| StartPage | 292 |
| SubjectTerms | Crowd counting; Fully convolutional networks; Knowledge distillation; Light-weight Neural Networks |
| Title | MobileCount: An efficient encoder-decoder framework for real-time crowd counting |
| URI | https://dx.doi.org/10.1016/j.neucom.2020.05.056 |
| Volume | 407 |
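The abstract above describes three ingredients: a trimmed MobileNetV2 encoder, a Light-weight RefineNet-style decoder, and a multi-layer knowledge distillation step. As a rough illustration of the distillation idea only, not the authors' published implementation, the following PyTorch sketch combines a density-map regression loss with per-layer feature-matching terms; the choice of layers, the `alpha` weight, and the assumption that student and teacher features share shapes are all placeholders.

```python
import torch
import torch.nn.functional as F

def multi_layer_distillation_loss(student_feats, teacher_feats,
                                  student_density, teacher_density,
                                  gt_density, alpha=0.5):
    """Sketch of a multi-layer distillation objective for crowd counting.

    student_feats / teacher_feats: lists of intermediate feature maps taken
    from matching stages of the two networks (assumed to have equal shapes;
    a real setup may need 1x1 adapter convolutions).
    """
    # Standard counting loss against the ground-truth density map.
    task_loss = F.mse_loss(student_density, gt_density)

    # Feature-level distillation: pull student features toward the teacher's.
    distill_loss = sum(F.mse_loss(s, t.detach())
                       for s, t in zip(student_feats, teacher_feats))

    # Also distil the teacher's predicted density map.
    distill_loss = distill_loss + F.mse_loss(student_density, teacher_density.detach())

    return task_loss + alpha * distill_loss

if __name__ == "__main__":
    # Toy tensors standing in for real network outputs.
    s_feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32)]
    t_feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32)]
    s_dens, t_dens, gt = (torch.rand(1, 1, 64, 64) for _ in range(3))
    print(multi_layer_distillation_loss(s_feats, t_feats, s_dens, t_dens, gt))
```

Because the distillation terms only add loss components at training time, the deployed student keeps exactly the FLOPs of MobileCount, which is consistent with the abstract's claim that the method boosts accuracy without increasing FLOPs.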