Training Compact DNNs with ℓ1/2 Regularization
| Published in: | Pattern Recognition, Vol. 136 |
|---|---|
| Main Authors: | Tang, Anda; Niu, Lingfeng; Miao, Jianyu; Zhang, Peng |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 01.04.2023 |
| Subjects: | Deep neural networks; Model compression; Non-Lipschitz regularization; Sparse optimization; ℓ1/2 quasi-norm |
| ISSN: | 0031-3203, 1873-5142 |
| Online Access: | https://dx.doi.org/10.1016/j.patcog.2022.109206 |
| Abstract | Highlights:
• We propose a network compression model based on ℓ1/2 regularization. To the best of our knowledge, it is the first work to use a non-Lipschitz-continuous regularizer to compress DNNs.
• We rigorously prove the correspondence between ℓp (0 < p < 1) regularization and the hyper-Laplacian prior. Based on this prior, we suggest ℓ1/2 as a single regularizer that sparsifies the connections and the neurons of the network simultaneously.
• We give a closed-form, threshold-type solution to the proximal operator of ℓ1/2 and, building on it, design a stochastic proximal gradient algorithm to train the resulting model.
• We conduct experiments to validate the proposed method. The results demonstrate that it outperforms benchmark methods in terms of accuracy, computation, and memory cost.
Deep neural networks (DNNs) have achieved unprecedented success in many fields. However, their large numbers of parameters place a heavy burden on storage and computation, which hinders the development and application of DNNs; compressing a model to reduce its complexity is therefore worthwhile. Sparsity-inducing regularization is one of the most common tools for compression. In this paper, we propose using the ℓ1/2 quasi-norm to zero out the weights of neural networks and thereby compress the networks automatically during training. To our knowledge, this is the first work to apply a non-Lipschitz-continuous regularizer to the compression of DNNs. The resulting sparse optimization problem is solved with a stochastic proximal gradient algorithm, and for further computational convenience an approximation of the threshold-form solution to the ℓ1/2 proximal operator is also given (an illustrative sketch of this operator appears after the record fields below). Extensive experiments with various datasets and baselines demonstrate the advantages of the new method. |
|---|---|
| ArticleNumber | 109206 |
| Author | Zhang, Peng; Tang, Anda; Miao, Jianyu; Niu, Lingfeng |
| Author_xml | 1. Tang, Anda (tanganda17@mails.ucas.ac.cn), School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, 100190, China; 2. Niu, Lingfeng (niulf@ucas.ac.cn), Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences, Beijing, 100190, China; 3. Miao, Jianyu (jymiao@haut.edu.cn), School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, China; 4. Zhang, Peng (p.zhang@gzhu.edu.cn), Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, 511442, China |
| Copyright | 2022 Elsevier Ltd |
| DOI | 10.1016/j.patcog.2022.109206 |
| Discipline | Computer Science |
| EISSN | 1873-5142 |
| ExternalDocumentID | S0031320322006859 |
| ISSN | 0031-3203 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Sparse optimization; Deep neural networks; Model compression; Non-Lipschitz regularization; ℓ1/2 quasi-norm |
| Language | English |
| PublicationDate | April 2023 |
| PublicationTitle | Pattern Recognition |
| PublicationYear | 2023 |
| Publisher | Elsevier Ltd |
| SubjectTerms | ℓ1/2 quasi-norm; Deep neural networks; Model compression; Non-Lipschitz regularization; Sparse optimization |
| URI | https://dx.doi.org/10.1016/j.patcog.2022.109206 |
| Volume | 136 |
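The abstract rests on two concrete ingredients: a threshold-form solution to the proximal operator of the ℓ1/2 quasi-norm, and a stochastic proximal gradient loop that applies that operator after each gradient step. The sketch below is illustrative only and is not the authors' implementation: it computes the exact elementwise prox of λ|x|^(1/2) under the convention prox(y) = argmin_x ½(x − y)² + λ|x|^(1/2) (the paper itself works with an approximation, and other scalings of the data term shift the constants), and prox_sgd_step shows how such an operator slots into a proximal SGD update. The function names and the NumPy dependency are choices made for this sketch.

```python
import numpy as np

def prox_l_half(y, lam):
    """Elementwise prox of lam*|x|^(1/2): argmin_x 0.5*(x - y)**2 + lam*sqrt(|x|).

    Entries with |y| <= 1.5 * lam**(2/3) are set exactly to zero. Larger entries
    are shrunk by taking the largest root of t**3 - |y|*t + lam/2 = 0 (with
    x = t**2), solved in closed form via the trigonometric cubic formula.
    """
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    thresh = 1.5 * lam ** (2.0 / 3.0)          # hard-threshold level of the operator
    mask = np.abs(y) > thresh
    ay = np.abs(y[mask])
    phi = np.arccos(-(3.0 * np.sqrt(3.0) * lam) / (4.0 * ay ** 1.5))
    t = 2.0 * np.sqrt(ay / 3.0) * np.cos(phi / 3.0)   # largest cubic root, t = sqrt(|x|)
    out[mask] = np.sign(y[mask]) * t ** 2
    return out

def prox_sgd_step(weights, grad, lr, lam):
    """One stochastic proximal-gradient update for min_W L(W) + lam*||W||_{1/2}^{1/2}:
    a (mini-batch) gradient step on L, followed by the ℓ1/2 prox with parameter lr*lam."""
    return prox_l_half(weights - lr * grad, lr * lam)

if __name__ == "__main__":
    y = np.array([-3.0, -0.5, 0.0, 0.2, 2.0])
    # With lam = 1 the threshold is 1.5, so the three small entries are zeroed out.
    print(prox_l_half(y, lam=1.0))
```

Because the operator maps every entry whose magnitude falls below 1.5·λ^(2/3) exactly to zero, weights are pruned automatically as training proceeds, which is the mechanism the abstract describes; the paper's further claim that the same single regularizer also removes whole neurons is not reproduced by this toy example.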