Text feature extraction based on stacked variational autoencoder
This paper presents a text feature extraction model based on stacked variational autoencoder (SVAE). A noise reduction mechanism is designed for the variational autoencoder in the input layer of text feature extraction to reduce noise interference and improve the robustness and feature discrimination of the model. Three kinds of deep SVAE network architectures are constructed to improve the ability of representation learning to mine feature intension in depth. Experiments are carried out in several aspects, including comparative analysis of the text feature extraction model, sparse performance, parameter selection and stacking. Results show that the SVAE text feature extraction model has good performance. The highest accuracy of the SVAE models on the Fudan and Reuters datasets is 13.50% and 8.96% higher than that of PCA, respectively.
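The abstract rests on three standard VAE ingredients: corrupting the input layer for noise reduction, the reparameterization trick, and the KL regularizer that pulls the diagonal-Gaussian posterior toward a standard-normal prior. The sketch below illustrates these ingredients in plain Python; it is not the paper's implementation, and all function names are ours.

```python
import math
import random

def corrupt(x, drop_prob=0.3, rng=None):
    """Masking noise: zero each input feature with probability drop_prob.

    Training the encoder on corrupted inputs while reconstructing the clean
    ones is the usual denoising mechanism behind an input-layer noise design.
    """
    rng = rng or random
    return [0.0 if rng.random() < drop_prob else v for v in x]

def reparameterize(mu, log_var, rng=None):
    """Sample z = mu + sigma * eps, eps ~ N(0, 1), so z stays differentiable in (mu, log_var)."""
    rng = rng or random
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0) for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv for m, lv in zip(mu, log_var))

rng = random.Random(0)
x = [1.0, 2.0, 3.0, 4.0]
x_noisy = corrupt(x, drop_prob=0.3, rng=rng)   # corrupted input fed to the encoder
mu, log_var = [0.1, -0.2], [0.0, 0.0]          # illustrative encoder outputs
z = reparameterize(mu, log_var, rng=rng)       # latent code passed to the decoder
print(kl_divergence([0.0, 0.0], [0.0, 0.0]))   # exactly 0.0 for a standard-normal posterior
```

Stacking, in this framing, means feeding the latent code `z` (or the trained encoder's output) as the input of the next VAE in the stack; the ELBO loss adds this KL term to a reconstruction term against the clean input `x`, not the corrupted one.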
Saved in:
| Published in: | Microprocessors and microsystems, Vol. 76; p. 103063 |
|---|---|
| Main authors: | Che, Lei; Yang, Xiaoping; Wang, Liang |
| Format: | Journal Article |
| Language: | English |
| Published: | Kidlington: Elsevier B.V., 01.07.2020 |
| Subjects: | |
| ISSN: | 0141-9331, 1872-9436 |
| Online access: | Get full text |
| Abstract | This paper presents a text feature extraction model based on stacked variational autoencoder (SVAE). A noise reduction mechanism is designed for variational autoencoder in input layer of text feature extraction to reduce noise interference and improve robustness and feature discrimination of the model. Three kinds of deep SVAE network architectures are constructed to improve ability of representing learning to mine feature intension in depth. Experiments are carried out in several aspects, including comparative analysis of text feature extraction model, sparse performance, parameter selection and stacking. Results show that text feature extraction model of SVAE has good performance and effect. The highest accuracy of SVAE models of Fudan and Reuters datasets is 13.50% and 8.96% higher than that of PCA, respectively. |
| ArticleNumber | 103063 |
| Author | Che, Lei; Yang, Xiaoping; Wang, Liang |
| Affiliations | Che, Lei: School of Information, Renmin University of China, Beijing 100872, China; Yang, Xiaoping: School of Information, Renmin University of China, Beijing 100872, China; Wang, Liang (wangliang@ruc.edu.cn): School of Information, Renmin University of China, Beijing 100872, China |
| ContentType | Journal Article |
| Copyright | 2020 Elsevier B.V. |
| DOI | 10.1016/j.micpro.2020.103063 |
| Discipline | Computer Science |
| EISSN | 1872-9436 |
| ISICitedReferencesCount | 14 |
| ISSN | 0141-9331 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Variational autoencoder; Deep stack; Text feature extraction; Noise reduction |
| Language | English |
| PublicationDate | July 2020 (2020-07-01) |
| PublicationPlace | Kidlington |
| PublicationTitle | Microprocessors and microsystems |
| PublicationYear | 2020 |
| Publisher | Elsevier B.V. |
| StartPage | 103063 |
| SubjectTerms | Computer architecture; Deep stack; Feature extraction; Model accuracy; Noise reduction; Text feature extraction; Variational autoencoder |
| Title | Text feature extraction based on stacked variational autoencoder |
| URI | https://dx.doi.org/10.1016/j.micpro.2020.103063 https://www.proquest.com/docview/2441887607 |
| Volume | 76 |