Weighted automata extraction and explanation of recurrent neural networks for natural language tasks
| Published in: | Journal of logical and algebraic methods in programming, Volume 136, p. 100907 |
|---|---|
| Main authors: | Wei, Zeming; Zhang, Xiyue; Zhang, Yihao; Sun, Meng |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Elsevier Inc, 01.01.2024 |
| Subject: | Weighted finite automata; Recurrent neural networks; Abstraction; Natural languages; Explanation |
| ISSN: | 2352-2208 |
| Online access: | Get full text |
| Abstract | Recurrent Neural Networks (RNNs) have achieved tremendous success in processing sequential data, yet understanding and analyzing their behaviours remains a significant challenge. To this end, many efforts have been made to extract finite automata from RNNs, which are more amenable to analysis and explanation. However, existing approaches such as exact learning and compositional model extraction are limited in either scalability or precision. In this paper, we propose a novel framework for Weighted Finite Automata (WFA) extraction and explanation that tackles these limitations for natural language tasks. First, to address the transition sparsity and context loss problems we identified in WFA extraction for natural language tasks, we propose an empirical method to complement missing rules in the transition diagram and adjust transition matrices to enhance the context-awareness of the WFA. We also propose two data augmentation tactics to track more dynamic behaviours of the RNN, which further improves the extraction precision. Based on the extracted model, we propose an explanation method for RNNs, including a word embedding method, Transition Matrix Embeddings (TME), and a TME-based task-oriented explanation of the target RNN. Our evaluation demonstrates the advantage of our method over existing approaches in extraction precision, and the effectiveness of the TME-based explanation method in applications to pretraining and adversarial example generation. |
|---|---|
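For context on the model class named in the abstract: a weighted finite automaton assigns each input word a transition matrix and scores a sequence by multiplying those matrices between an initial and a final weight vector. A minimal sketch with made-up two-state matrices (purely illustrative; these are not matrices extracted from any RNN, and the vocabulary and weights are invented):

```python
import numpy as np

# Toy 2-state weighted finite automaton (WFA); all numbers are illustrative.
alpha = np.array([1.0, 0.0])              # initial state weights
eta = np.array([0.2, 0.9])                # final (output) weights
transitions = {                           # one transition matrix per word
    "good": np.array([[0.1, 0.9], [0.0, 1.0]]),
    "bad":  np.array([[0.9, 0.1], [0.8, 0.2]]),
}

def wfa_score(words):
    """Weight of a word sequence: alpha . A_w1 . ... . A_wn . eta."""
    state = alpha
    for w in words:
        state = state @ transitions[w]   # fails for out-of-vocabulary words
    return float(state @ eta)

print(wfa_score(["good"]))   # scores higher than ["bad"] in this toy model
print(wfa_score(["bad"]))
```

Under this formulation, the "transition rules" the abstract mentions correspond to entries of these per-word matrices, and transition sparsity arises when a word's matrix has rows with no observed transitions.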
| ArticleNumber | 100907 |
| Author | Wei, Zeming; Zhang, Xiyue; Zhang, Yihao; Sun, Meng |
| Author_xml | – sequence: 1 givenname: Zeming surname: Wei fullname: Wei, Zeming organization: School of Mathematical Sciences, Peking University, Beijing, China – sequence: 2 givenname: Xiyue surname: Zhang fullname: Zhang, Xiyue organization: Department of Computer Science, University of Oxford, Oxford, United Kingdom – sequence: 3 givenname: Yihao surname: Zhang fullname: Zhang, Yihao organization: School of Mathematical Sciences, Peking University, Beijing, China – sequence: 4 givenname: Meng orcidid: 0000-0001-6550-7396 surname: Sun fullname: Sun, Meng email: sunm@pku.edu.cn organization: School of Mathematical Sciences, Peking University, Beijing, China |
| ContentType | Journal Article |
| Copyright | 2023 Elsevier Inc. |
| Copyright_xml | – notice: 2023 Elsevier Inc. |
| DOI | 10.1016/j.jlamp.2023.100907 |
| Discipline | Computer Science |
| ISICitedReferencesCount | 4 |
| ISSN | 2352-2208 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Weighted finite automata; Recurrent neural networks; Abstraction; Natural languages; Explanation |
| Language | English |
| ORCID | 0000-0001-6550-7396 |
| PublicationCentury | 2000 |
| PublicationDate | January 2024 (2024-01-00) |
| PublicationDateYYYYMMDD | 2024-01-01 |
| PublicationDate_xml | – month: 01 year: 2024 text: January 2024 |
| PublicationDecade | 2020 |
| PublicationTitle | Journal of logical and algebraic methods in programming |
| PublicationYear | 2024 |
| Publisher | Elsevier Inc |
| Publisher_xml | – name: Elsevier Inc |
| StartPage | 100907 |
| SubjectTerms | Abstraction; Explanation; Natural languages; Recurrent neural networks; Weighted finite automata |
| Title | Weighted automata extraction and explanation of recurrent neural networks for natural language tasks |
| URI | https://dx.doi.org/10.1016/j.jlamp.2023.100907 |
| Volume | 136 |