Adversarial autoencoder for continuous sign language recognition
| Published in: | Concurrency and Computation, Volume 36, Issue 22 |
|---|---|
| Main authors: | Kamal, Suhail Muhammad; Chen, Yidong; Li, Shaozi |
| Medium: | Journal Article |
| Language: | English |
| Published: | Hoboken: Wiley Subscription Services, Inc, 10 October 2024 |
| Topics: | adversarial autoencoder; Availability; continuous sign language recognition; Datasets; Deafness; Knowledge representation; Modules; Performance enhancement; Sign language; vision-language |
| ISSN: | 1532-0626, 1532-0634 |
| Abstract | Sign language serves as a vital communication medium for the deaf community, encompassing a diverse array of signs conveyed through distinct hand shapes along with non‐manual gestures like facial expressions and body movements. Accurate recognition of sign language is crucial for bridging the communication gap between deaf and hearing individuals, yet the scarcity of large‐scale datasets poses a significant challenge in developing robust recognition technologies. Existing works address this challenge by employing various strategies, such as enhancing visual modules, incorporating pretrained visual models, and leveraging multiple modalities to improve performance and mitigate overfitting. However, the exploration of the contextual module, responsible for modeling long‐term dependencies, remains limited. This work introduces an Adversarial Autoencoder for Continuous Sign Language Recognition, AA‐CSLR, to address the constraints imposed by limited data availability, leveraging the capabilities of generative models. The integration of pretrained knowledge, coupled with cross‐modal alignment, enhances the representation of sign language by effectively aligning visual and textual features. Through extensive experiments on publicly available datasets (PHOENIX‐2014, PHOENIX‐2014T, and CSL‐Daily), we demonstrate the effectiveness of our proposed method in achieving competitive performance in continuous sign language recognition. |
| Author | Li, Shaozi; Kamal, Suhail Muhammad; Chen, Yidong |
| Author_xml | – sequence: 1 givenname: Suhail Muhammad orcidid: 0000-0001-8019-6012 surname: Kamal fullname: Kamal, Suhail Muhammad organization: Bayero University Kano – sequence: 2 givenname: Yidong orcidid: 0000-0002-0243-7228 surname: Chen fullname: Chen, Yidong email: ydchen@xmu.edu.cn organization: Ministry of Culture and Tourism – sequence: 3 givenname: Shaozi surname: Li fullname: Li, Shaozi organization: Ministry of Culture and Tourism |
| ContentType | Journal Article |
| Copyright | 2024 John Wiley & Sons Ltd. |
| DOI | 10.1002/cpe.8220 |
| Discipline | Computer Science |
| EISSN | 1532-0634 |
| EndPage | n/a |
| Genre | article |
| GrantInformation_xml | – fundername: National Natural Science Foundation of China funderid: 62076211 |
| ISSN | 1532-0626 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 22 |
| Language | English |
| ORCID | 0000-0002-0243-7228 0000-0001-8019-6012 |
| PageCount | 10 |
| PublicationDate | 10 October 2024 |
| PublicationPlace | Hoboken |
| PublicationTitle | Concurrency and computation |
| PublicationYear | 2024 |
| Publisher | Wiley Subscription Services, Inc |
| SubjectTerms | adversarial autoencoder; Availability; continuous sign language recognition; Datasets; Deafness; Knowledge representation; Modules; Performance enhancement; Sign language; vision‐language |
| Title | Adversarial autoencoder for continuous sign language recognition |
| URI | https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcpe.8220 https://www.proquest.com/docview/3128158396 |
| Volume | 36 |
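The abstract above describes the method only at a high level: an adversarial autoencoder regularizes the contextual module under scarce training data, and a cross-modal alignment term ties visual representations to textual features obtained from pretrained models. The sketch below illustrates how these two ingredients are typically combined in code; it is not the paper's actual architecture, and the layer sizes, the Gaussian prior, the contrastive alignment loss, and every name in it are illustrative assumptions.

```python
# Minimal sketch (assumed, not the AA-CSLR implementation) of an adversarial
# autoencoder over sign-video features with a cross-modal alignment term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdversarialAutoencoder(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=256, text_dim=256):
        super().__init__()
        # Encoder over per-frame visual features (stand-in for the contextual module).
        self.encoder = nn.GRU(feat_dim, latent_dim, batch_first=True)
        # Decoder reconstructs the visual features from the latent sequence.
        self.decoder = nn.Linear(latent_dim, feat_dim)
        # Discriminator separates encoded latents from samples of a prior.
        self.discriminator = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        # Projects text embeddings into the latent space for alignment.
        self.text_proj = nn.Linear(text_dim, latent_dim)

    def forward(self, visual_feats):              # (B, T, feat_dim)
        z, _ = self.encoder(visual_feats)         # (B, T, latent_dim)
        recon = self.decoder(z)                   # (B, T, feat_dim)
        return z, recon

def aae_losses(model, visual_feats, text_emb):
    """Reconstruction, adversarial, and cross-modal alignment terms (sketch)."""
    z, recon = model(visual_feats)
    # 1) Autoencoding: reconstruct the input visual features.
    loss_rec = F.mse_loss(recon, visual_feats)
    # 2) Adversarial regularization: the discriminator separates prior samples
    #    from encoded latents; the encoder tries to fool it.
    bce = nn.BCEWithLogitsLoss()
    prior = torch.randn_like(z)                   # assumed Gaussian prior
    d_prior = model.discriminator(prior)
    d_latent = model.discriminator(z.detach())    # detach when updating the discriminator
    loss_disc = bce(d_prior, torch.ones_like(d_prior)) + \
                bce(d_latent, torch.zeros_like(d_latent))
    d_gen = model.discriminator(z)
    loss_gen = bce(d_gen, torch.ones_like(d_gen))
    # 3) Cross-modal alignment: pull pooled video latents toward text embeddings
    #    with a simple InfoNCE-style contrastive loss over the batch.
    v = F.normalize(z.mean(dim=1), dim=-1)              # (B, latent_dim)
    t = F.normalize(model.text_proj(text_emb), dim=-1)  # (B, latent_dim)
    logits = v @ t.t() / 0.07
    targets = torch.arange(v.size(0))
    loss_align = F.cross_entropy(logits, targets)
    return loss_rec, loss_disc, loss_gen, loss_align

# Example usage with random tensors standing in for real features.
model = AdversarialAutoencoder()
video = torch.randn(4, 60, 512)   # 4 clips, 60 frames, 512-d visual features
text = torch.randn(4, 256)        # 4 sentence-level text embeddings
losses = aae_losses(model, video, text)
```

In a full training loop the discriminator and the encoder/decoder would be updated in alternating steps, and the text embeddings would come from a pretrained language model applied to the gloss or sentence annotations.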