First-Person Video Domain Adaptation With Multi-Scene Cross-Site Datasets and Attention-Based Methods
Unsupervised Domain Adaptation (UDA) can transfer knowledge from labeled source data to unlabeled target data of the same categories. However, UDA for first-person video action recognition is an under-explored problem, with a lack of benchmark datasets and limited consideration of first-person video...
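As context for the UDA setting described above, the sketch below shows one common recipe for unsupervised domain adaptation: domain-adversarial training with a gradient reversal layer. It is a generic, minimal illustration only and is not claimed to be the method proposed in this paper; the module names, feature sizes, and the 7-way label space are assumptions chosen purely for demonstration.

```python
# Minimal sketch of domain-adversarial UDA with a gradient reversal layer (GRL).
# Generic illustration only; not the paper's method. All sizes are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialModel(nn.Module):
    def __init__(self, in_dim=1024, feat_dim=256, num_classes=7, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.feature = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)  # action labels
        self.domain_disc = nn.Linear(feat_dim, 2)            # source vs. target

    def forward(self, x):
        f = self.feature(x)
        class_logits = self.classifier(f)
        # Reversed gradients push features toward domain-invariance.
        domain_logits = self.domain_disc(GradReverse.apply(f, self.lambd))
        return class_logits, domain_logits

# Labeled source batch and unlabeled target batch (random stand-ins).
model = DomainAdversarialModel()
src, src_y = torch.randn(8, 1024), torch.randint(0, 7, (8,))
tgt = torch.randn(8, 1024)
cls_s, dom_s = model(src)
_, dom_t = model(tgt)
ce = nn.CrossEntropyLoss()
loss = (ce(cls_s, src_y)
        + ce(dom_s, torch.zeros(8, dtype=torch.long))
        + ce(dom_t, torch.ones(8, dtype=torch.long)))
loss.backward()
```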
| Published in: | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 12, pp. 7774-7788 |
|---|---|
| Main Authors: | Liu, Xianyuan; Zhou, Shuo; Lei, Tao; Jiang, Ping; Chen, Zhixiang; Lu, Haiping |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2023 |
| Subjects: | Action recognition; Activity recognition; Adaptation; Benchmark testing; channel-temporal attention; Datasets; first-person vision; Representation learning; unsupervised domain adaptation; Video |
| ISSN: | 1051-8215 (print); 1558-2205 (online) |
| Online Access: | Get full text |
| Abstract | Unsupervised Domain Adaptation (UDA) can transfer knowledge from labeled source data to unlabeled target data of the same categories. However, UDA for first-person video action recognition is an under-explored problem, with a lack of benchmark datasets and limited consideration of first-person video characteristics. Existing benchmark datasets provide videos with a single activity scene, e.g., kitchen, and similar global video statistics. However, multiple activity scenes and different global video statistics are essential for developing robust UDA networks for real-world applications. To this end, we first introduce two first-person video domain adaptation datasets: ADL-7 and GTEA_KITCHEN-6. To the best of our knowledge, they are the first to provide multi-scene and cross-site settings for the UDA problem in first-person video action recognition, promoting diversity. They provide five more domains in addition to the original three from existing datasets, enriching data for this area. They are also compatible with existing datasets, ensuring scalability. First-person videos pose unique challenges, i.e., actions tend to occur in hand-object interaction areas. Therefore, networks that pay more attention to such areas can benefit common feature learning in UDA. Attention mechanisms can endow networks with the ability to allocate resources adaptively to the important parts of the inputs and fade out the rest. Hence, we introduce channel-temporal attention modules to capture the channel-wise and temporal-wise relationships and model the inter-dependencies important to this characteristic. Moreover, we propose a Channel-Temporal Attention Network (CTAN) to integrate these modules into existing architectures. CTAN outperforms baselines on the new datasets and one existing dataset, EPIC-8. (An illustrative sketch of such a channel-temporal attention block follows the record fields at the end of this page.) |
|---|---|
| Authors & Affiliations | 1. Liu, Xianyuan (ORCID 0000-0002-3084-519X), Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, China; 2. Zhou, Shuo (ORCID 0000-0002-8069-2814), Department of Computer Science, University of Sheffield, Sheffield, United Kingdom; 3. Lei, Tao (ORCID 0000-0002-0900-1582), Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, China; 4. Jiang, Ping (ORCID 0000-0002-8679-2189), Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, China; 5. Chen, Zhixiang, Department of Computer Science, University of Sheffield, Sheffield, United Kingdom; 6. Lu, Haiping (ORCID 0000-0002-0349-2181), Department of Computer Science, University of Sheffield, Sheffield, United Kingdom |
| CODEN | ITCTEM |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| DOI | 10.1109/TCSVT.2023.3281671 |
| Discipline | Engineering |
| EISSN | 1558-2205 |
| EndPage | 7788 |
| Genre | orig-research |
| GrantInformation | China Scholarship Council, Grant 201904910380 (funder ID: 10.13039/501100004543) |
| ISSN | 1051-8215 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 12 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-8679-2189 0000-0002-3084-519X 0000-0002-8069-2814 0000-0002-0349-2181 0000-0002-0900-1582 0000-0002-5636-6082 |
| PageCount | 15 |
| PublicationCentury | 2000 |
| PublicationDate | 2023-12-01 |
| PublicationDateYYYYMMDD | 2023-12-01 |
| PublicationDate_xml | – month: 12 year: 2023 text: 2023-12-01 day: 01 |
| PublicationDecade | 2020 |
| PublicationPlace | New York |
| PublicationPlace_xml | – name: New York |
| PublicationTitle | IEEE Transactions on Circuits and Systems for Video Technology |
| PublicationTitleAbbrev | TCSVT |
| PublicationYear | 2023 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 7774 |
| SubjectTerms | Action recognition; Activity recognition; Adaptation; Benchmark testing; Benchmarks; channel-temporal attention; Data mining; Datasets; first-person vision; Image reconstruction; Kitchens; Knowledge management; Modules; Networks; Representation learning; Scalability; Target recognition; Training; unsupervised domain adaptation; Video |
| Title | First-Person Video Domain Adaptation With Multi-Scene Cross-Site Datasets and Attention-Based Methods |
| URI | https://ieeexplore.ieee.org/document/10139790 https://www.proquest.com/docview/2899465856 |
| Volume | 33 |
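For readers who want a concrete picture of the channel-temporal attention described in the abstract, below is a minimal PyTorch-style sketch of a block that re-weights segment-level video features along the channel and temporal axes. This is an illustrative sketch only, not the authors' CTAN implementation: the class name, the squeeze-and-excitation-style channel branch, the way the two attentions are combined, and all dimensions are assumptions made for demonstration.

```python
# Illustrative channel-temporal attention over segment-level video features.
# Not the authors' CTAN code; structure and sizes are assumptions.
import torch
import torch.nn as nn

class ChannelTemporalAttention(nn.Module):
    """Re-weights features of shape (batch, time, channels) along both axes."""
    def __init__(self, channels: int, num_segments: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze over time, excite over channels.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Temporal attention: squeeze over channels, excite over segments.
        self.temporal_fc = nn.Sequential(
            nn.Linear(num_segments, num_segments),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels)
        ch_weights = self.channel_fc(x.mean(dim=1))  # (batch, channels)
        t_weights = self.temporal_fc(x.mean(dim=2))  # (batch, time)
        x = x * ch_weights.unsqueeze(1)               # channel re-weighting
        x = x * t_weights.unsqueeze(2)                # temporal re-weighting
        return x

# Example: 8 temporal segments with 2048-dim features per segment.
feats = torch.randn(4, 8, 2048)
attended = ChannelTemporalAttention(channels=2048, num_segments=8)(feats)
print(attended.shape)  # torch.Size([4, 8, 2048])
```

In a full pipeline, such a block would typically sit between a frame/segment feature extractor and the classification and domain-alignment heads, so that both attentions are learned jointly with the rest of the network.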