NICEST: Noisy Label Correction and Training for Robust Scene Graph Generation
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 46, No. 10, pp. 6873-6888 |
|---|---|
| Main authors: | Lin Li, Jun Xiao, Hanrong Shi, Hanwang Zhang, Yi Yang, Wei Liu, Long Chen |
| Format: | Journal Article |
| Language: | English |
| Published: | IEEE, 01.10.2024 |
| Subjects: | Annotations; Benchmark testing; Multi-Teacher knowledge distillation; NIST; Noise measurement; noisy label learning; out-of-distribution; scene graph generation; Task analysis; Training; Visualization |
| ISSN: | 0162-8828, 2160-9292 |
| Online access: | Get full text: https://ieeexplore.ieee.org/document/10496249 |
| Abstract | Nearly all existing scene graph generation (SGG) models have overlooked the ground-truth annotation quality of mainstream SGG datasets, i.e., they assume: 1) all the manually annotated positive samples are equally correct; 2) all the un-annotated negative samples are absolutely background. In this article, we argue that neither assumption applies to SGG: there are numerous "noisy" ground-truth predicate labels that break these two assumptions and harm the training of unbiased SGG models. To this end, we propose a novel NoIsy label CorrEction and Sample Training strategy for SGG, NICEST, which rules out these noisy label issues by generating high-quality samples and designing an effective training strategy. Specifically, it consists of two parts. 1) NICE detects noisy samples and then reassigns higher-quality soft predicate labels to them; it contains three main steps: negative Noisy Sample Detection (Neg-NSD), positive NSD (Pos-NSD), and Noisy Sample Correction (NSC). First, Neg-NSD is treated as an out-of-distribution detection problem, and pseudo labels are assigned to all detected noisy negative samples. Then, Pos-NSD uses a density-based clustering algorithm to detect noisy positive samples. Lastly, NSC uses weighted KNN to reassign more robust soft predicate labels, rather than hard labels, to all noisy positive samples. 2) NIST is a multi-teacher knowledge distillation based training strategy that enables the model to learn unbiased fusion knowledge; a dynamic trade-off weighting strategy in NIST is designed to penalize the bias of different teachers. Due to the model-agnostic nature of both NICE and NIST, NICEST can be seamlessly incorporated into any SGG architecture to boost its performance on different predicate categories. In addition, to better assess the generalization ability of SGG models, we propose a new benchmark, VG-OOD, by reorganizing the prevalent VG dataset. This reorganization deliberately makes the predicate distributions between the training and test sets as different as possible for each subject-object category pair, which helps disentangle the influence of subject-object category biases. Extensive ablations and results on different backbones and tasks attest to the effectiveness and generalization ability of each component of NICEST. |
|---|---|
| Authors | Lin Li (College of Computer Science, Zhejiang University, Hangzhou, China; mukti@zju.edu.cn); Jun Xiao (College of Computer Science, Zhejiang University, Hangzhou, China; junx@zju.edu.cn); Hanrong Shi (College of Computer Science, Zhejiang University, Hangzhou, China; hanrong@zju.edu.cn); Hanwang Zhang (School of Computer Science and Engineering, Nanyang Technological University, Singapore; hanwangzhang@ntu.edu.sg); Yi Yang (College of Computer Science, Zhejiang University, Hangzhou, China; yangyics@zju.edu.cn); Wei Liu (Data Platform, Tencent, Shenzhen, China; wl2223@columbia.edu); Long Chen (Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong; longchen@ust.hk) |
| CODEN | ITPIDJ |
| ContentType | Journal Article |
| DOI | 10.1109/TPAMI.2024.3387349 |
| Discipline | Engineering; Computer Science |
| EISSN | 2160-9292 |
| EndPage | 6888 |
| Genre | orig-research |
| Funding | HKUST Special Support for Young Faculty (Grant F0927); HKUST Sports Science and Technology Research (Grant SSTRG24EG04); Fundamental Research Funds for the Central Universities; National Natural Science Foundation of China (Grant 62337001); National Key Research & Development Project of China (Grant 2021ZD0110700) |
| ISSN | 0162-8828 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 10 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0001-6148-9709 0000-0002-0512-880X 0000-0002-6142-9914 0000-0001-7374-8739 0000-0002-3865-8145 0000-0001-6817-0063 0000-0002-5678-4487 |
| PageCount | 16 |
| PublicationDate | 2024-10-01 |
| PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
| PublicationTitleAbbrev | TPAMI |
| PublicationYear | 2024 |
| Publisher | IEEE |
| StartPage | 6873 |
| SubjectTerms | Annotations; Benchmark testing; Multi-Teacher knowledge distillation; NIST; Noise measurement; noisy label learning; out-of-distribution; scene graph generation; Task analysis; Training; Visualization |
| Title | NICEST: Noisy Label Correction and Training for Robust Scene Graph Generation |
| URI | https://ieeexplore.ieee.org/document/10496249 |
| Volume | 46 |
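The NSC step summarized in the abstract, reassigning soft predicate labels to detected noisy positive samples via weighted KNN, can be illustrated with a minimal Python sketch. This is not the authors' released implementation: the feature representation, the neighbour count `k`, the Gaussian bandwidth, and the function name are illustrative assumptions.

```python
import numpy as np

def weighted_knn_soft_labels(noisy_feats, clean_feats, clean_labels,
                             num_classes, k=5, bandwidth=1.0):
    """Reassign soft predicate labels to noisy positive samples (hypothetical sketch).

    noisy_feats  : (N, D) features of samples flagged as noisy positives
    clean_feats  : (M, D) features of samples kept as clean positives
    clean_labels : (M,)   hard predicate indices of the clean samples
    Returns an (N, num_classes) array of soft label distributions.
    """
    soft = np.zeros((len(noisy_feats), num_classes))
    for i, x in enumerate(noisy_feats):
        dists = np.linalg.norm(clean_feats - x, axis=1)            # distance to every clean sample
        nn = np.argsort(dists)[:k]                                 # indices of the k nearest neighbours
        weights = np.exp(-dists[nn] ** 2 / (2 * bandwidth ** 2))   # closer neighbours count more
        for w, idx in zip(weights, nn):
            soft[i, clean_labels[idx]] += w                        # weighted vote per predicate class
        soft[i] /= soft[i].sum()                                   # normalise into a distribution
    return soft

# Toy usage: 3 flagged noisy samples, 10 clean samples, 4 predicate classes.
rng = np.random.default_rng(0)
noisy = rng.normal(size=(3, 8))
clean = rng.normal(size=(10, 8))
labels = rng.integers(0, 4, size=10)
print(weighted_knn_soft_labels(noisy, clean, labels, num_classes=4))
```

In the pipeline the abstract describes, this correction would only be applied to samples already flagged by Pos-NSD's density-based clustering; here that flagging is assumed to have happened upstream, and `noisy_feats` simply holds the flagged samples' features.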