Looking Beyond Single Images for Weakly Supervised Semantic Segmentation Learning
This article studies the problem of learning weakly supervised semantic segmentation (WSSS) from image-level supervision only. Current popular solutions leverage object localization maps from classifiers as supervision for semantic segmentation learning, and struggle to make the localization maps capture more complete object content.
Saved in:
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 46, No. 3, pp. 1635-1649 |
|---|---|
| Main authors: | Wang, Wenguan; Sun, Guolei; Van Gool, Luc |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2024 |
| Subjects: | |
| ISSN: | 0162-8828, 1939-3539, 2160-9292 |
| Online access: | Full text |
| Abstract | This article studies the problem of learning weakly supervised semantic segmentation (WSSS) from image-level supervision only. Current popular solutions leverage object localization maps from classifiers as supervision for semantic segmentation learning, and struggle to make the localization maps capture more complete object content. Rather than previous efforts that primarily focus on intra-image information, we address the value of cross-image semantic relations for comprehensive object pattern mining. To achieve this, two neural co-attentions are incorporated into the classifier to complementarily capture cross-image semantic similarities and differences. In particular, given a pair of training images, one co-attention enforces the classifier to recognize the common semantics from co-attentive objects, while the other one, called contrastive co-attention, drives the classifier to identify the unique semantics from the rest, unshared objects. This helps the classifier discover more object patterns and better ground semantics in image regions. In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference, hence eventually benefiting semantic segmentation learning. More importantly, our algorithm provides a unified framework that handles well different WSSS settings, i.e., learning WSSS with 1) precise image-level supervision only, 2) extra simple single-label data, and 3) extra noisy web data. Without bells and whistles, it sets new state-of-the-arts on all these settings. Moreover, our approach ranked 1st place in the Weakly-Supervised Semantic Segmentation Track of CVPR2020 Learning from Imperfect Data Challenge. The extensive experimental results demonstrate well the efficacy and high utility of our method. |
|---|---|
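The co-attention idea described in the abstract can be illustrated with a small sketch. The snippet below is a minimal, assumption-laden illustration rather than the authors' released implementation: it assumes paired backbone feature maps of shape (B, C, H, W), a learnable bilinear affinity between the two images, and simplifies the contrastive branch to the residual left after removing the co-attentive part; all class names, shapes, and that residual simplification are assumptions.

```python
# Minimal sketch of cross-image co-attention for a pair of training images.
# Not the paper's official code; shapes and the contrastive residual are assumptions.
import torch
import torch.nn as nn


class CoAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Learnable bilinear weight for the cross-image affinity (assumed form).
        self.weight = nn.Parameter(torch.empty(channels, channels))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a, feat_b: (B, C, H, W) backbone features of a pair of images.
        b, c, h, w = feat_a.shape
        fa = feat_a.flatten(2)                                   # (B, C, H*W)
        fb = feat_b.flatten(2)                                   # (B, C, H*W)
        # Affinity between every location in image A and every location in B.
        affinity = fa.transpose(1, 2) @ self.weight @ fb         # (B, H*W, H*W)
        # Co-attentive (shared) features: each image attends to its partner.
        att_a = fb @ torch.softmax(affinity, dim=2).transpose(1, 2)  # (B, C, H*W)
        att_b = fa @ torch.softmax(affinity, dim=1)                  # (B, C, H*W)
        common_a = att_a.reshape(b, c, h, w)
        common_b = att_b.reshape(b, c, h, w)
        # Contrastive branch (simplified here): the unshared, residual part.
        unique_a = feat_a - common_a
        unique_b = feat_b - common_b
        return (common_a, common_b), (unique_a, unique_b)


if __name__ == "__main__":
    coatt = CoAttention(channels=256)
    fa, fb = torch.randn(2, 256, 28, 28), torch.randn(2, 256, 28, 28)
    (ca, cb), (ua, ub) = coatt(fa, fb)
    # Per the abstract, the common features would be classified against the shared
    # image-level labels and the unique features against the unshared labels.
    print(ca.shape, ua.shape)
```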
| Author | Sun, Guolei; Van Gool, Luc; Wang, Wenguan |
| Author details | 1. Wang, Wenguan (ORCID: 0000-0002-0802-9567; wenguanwang.ai@gmail.com; ReLER, AAII, University of Technology Sydney, Ultimo, NSW, Australia); 2. Sun, Guolei (ORCID: 0000-0001-8667-9656; guolei.sun@vision.ee.ethz.ch; ETH Zurich, Zürich, Switzerland); 3. Van Gool, Luc (ORCID: 0000-0002-3445-5711; vangool@vision.ee.ethz.ch; ETH Zurich, Zürich, Switzerland) |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/35439127 (view this record in MEDLINE/PubMed) |
| CODEN | ITPIDJ |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| DOI | 10.1109/TPAMI.2022.3168530 |
| Discipline | Engineering; Computer Science |
| EISSN | 2160-9292, 1939-3539 |
| EndPage | 1649 |
| Genre | orig-research Journal Article |
| Grant information | ARC DECRA (grant DE220101390) |
| ISICitedReferencesCount | 85 |
| ISSN | 0162-8828 1939-3539 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 3 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0001-8667-9656 0000-0002-0802-9567 0000-0002-3445-5711 |
| PMID | 35439127 |
| PageCount | 15 |
| PublicationDate | 2024-03-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
| PublicationTitleAbbrev | TPAMI |
| PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
| PublicationYear | 2024 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1635 |
| SubjectTerms | Algorithms; Birds; Classifiers; co-attention; cross-image semantic relation; Data mining; Image segmentation; Localization; Location awareness; Machine learning; Noise measurement; Pattern analysis; Semantic segmentation; Semantics; Supervision; Training; Training data; weakly supervised learning |
| Title | Looking Beyond Single Images for Weakly Supervised Semantic Segmentation Learning |
| URI | https://ieeexplore.ieee.org/document/9760057 https://www.ncbi.nlm.nih.gov/pubmed/35439127 https://www.proquest.com/docview/2923121708 https://www.proquest.com/docview/2652865313 |
| Volume | 46 |