PoolNet+: Exploring the Potential of Pooling for Salient Object Detection
We explore the potential of pooling techniques on the task of salient object detection by expanding its role in convolutional neural networks. In general, two pooling-based modules are proposed. A global guidance module (GGM) is first built based on the bottom-up pathway of the U-shape architecture,...
Saved in:
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 1, pp. 887-904 |
|---|---|
| Main authors: | Liu, Jiang-Jiang; Hou, Qibin; Liu, Zhi-Ang; Cheng, Ming-Ming |
| Format: | Journal Article |
| Language: | English |
| Published: | IEEE (The Institute of Electrical and Electronics Engineers, Inc.), United States, 01.01.2023 |
| Subjects: | |
| ISSN: | 0162-8828, 1939-3539, 2160-9292 |
| Online access: | Full text |
| Abstract | We explore the potential of pooling techniques on the task of salient object detection by expanding its role in convolutional neural networks. In general, two pooling-based modules are proposed. A global guidance module (GGM) is first built based on the bottom-up pathway of the U-shape architecture, which aims to guide the location information of the potential salient objects into layers at different feature levels. A feature aggregation module (FAM) is further designed to seamlessly fuse the coarse-level semantic information with the fine-level features in the top-down pathway. We can progressively refine the high-level semantic features with these two modules and obtain detail enriched saliency maps. Experimental results show that our proposed approach can locate the salient objects more accurately with sharpened details and substantially improve the performance compared with the existing state-of-the-art methods. Besides, our approach is fast and can run at a speed of 53 FPS when processing a 300 × 400 image. To make our approach better applied to mobile applications, we take MobileNetV2 as our backbone and re-tailor the structure of our pooling-based modules. Our mobile version model achieves a running speed of 66 FPS yet still performs better than most existing state-of-the-art methods. To verify the generalization ability of the proposed method, we apply it to the edge detection, RGB-D salient object detection, and camouflaged object detection tasks, and our method achieves better results than the corresponding state-of-the-art methods of these three tasks. Code can be found at http://mmcheng.net/poolnet/. |
|---|---|
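The abstract describes two pooling-based components: a global guidance module (GGM) that summarizes the deepest bottom-up features, and a feature aggregation module (FAM) that fuses that coarse guidance with finer features in the top-down pathway. Below is a minimal, illustrative PyTorch sketch of that idea; the module names, pooling scales, and channel widths are assumptions chosen for illustration and do not reproduce the authors' exact PoolNet+ implementation (see the official code at http://mmcheng.net/poolnet/).

```python
# Minimal sketch of pooling-based global guidance and feature aggregation.
# All hyperparameters (pool sizes, channel widths) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalGuidanceModule(nn.Module):
    """Summarize the deepest (coarsest) backbone features with pyramid average pooling."""

    def __init__(self, in_ch: int, out_ch: int, pool_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(s), nn.Conv2d(in_ch, out_ch, 1))
            for s in pool_sizes
        )
        self.fuse = nn.Conv2d(in_ch + len(pool_sizes) * out_ch, out_ch, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(b(x), size=(h, w), mode="bilinear",
                                align_corners=False) for b in self.branches]
        return self.fuse(torch.cat([x] + pooled, dim=1))


class FeatureAggregationModule(nn.Module):
    """Fuse a coarse guidance map with a finer feature map via multi-scale average pooling."""

    def __init__(self, ch: int, pool_strides=(2, 4, 8)):
        super().__init__()
        self.pools = [nn.AvgPool2d(s, stride=s) for s in pool_strides]
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, fine, guidance):
        # Upsample the coarse guidance to the resolution of the finer features.
        g = F.interpolate(guidance, size=fine.shape[2:], mode="bilinear",
                          align_corners=False)
        x = fine + g
        # Aggregate the merged features at several pooled scales and add them back.
        out = x
        for pool in self.pools:
            out = out + F.interpolate(pool(x), size=x.shape[2:], mode="bilinear",
                                      align_corners=False)
        return self.conv(out)


if __name__ == "__main__":
    ggm = GlobalGuidanceModule(in_ch=512, out_ch=128)
    fam = FeatureAggregationModule(ch=128)
    coarse = torch.randn(1, 512, 12, 16)   # deepest backbone features
    fine = torch.randn(1, 128, 48, 64)     # a finer top-down feature map
    print(fam(fine, ggm(coarse)).shape)    # -> torch.Size([1, 128, 48, 64])
```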
| Author | Liu, Jiang-Jiang; Hou, Qibin; Liu, Zhi-Ang; Cheng, Ming-Ming |
| Author details | 1. Liu, Jiang-Jiang (ORCID 0000-0002-1341-2763; j04.liu@gmail.com), College of Computer Science, Nankai University, Nankai, China; 2. Hou, Qibin (ORCID 0000-0002-8388-8708; andrewhoux@gmail.com), College of Computer Science, Nankai University, Nankai, China; 3. Liu, Zhi-Ang (ORCID 0000-0002-3319-4492; liuzhiang@mail.nankai.edu.cn), College of Computer Science, Nankai University, Nankai, China; 4. Cheng, Ming-Ming (ORCID 0000-0001-5550-8758; cmm@nankai.edu.cn), College of Computer Science, Nankai University, Nankai, China |
| CODEN | ITPIDJ |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| DOI | 10.1109/TPAMI.2021.3140168 |
| Discipline | Engineering Computer Science |
| EISSN | 2160-9292 1939-3539 |
| EndPage | 904 |
| Genre | orig-research Journal Article |
| GrantInformation | New Generation of AI, grant 2018AAA0100400; National Natural Science Foundation of China (NSFC), grant 61922046 (funder ID 10.13039/501100001809) |
| ISSN | 0162-8828 1939-3539 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-1341-2763 0000-0002-3319-4492 0000-0002-8388-8708 0000-0001-5550-8758 |
| PMID | 34982676 |
| PageCount | 18 |
| PublicationDate | 2023-01-01 |
| PublicationPlace | United States; New York |
| PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
| PublicationTitleAbbrev | TPAMI |
| PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
| PublicationYear | 2023 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 887 |
| SubjectTerms | Applications programs; Artificial neural networks; Convolutional neural networks; Corresponding states; Edge detection; feature aggregation; Feature extraction; global guidance; Image edge detection; Image segmentation; mobile application; Mobile computing; Modules; Object detection; Object recognition; pooling techniques; Salience; Salient object detection; Semantics; Task analysis |
| Title | PoolNet+: Exploring the Potential of Pooling for Salient Object Detection |
| URI | https://ieeexplore.ieee.org/document/9669083 https://www.ncbi.nlm.nih.gov/pubmed/34982676 https://www.proquest.com/docview/2747610091 https://www.proquest.com/docview/2616956455 |
| Volume | 45 |