INet: Convolutional Networks for Biomedical Image Segmentation

Bibliographic Details
Published in: IEEE Access, Vol. 9, pp. 16591–16603
Main Authors: Weng, Weihao; Zhu, Xin
Format: Journal Article
Language:English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
ISSN: 2169-3536
Abstract Encoder-decoder networks are state-of-the-art approaches to biomedical image segmentation, but they have two problems: the widely used pooling operations may discard spatial information, so low-level semantics are lost; and although feature fusion methods can mitigate this loss, feature maps of different scales cannot be easily fused because down- and upsampling change their spatial resolution. To address these issues, we propose INet, which enlarges receptive fields by increasing the kernel sizes of convolutional layers in steps (e.g., from 3×3 to 7×7 and then 15×15) instead of downsampling. Inspired by the Inception module, INet extracts features with kernels of different sizes by concatenating the output feature maps of all preceding convolutional layers. We also find that large kernels make the network feasible for biomedical image segmentation. In addition, INet uses two overlapping max-poolings, i.e., max-poolings with stride 1, to extract the sharpest features. Fixed-size, fixed-channel feature maps enable INet to concatenate feature maps and add multiple shortcuts across layers. In this way, INet can recover low-level semantics by concatenating the feature maps of all preceding layers and can expedite training through the multiple shortcuts. Because INet has additional residual shortcuts, we compare INet with a UNet system that also has residual shortcuts (ResUNet). To confirm INet as a backbone architecture for biomedical image segmentation, we implement dense connections on INet (called DenseINet) and compare it to a DenseUNet system with residual shortcuts (ResDenseUNet). INet and DenseINet require 16.9% and 37.6% fewer parameters than ResUNet and ResDenseUNet, respectively. In comparison with six encoder-decoder approaches on nine public datasets, INet and DenseINet demonstrate efficient improvements in biomedical image segmentation. INet outperforms DeepLabV3, which uses atrous convolution instead of downsampling to increase receptive fields. INet also outperforms two recent methods, HRNet and MS-NAS, that maintain high-resolution representations and repeatedly exchange information across resolutions.
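
As a reading aid for the abstract above, note the receptive-field arithmetic behind the stepped kernel schedule: a stack of stride-1 convolutions has receptive field r = 1 + sum_i (k_i - 1), so 3×3, 7×7, and 15×15 layers together already cover 1 + 2 + 6 + 14 = 23 pixels with no downsampling at all. The Python (PyTorch) sketch below is a minimal illustration of that idea reconstructed from the abstract alone, not from the authors' released code; the names INetBlock and TinyINet, the width and depth settings, and the pooling window of 3 are all assumptions.

import torch
import torch.nn as nn

class INetBlock(nn.Module):
    # One stage: a large-kernel convolution over the concatenation of all
    # preceding feature maps, two overlapping (stride-1) max-poolings to
    # keep the sharpest activations, and a residual shortcut.
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        pad = kernel_size // 2  # "same" padding: spatial size is preserved
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.pool1 = nn.MaxPool2d(3, stride=1, padding=1)  # overlapping pooling
        self.pool2 = nn.MaxPool2d(3, stride=1, padding=1)
        # 1x1 projection so the shortcut matches the output channel count
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        y = self.act(self.bn(self.conv(x)))
        y = self.pool2(self.pool1(y))
        return y + self.skip(x)  # residual shortcut across the block

class TinyINet(nn.Module):
    # Kernel sizes grow in steps (3 -> 7 -> 15) instead of downsampling;
    # each block sees the input and every earlier block's output concatenated.
    def __init__(self, in_ch=1, width=32, num_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for k in (3, 7, 15):
            self.blocks.append(INetBlock(ch, width, k))
            ch += width  # dense concatenation grows the channel count
        self.head = nn.Conv2d(ch, num_classes, 1)

    def forward(self, x):
        feats = [x]
        for blk in self.blocks:
            feats.append(blk(torch.cat(feats, dim=1)))
        return self.head(torch.cat(feats, dim=1))

# Usage: a 64x64 single-channel image keeps its resolution end to end.
logits = TinyINet()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])

The key invariant in the sketch is that every layer preserves the input resolution, which is what makes the cross-layer concatenations and shortcuts in the abstract possible.
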
Author Weng, Weihao
Zhu, Xin
Author_xml – sequence: 1
  givenname: Weihao
  orcidid: 0000-0002-0869-3409
  surname: Weng
  fullname: Weng, Weihao
  organization: Biomedical Information Engineering Laboratory, The University of Aizu, Aizu-Wakamatsu, Japan
– sequence: 2
  givenname: Xin
  surname: Zhu
  fullname: Zhu, Xin
  email: zhuxin@u-aizu.ac.jp
  organization: Biomedical Information Engineering Laboratory, The University of Aizu, Aizu-Wakamatsu, Japan
CODEN IAECCG
CitedBy_id crossref_primary_10_1016_j_ijmecsci_2025_110861
crossref_primary_10_1007_s10462_023_10595_0
crossref_primary_10_1002_ima_23069
crossref_primary_10_3233_JIFS_211459
crossref_primary_10_1007_s11440_022_01706_2
crossref_primary_10_1007_s10462_025_11234_6
crossref_primary_10_1016_j_addma_2024_104591
crossref_primary_10_1016_j_compbiomed_2022_105941
crossref_primary_10_1016_j_compbiomed_2021_104697
crossref_primary_10_3390_bioengineering9110644
crossref_primary_10_1038_s41467_022_32398_7
crossref_primary_10_1007_s40747_022_00909_0
crossref_primary_10_1109_ACCESS_2023_3318000
crossref_primary_10_1016_j_engappai_2025_110507
crossref_primary_10_3390_diagnostics13182924
crossref_primary_10_1109_TVLSI_2024_3383871
crossref_primary_10_1007_s42979_023_01854_6
crossref_primary_10_3390_s23167242
crossref_primary_10_14358_PERS_23_00083R2
crossref_primary_10_1016_j_procs_2025_03_278
crossref_primary_10_1038_s41433_024_03532_0
crossref_primary_10_1007_s11760_021_02107_w
crossref_primary_10_1038_s41598_025_05783_7
crossref_primary_10_1016_j_asoc_2022_109837
crossref_primary_10_1016_j_zemedi_2022_04_002
crossref_primary_10_1016_j_displa_2024_102929
crossref_primary_10_1016_j_engappai_2025_110633
crossref_primary_10_1016_j_patcog_2025_112339
crossref_primary_10_1117_1_JMI_11_2_026001
crossref_primary_10_1007_s10439_024_03461_9
crossref_primary_10_1016_j_scitotenv_2024_173843
crossref_primary_10_1177_13506501211073242
crossref_primary_10_1007_s12541_023_00857_w
crossref_primary_10_1111_tpj_16627
crossref_primary_10_1109_ACCESS_2023_3330958
crossref_primary_10_1007_s10791_025_09679_y
crossref_primary_10_1002_mp_16967
crossref_primary_10_1007_s00521_024_10757_3
crossref_primary_10_1007_s10462_023_10621_1
crossref_primary_10_1016_j_ijleo_2022_169565
crossref_primary_10_1007_s12524_024_01829_x
crossref_primary_10_1111_srt_13891
crossref_primary_10_3389_fpls_2022_847225
crossref_primary_10_1186_s12880_023_01131_1
crossref_primary_10_1002_admt_202400172
crossref_primary_10_3390_app15052830
crossref_primary_10_1016_j_dsp_2024_104614
crossref_primary_10_3390_s22145140
crossref_primary_10_1007_s11042_023_16659_1
crossref_primary_10_3389_fneur_2023_1217796
crossref_primary_10_1002_mp_17014
crossref_primary_10_1007_s00371_023_03256_4
crossref_primary_10_22430_22565337_3052
crossref_primary_10_3390_buildings15071126
crossref_primary_10_1016_j_nexres_2025_100465
crossref_primary_10_1109_TGRS_2022_3229086
crossref_primary_10_1016_j_eij_2025_100702
crossref_primary_10_1117_1_OE_62_3_031208
crossref_primary_10_5194_amt_16_3257_2023
crossref_primary_10_1016_j_rsase_2022_100870
crossref_primary_10_7717_peerj_cs_1364
crossref_primary_10_1109_TVCG_2024_3372117
crossref_primary_10_1016_j_displa_2024_102708
crossref_primary_10_1111_ijac_70020
crossref_primary_10_1002_cav_2234
crossref_primary_10_1093_bib_bbad072
crossref_primary_10_1117_1_JEI_33_6_063021
crossref_primary_10_1109_OJCS_2025_3565185
crossref_primary_10_3390_electronics14193758
crossref_primary_10_1016_j_zemedi_2023_11_001
crossref_primary_10_1038_s41598_025_08649_0
crossref_primary_10_1007_s12524_024_01902_5
crossref_primary_10_3389_fdata_2023_1174478
crossref_primary_10_1016_j_compag_2021_106624
crossref_primary_10_1016_j_procir_2023_03_065
crossref_primary_10_3389_frwa_2021_800369
crossref_primary_10_1007_s42514_024_00184_0
crossref_primary_10_1007_s44443_025_00167_3
crossref_primary_10_1016_j_compag_2024_109560
crossref_primary_10_1016_j_compag_2025_110172
crossref_primary_10_1117_1_JEI_34_2_023002
crossref_primary_10_1016_j_bspc_2025_107762
crossref_primary_10_1007_s00521_024_09459_7
crossref_primary_10_1016_j_procs_2023_01_299
crossref_primary_10_3390_app132011555
crossref_primary_10_1016_j_cmpb_2022_107208
crossref_primary_10_1148_ryai_220080
crossref_primary_10_1007_s00401_021_02393_1
crossref_primary_10_1016_j_ultras_2024_107439
crossref_primary_10_1016_j_jag_2023_103180
crossref_primary_10_3389_frwa_2023_1178114
crossref_primary_10_3389_frwa_2024_1439906
crossref_primary_10_1038_s41598_022_26372_y
crossref_primary_10_1016_j_matchar_2021_111638
crossref_primary_10_1002_jeo2_12111
crossref_primary_10_1007_s10055_023_00892_y
crossref_primary_10_1016_j_eswa_2025_128618
crossref_primary_10_1016_j_ecolind_2023_110086
crossref_primary_10_1016_j_cmpb_2022_107314
crossref_primary_10_1109_ACCESS_2024_3354379
crossref_primary_10_3389_fphys_2025_1457197
crossref_primary_10_1038_s41598_023_40899_8
crossref_primary_10_32604_cmes_2025_060917
crossref_primary_10_1016_j_jvcir_2022_103748
crossref_primary_10_1016_j_scs_2022_104181
crossref_primary_10_3390_rs14122745
crossref_primary_10_1002_smll_202405065
crossref_primary_10_1016_j_neucom_2024_128531
crossref_primary_10_1016_j_compbiomed_2023_107793
crossref_primary_10_1016_j_energy_2024_133186
crossref_primary_10_1080_15230406_2025_2533316
crossref_primary_10_1016_j_ejmp_2023_103184
crossref_primary_10_1016_j_matchar_2024_114532
crossref_primary_10_1007_s11571_023_09965_9
crossref_primary_10_1016_j_bspc_2024_106975
crossref_primary_10_1016_j_bspc_2024_106850
crossref_primary_10_4103_jmss_jmss_42_24
crossref_primary_10_3390_rs13204088
crossref_primary_10_1016_j_measen_2023_100998
crossref_primary_10_1016_j_rsma_2025_104065
crossref_primary_10_1007_s10278_024_01239_y
crossref_primary_10_1007_s11069_025_07396_9
crossref_primary_10_14201_adcaij_31528
crossref_primary_10_1007_s11042_023_16398_3
crossref_primary_10_1029_2022JD038163
crossref_primary_10_1007_s41870_024_01945_4
crossref_primary_10_3390_rs16020275
crossref_primary_10_1016_j_rsase_2023_101036
crossref_primary_10_3390_pharmaceutics14112378
crossref_primary_10_3389_feart_2024_1345104
crossref_primary_10_1038_s41375_025_02769_2
crossref_primary_10_1007_s11042_022_13900_1
crossref_primary_10_1109_TPAMI_2025_3579271
crossref_primary_10_1109_TPS_2023_3339669
crossref_primary_10_1088_1361_6560_ad4c4f
crossref_primary_10_3390_agriengineering5010018
crossref_primary_10_1007_s13198_024_02667_3
crossref_primary_10_1016_j_tws_2025_113262
crossref_primary_10_1016_j_engappai_2023_106743
crossref_primary_10_1016_j_compbiomed_2024_108167
crossref_primary_10_1016_j_matchar_2025_115486
crossref_primary_10_1002_jbmr_4879
crossref_primary_10_1007_s10570_023_05314_5
crossref_primary_10_1371_journal_pone_0277277
crossref_primary_10_1016_j_jfutfo_2022_03_004
crossref_primary_10_1109_LRA_2025_3592080
crossref_primary_10_1080_15376494_2022_2129888
crossref_primary_10_1016_j_asoc_2024_111749
crossref_primary_10_4103_ATMR_ATMR_27_25
crossref_primary_10_3390_s22239366
crossref_primary_10_1109_ACCESS_2023_3269068
crossref_primary_10_1016_j_dsp_2025_105167
crossref_primary_10_1016_j_crad_2024_06_006
crossref_primary_10_1016_j_jag_2025_104786
crossref_primary_10_1177_14780771241290649
crossref_primary_10_1109_JIOT_2023_3304526
crossref_primary_10_3390_photonics11090887
crossref_primary_10_3390_mi13101611
crossref_primary_10_3390_e25020228
crossref_primary_10_1177_20552076251332018
crossref_primary_10_1088_1361_6501_ad35dd
crossref_primary_10_1016_j_jag_2022_102877
crossref_primary_10_1007_s10462_024_10887_z
crossref_primary_10_1016_j_aquaeng_2022_102299
crossref_primary_10_3390_biomedicines10092157
crossref_primary_10_3390_rs13193981
crossref_primary_10_1016_j_eng_2023_09_023
crossref_primary_10_3390_agriculture14071098
crossref_primary_10_1016_j_procs_2021_11_084
crossref_primary_10_1016_j_compbiomed_2023_107484
crossref_primary_10_1134_S0001433823120083
crossref_primary_10_1007_s11227_022_04379_6
crossref_primary_10_1109_TPAMI_2024_3435571
crossref_primary_10_1007_s42243_023_01031_2
crossref_primary_10_1016_j_measurement_2024_116557
crossref_primary_10_1016_j_cels_2025_101261
crossref_primary_10_1038_s41598_025_94244_2
crossref_primary_10_1109_TGRS_2023_3301310
crossref_primary_10_1371_journal_pcbi_1012075
crossref_primary_10_1186_s40537_024_00948_z
crossref_primary_10_1016_j_displa_2025_103032
crossref_primary_10_4236_health_2023_155029
crossref_primary_10_3390_s22229019
crossref_primary_10_1016_j_jnucmat_2022_154219
crossref_primary_10_1109_ACCESS_2023_3249294
crossref_primary_10_1016_j_biosystemseng_2025_02_007
crossref_primary_10_3389_fnins_2023_1277501
crossref_primary_10_1136_bjo_2023_323308
crossref_primary_10_1007_s42979_023_01879_x
crossref_primary_10_1016_j_dsp_2023_104221
crossref_primary_10_1093_biomethods_bpaf030
crossref_primary_10_1371_journal_pone_0304691
crossref_primary_10_1016_j_compbiomed_2022_105806
crossref_primary_10_1007_s11694_023_02092_3
crossref_primary_10_1109_JSTARS_2022_3184156
crossref_primary_10_1097_SCS_0000000000009597
crossref_primary_10_1007_s12541_024_01092_7
crossref_primary_10_3390_cancers14010036
crossref_primary_10_1007_s11227_022_04686_y
crossref_primary_10_3390_app15116302
crossref_primary_10_3390_electronics12020292
crossref_primary_10_1021_acs_energyfuels_4c06337
crossref_primary_10_3390_electronics12224600
crossref_primary_10_12677_mos_2024_132176
crossref_primary_10_1007_s10489_022_04272_y
crossref_primary_10_1093_bib_bbad358
crossref_primary_10_1002_adts_202300465
crossref_primary_10_1016_j_engappai_2024_109292
crossref_primary_10_1016_j_matdes_2025_114475
Cites_doi 10.1109/CVPRW.2017.156
10.1016/j.compbiomed.2019.05.002
10.1109/SSCI.2017.8280804
10.1109/TPAMI.2015.2389824
10.1111/hpb.12056
10.1007/s10278-017-9983-4
10.1109/CVPR.2016.98
10.1155/2017/9283480
10.1007/978-3-030-00889-5_1
10.1109/TBME.2009.2035102
10.1109/TMI.2017.2664042
10.1109/ISBI.2018.8363547
10.1007/s00268-004-7435-z
10.1109/ISBI.2008.4540988
10.1109/TPAMI.2007.56
10.1007/978-3-319-46484-8_29
10.1007/978-3-319-46976-8_19
10.1109/CVPR.2016.314
10.1016/j.media.2018.10.004
10.1109/5.726791
10.1109/CVPR.2015.7299173
10.1109/CVPR.2017.243
10.1109/LGRS.2018.2802944
10.1109/CVPR.2015.7298965
10.1109/TMI.2013.2290491
10.1002/mp.13141
10.1016/j.eswa.2014.09.020
10.1109/CVPR.2007.383157
10.1109/CVPR.2017.189
10.1109/CVPR.2016.90
10.1007/s40273-014-0198-y
10.1371/journal.pone.0140381
10.1109/CVPR.2019.00584
10.1038/s41598-019-48802-0
10.1016/j.neunet.2019.08.025
10.1109/TPAMI.2016.2644615
10.1109/CVPR.2015.7298594
10.1109/TMI.2018.2845918
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DBID 97E
ESBDL
RIA
RIE
AAYXX
CITATION
7SC
7SP
7SR
8BQ
8FD
JG9
JQ2
L7M
L~C
L~D
DOA
DOI 10.1109/ACCESS.2021.3053408
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE Xplore Open Access Journals
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
METADEX
Technology Research Database
Materials Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Directory of Open Access Journals
DatabaseTitle CrossRef
Materials Research Database
Engineered Materials Abstracts
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
METADEX
Computer and Information Systems Abstracts Professional
DatabaseTitleList Materials Research Database


Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 2169-3536
EndPage 16603
ExternalDocumentID oai_doaj_org_article_09f9159c061f490e9c57a3b562c052ea
10_1109_ACCESS_2021_3053408
9330594
Genre orig-research
GrantInformation_xml – fundername: JSPS KAKENHI
  grantid: 18K11532; 18K08010
  funderid: 10.13039/501100001691
– fundername: Competitive Research Fund of The University of Aizu
  grantid: 2020-P-3
  funderid: 10.13039/501100006665
GroupedDBID 0R~
4.4
5VS
6IK
97E
AAJGR
ABAZT
ABVLG
ACGFS
ADBBV
AGSQL
ALMA_UNASSIGNED_HOLDINGS
BCNDV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
EBS
EJD
ESBDL
GROUPED_DOAJ
IPLJI
JAVBF
KQ8
M43
M~E
O9-
OCL
OK1
RIA
RIE
RNS
AAYXX
CITATION
7SC
7SP
7SR
8BQ
8FD
JG9
JQ2
L7M
L~C
L~D
ID FETCH-LOGICAL-c474t-b7bae1bc2bacf6bf8611f94a88865559e79a07788f23b67c5288500ff7a80c693
IEDL.DBID DOA
ISICitedReferencesCount 249
ISSN 2169-3536
IngestDate Fri Oct 03 12:50:55 EDT 2025
Mon Jun 30 06:30:28 EDT 2025
Sat Nov 29 06:11:53 EST 2025
Tue Nov 18 22:27:33 EST 2025
Wed Aug 27 05:54:23 EDT 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Language English
License https://creativecommons.org/licenses/by-nc-nd/4.0
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c474t-b7bae1bc2bacf6bf8611f94a88865559e79a07788f23b67c5288500ff7a80c693
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0002-0869-3409
OpenAccessLink https://doaj.org/article/09f9159c061f490e9c57a3b562c052ea
PQID 2483253555
PQPubID 4845423
PageCount 13
ParticipantIDs crossref_primary_10_1109_ACCESS_2021_3053408
proquest_journals_2483253555
doaj_primary_oai_doaj_org_article_09f9159c061f490e9c57a3b562c052ea
crossref_citationtrail_10_1109_ACCESS_2021_3053408
ieee_primary_9330594
PublicationCentury 2000
PublicationDate 20210000
2021-00-00
20210101
2021-01-01
PublicationDateYYYYMMDD 2021-01-01
PublicationDate_xml – year: 2021
  text: 20210000
PublicationDecade 2020
PublicationPlace Piscataway
PublicationPlace_xml – name: Piscataway
PublicationTitle IEEE Access
PublicationTitleAbbrev Access
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref56
ref12
ref15
ref14
health (ref47) 2016
ref53
ref52
ref55
ref11
ref54
ref10
bilic (ref45) 2019
krizhevsky (ref2) 2012
ref17
ref16
ref19
ref18
liu (ref27) 2015
liu (ref37) 2018
ref50
szegedy (ref36) 2017
ref46
luo (ref33) 2016
ref48
bakas (ref30) 2018
ref42
dumoulin (ref31) 2016
ref41
ref43
hinton (ref38) 1986; 1
simpson (ref44) 2019
ref49
simonyan (ref3) 2014
ref8
ref9
ref4
ref6
ref5
chen (ref32) 2017
ref40
ref35
ref34
wang (ref23) 2016
ref1
ref39
ref24
ref26
ref25
ref22
ref21
ronneberger (ref20) 2015
ref28
chollet (ref51) 2015
ref29
ray (ref13) 2004
goodfellow (ref7) 2016
References_xml – ident: ref41
  doi: 10.1109/CVPRW.2017.156
– ident: ref42
  doi: 10.1016/j.compbiomed.2019.05.002
– year: 2016
  ident: ref7
  publication-title: Deep Learning
– year: 2015
  ident: ref27
  article-title: ParseNet: Looking wider to see better
  publication-title: arXiv:1506.04579
– ident: ref11
  doi: 10.1109/SSCI.2017.8280804
– year: 2016
  ident: ref47
  publication-title: Ultrasound Nerve Segmentation
– year: 2017
  ident: ref32
  article-title: Rethinking atrous convolution for semantic image segmentation
  publication-title: arXiv:1706.05587
– ident: ref35
  doi: 10.1109/TPAMI.2015.2389824
– year: 2004
  ident: ref13
  publication-title: Information Technology Principles and Applications
– ident: ref50
  doi: 10.1111/hpb.12056
– year: 2015
  ident: ref51
  publication-title: Keras GitHub
– ident: ref8
  doi: 10.1007/s10278-017-9983-4
– ident: ref26
  doi: 10.1109/CVPR.2016.98
– ident: ref54
  doi: 10.1155/2017/9283480
– year: 2016
  ident: ref23
  article-title: Deeply-fused nets
  publication-title: arXiv:1605.07716
– ident: ref52
  doi: 10.1007/978-3-030-00889-5_1
– start-page: 9605
  year: 2018
  ident: ref37
  article-title: An intriguing failing of convolutional neural networks and the CoordConv solution
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref10
  doi: 10.1109/TBME.2009.2035102
– ident: ref46
  doi: 10.1109/TMI.2017.2664042
– ident: ref16
  doi: 10.1109/ISBI.2018.8363547
– year: 2014
  ident: ref3
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: arXiv:1409.1556
– ident: ref49
  doi: 10.1007/s00268-004-7435-z
– year: 2019
  ident: ref44
  article-title: A large annotated medical image dataset for the development and evaluation of segmentation algorithms
  publication-title: arXiv:1902.09063
– start-page: 4898
  year: 2016
  ident: ref33
  article-title: Understanding the effective receptive field in deep convolutional neural networks
  publication-title: Proc Adv Neural Inf Process Syst
– start-page: 1097
  year: 2012
  ident: ref2
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref18
  doi: 10.1109/ISBI.2008.4540988
– ident: ref34
  doi: 10.1109/TPAMI.2007.56
– ident: ref22
  doi: 10.1007/978-3-319-46484-8_29
– ident: ref21
  doi: 10.1007/978-3-319-46976-8_19
– year: 2016
  ident: ref31
  article-title: A guide to convolution arithmetic for deep learning
  publication-title: arXiv:1603.07285
– ident: ref28
  doi: 10.1109/CVPR.2016.314
– ident: ref53
  doi: 10.1016/j.media.2018.10.004
– start-page: 1
  year: 2017
  ident: ref36
  article-title: Inception-v4, Inception-ResNet and the impact of residual connections on learning
  publication-title: Proc 31st AAAI Conf Artif Intell
– ident: ref1
  doi: 10.1109/5.726791
– ident: ref55
  doi: 10.1109/CVPR.2015.7299173
– ident: ref6
  doi: 10.1109/CVPR.2017.243
– ident: ref39
  doi: 10.1109/LGRS.2018.2802944
– year: 2018
  ident: ref30
  article-title: Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge
  publication-title: arXiv:1811.02629
– ident: ref14
  doi: 10.1109/CVPR.2015.7298965
– volume: 1
  year: 1986
  ident: ref38
  article-title: Distributed representations
  publication-title: Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations
– ident: ref9
  doi: 10.1109/TMI.2013.2290491
– ident: ref17
  doi: 10.1002/mp.13141
– ident: ref19
  doi: 10.1016/j.eswa.2014.09.020
– ident: ref12
  doi: 10.1109/CVPR.2007.383157
– ident: ref25
  doi: 10.1109/CVPR.2017.189
– ident: ref5
  doi: 10.1109/CVPR.2016.90
– ident: ref48
  doi: 10.1007/s40273-014-0198-y
– ident: ref43
  doi: 10.1371/journal.pone.0140381
– ident: ref24
  doi: 10.1109/CVPR.2019.00584
– ident: ref40
  doi: 10.1038/s41598-019-48802-0
– year: 2019
  ident: ref45
  article-title: The liver tumor segmentation benchmark (LiTS)
  publication-title: arXiv:1901.04056
– ident: ref56
  doi: 10.1016/j.neunet.2019.08.025
– ident: ref15
  doi: 10.1109/TPAMI.2016.2644615
– ident: ref4
  doi: 10.1109/CVPR.2015.7298594
– start-page: 234
  year: 2015
  ident: ref20
  article-title: U-Net: Convolutional networks for biomedical image segmentation
  publication-title: Proc Int Conf Med Image Comput Comput-Assist Intervent
– ident: ref29
  doi: 10.1109/TMI.2018.2845918
SSID ssj0000816957
Score 2.612436
SourceID doaj
proquest
crossref
ieee
SourceType Open Website
Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 16591
SubjectTerms Biomedical image
Biomedical imaging
Coders
Convolution
convolutional networks
encoder–decoder networks
Feature extraction
Feature maps
Image segmentation
Kernel
Kernels
Medical imaging
semantic segmentation
Semantics
Spatial data
Spatial resolution
Tumors
Title INet: Convolutional Networks for Biomedical Image Segmentation
URI https://ieeexplore.ieee.org/document/9330594
https://www.proquest.com/docview/2483253555
https://doaj.org/article/09f9159c061f490e9c57a3b562c052ea
Volume 9
WOSCitedRecordID wos000613541200001
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 2169-3536
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000816957
  issn: 2169-3536
  databaseCode: DOA
  dateStart: 20130101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources
  customDbUrl:
  eissn: 2169-3536
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000816957
  issn: 2169-3536
  databaseCode: M~E
  dateStart: 20130101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre