Accelerated muscle mass estimation from CT images through transfer learning

Published in: BMC medical imaging, Volume 24, Issue 1, Article 271 (18 pages)
Main authors: Yoon, Seunghan; Kim, Tae Hyung; Jung, Young Kul; Kim, Younghoon
Medium: Journal Article
Language: English
Published: London: BioMed Central, 09.10.2024
BioMed Central Ltd
Springer Nature B.V
BMC
Subjects:
ISSN: 1471-2342
Online access: Get full text
Abstract Background The cost of labeling to collect training data sets for deep learning is especially high in medical applications compared to other fields. Furthermore, because images vary across computed tomography (CT) devices, a deep-learning-based segmentation model trained on one device often does not work on images from a different device. Methods In this study, we propose an efficient learning strategy for deep learning models in medical image segmentation. We aim to overcome the difficulties of CT image segmentation by training a VNet segmentation model that enables rapid labeling of organs in CT images, using a model obtained by transfer learning from a small number of manually labeled images, called SEED images. We established a process for generating SEED images and fine-tuning a model via transfer learning. We evaluate the performance of various segmentation models, including vanilla UNet, UNETR, Swin-UNETR, and VNet. Furthermore, assuming a scenario in which a model is repeatedly trained on CT images collected from multiple devices, where catastrophic forgetting often occurs, we examine whether the performance of our model degrades. Results We show that transfer learning can train a model that segments muscles well from a small number of images. In addition, VNet showed better performance than existing semi-automated segmentation tools and other deep learning networks on muscle and liver segmentation tasks. We also confirmed that VNet is the most robust model against catastrophic forgetting. Conclusion In the 2D CT image segmentation task, the CNN-based network showed better performance than existing semi-automatic segmentation tools and recent transformer-based networks.
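Segmentation comparisons like the ones summarized in the abstract are conventionally scored with the Dice similarity coefficient. As a minimal sketch only (the helper name `dice_score` and the toy masks are ours, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks: each labels 3 pixels, 2 of which overlap.
a = np.zeros((4, 4)); a[0, :3] = 1
b = np.zeros((4, 4)); b[0, 1:4] = 1
print(round(dice_score(a, b), 3))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means the predicted and reference masks coincide exactly; 0.0 means no overlap.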
ArticleNumber 271
Audience Academic
Author Jung, Young Kul
Yoon, Seunghan
Kim, Younghoon
Kim, Tae Hyung
Author_xml – sequence: 1
  givenname: Seunghan
  surname: Yoon
  fullname: Yoon, Seunghan
  organization: Department of Computer Science & Engineering (Major in Bio Artificial Intelligence), Hanyang University at Ansan
– sequence: 2
  givenname: Tae Hyung
  surname: Kim
  fullname: Kim, Tae Hyung
  organization: Division of Gastroenterology and Hepatology, Hallym University Sacred Heart Hospital
– sequence: 3
  givenname: Young Kul
  surname: Jung
  fullname: Jung, Young Kul
  email: free93cool@gmail.com
  organization: Division of Gastroenterology and Hepatology, Department of Internal Medicine, Korea University Ansan Hospital
– sequence: 4
  givenname: Younghoon
  surname: Kim
  fullname: Kim, Younghoon
  email: nongaussian@hanyang.ac.kr
  organization: Department of Computer Science & Engineering (Major in Bio Artificial Intelligence), Hanyang University at Ansan
BackLink https://www.ncbi.nlm.nih.gov/pubmed/39385108$$D View this record in MEDLINE/PubMed
CitedBy_id crossref_primary_10_3389_fpls_2025_1571445
crossref_primary_10_1007_s44196_025_00937_x
Cites_doi 10.1109/TAI.2023.3327981
10.1007/978-3-030-87199-4_23
10.1186/s40537-023-00727-2
10.1016/j.compbiomed.2023.106646
10.1109/ACCESS.2023.3244952
10.1016/j.eswa.2022.119024
10.3390/app10134523
10.1016/j.compbiomed.2022.106365
10.1109/WACV51458.2022.00181
10.1109/ACCESS.2023.3335948
10.1007/978-3-031-08999-2_22
10.1016/j.inffus.2023.03.008
10.1007/978-3-030-33128-3_11
10.3389/fonc.2021.580806
10.3390/s19122650
10.1117/12.2549406
10.1016/j.media.2023.102958
10.1038/s41598-021-95972-x
10.1007/s12194-024-00839-1
10.1371/journal.pone.0257371
10.1109/83.661186
10.3350/cmh.2022.0231
10.3348/kjr.2019.0470
10.1109/ICASSP40776.2020.9053405
10.1109/3DV.2016.79
10.1016/j.ejrad.2020.109153
10.1007/s10462-011-9220-3
10.1109/CVPR.2016.308
10.1016/j.media.2020.101950
10.1097/MD.0000000000015867
10.1007/978-3-030-00889-5_1
10.1007/s10278-017-9988-z
10.1016/j.compmedimag.2019.04.007
10.3390/s21062083
10.1148/ryai.2021200130
10.1016/j.clnu.2021.06.025
10.1002/mp.14465
10.1109/WACV56688.2023.00614
10.1117/12.2512965
10.1016/j.promfg.2020.01.386
10.1007/978-3-319-24574-4_28
10.3390/electronics9030427
10.1016/j.media.2022.102680
10.3390/cancers13071590
ContentType Journal Article
Copyright The Author(s) 2024
2024. The Author(s).
COPYRIGHT 2024 BioMed Central Ltd.
2024. This work is licensed under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
The Author(s) 2024 2024
Copyright_xml – notice: The Author(s) 2024
– notice: 2024. The Author(s).
– notice: COPYRIGHT 2024 BioMed Central Ltd.
– notice: 2024. This work is licensed under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
– notice: The Author(s) 2024 2024
DOI 10.1186/s12880-024-01449-4
DeliveryMethod fulltext_linktorsrc
Discipline Medicine
EISSN 1471-2342
EndPage 18
ExternalDocumentID oai_doaj_org_article_f2eff519ed7242228fc73ff9d254bfb9
PMC11465928
A811773215
39385108
10_1186_s12880_024_01449_4
Genre Journal Article
GeographicLocations South Korea
GeographicLocations_xml – name: South Korea
GrantInformation_xml – fundername: Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)
  grantid: RS-2022-00155885; RS-2022-00155885
– fundername: Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT)
  grantid: P0025661; P0025661
– fundername: Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)
  grantid: RS-2022-00155885
– fundername: Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT)
  grantid: P0025661
ISICitedReferencesCount 3
ISSN 1471-2342
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Deep learning
Convolutional neural network
Medical image segmentation
CT image segmentation
Language English
License 2024. The Author(s).
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
OpenAccessLink https://www.proquest.com/docview/3115122807?pq-origsite=%requestingapplication%
PMID 39385108
PQID 3115122807
PQPubID 44833
PageCount 18
ParticipantIDs doaj_primary_oai_doaj_org_article_f2eff519ed7242228fc73ff9d254bfb9
pubmedcentral_primary_oai_pubmedcentral_nih_gov_11465928
proquest_miscellaneous_3115094986
proquest_journals_3115122807
gale_infotracmisc_A811773215
gale_infotracacademiconefile_A811773215
pubmed_primary_39385108
crossref_citationtrail_10_1186_s12880_024_01449_4
crossref_primary_10_1186_s12880_024_01449_4
springer_journals_10_1186_s12880_024_01449_4
PublicationCentury 2000
PublicationDate 2024-10-09
PublicationDateYYYYMMDD 2024-10-09
PublicationDate_xml – month: 10
  year: 2024
  text: 2024-10-09
  day: 09
PublicationDecade 2020
PublicationPlace London
PublicationPlace_xml – name: London
– name: England
PublicationTitle BMC medical imaging
PublicationTitleAbbrev BMC Med Imaging
PublicationTitleAlternate BMC Med Imaging
PublicationYear 2024
Publisher BioMed Central
BioMed Central Ltd
Springer Nature B.V
BMC
Publisher_xml – name: BioMed Central
– name: BioMed Central Ltd
– name: Springer Nature B.V
– name: BMC
References H Li (1449_CR26) 2019; 19
1449_CR47
P Bilic (1449_CR3) 2023; 84
TH Kim (1449_CR14) 2022; 28
L Alzubaidi (1449_CR45) 2021; 13
1449_CR48
H Lee (1449_CR23) 2017; 30
LL Ackermans (1449_CR19) 2021; 21
1449_CR41
FJM Shamrat (1449_CR31) 2023; 155
1449_CR40
1449_CR42
FJM Shamrat (1449_CR33) 2023; 11
1449_CR44
1449_CR2
1449_CR5
AE Kavur (1449_CR4) 2021; 69
1449_CR36
1449_CR35
1449_CR39
1449_CR7
1449_CR6
1449_CR9
1449_CR32
L Alzubaidi (1449_CR46) 2020; 9
1449_CR34
L Alzubaidi (1449_CR43) 2020; 10
Y Fu (1449_CR24) 2020; 47
S Dabiri (1449_CR22) 2019; 75
KA Weber (1449_CR25) 2021; 11
D Nishiyama (1449_CR28) 2021; 16
1449_CR29
N Sharma (1449_CR8) 2010; 35
AM Mharib (1449_CR10) 2012; 37
1449_CR21
YS Lee (1449_CR18) 2021; 40
H Lu (1449_CR1) 2019; 39
L Alzubaidi (1449_CR30) 2023; 10
D Zopfs (1449_CR27) 2020; 130
C Xu (1449_CR49) 1998; 7
1449_CR13
1449_CR16
X Li (1449_CR37) 2023; 152
1449_CR15
1449_CR17
HJ Park (1449_CR20) 2020; 21
1449_CR50
1449_CR51
1449_CR12
1449_CR11
H Yang (1449_CR38) 2023; 213
References_xml – volume: 35
  start-page: 3
  issue: 1
  year: 2010
  ident: 1449_CR8
  publication-title: J Med Phys Assoc Med Phys India.
– ident: 1449_CR42
– ident: 1449_CR32
  doi: 10.1109/TAI.2023.3327981
– ident: 1449_CR5
  doi: 10.1007/978-3-030-87199-4_23
– volume: 10
  start-page: 46
  issue: 1
  year: 2023
  ident: 1449_CR30
  publication-title: J Big Data.
  doi: 10.1186/s40537-023-00727-2
– volume: 155
  start-page: 106646
  year: 2023
  ident: 1449_CR31
  publication-title: Comput Biol Med.
  doi: 10.1016/j.compbiomed.2023.106646
– volume: 11
  start-page: 16376
  year: 2023
  ident: 1449_CR33
  publication-title: IEEE Access.
  doi: 10.1109/ACCESS.2023.3244952
– volume: 213
  start-page: 119024
  year: 2023
  ident: 1449_CR38
  publication-title: Expert Syst Appl.
  doi: 10.1016/j.eswa.2022.119024
– volume: 10
  start-page: 4523
  issue: 13
  year: 2020
  ident: 1449_CR43
  publication-title: Appl Sci.
  doi: 10.3390/app10134523
– volume: 152
  start-page: 106365
  year: 2023
  ident: 1449_CR37
  publication-title: Comput Biol Med.
  doi: 10.1016/j.compbiomed.2022.106365
– ident: 1449_CR50
  doi: 10.1109/WACV51458.2022.00181
– ident: 1449_CR39
  doi: 10.1109/ACCESS.2023.3335948
– ident: 1449_CR51
  doi: 10.1007/978-3-031-08999-2_22
– ident: 1449_CR29
  doi: 10.1016/j.inffus.2023.03.008
– ident: 1449_CR40
  doi: 10.1007/978-3-030-33128-3_11
– ident: 1449_CR35
– ident: 1449_CR17
  doi: 10.3389/fonc.2021.580806
– volume: 19
  start-page: 2650
  issue: 12
  year: 2019
  ident: 1449_CR26
  publication-title: Sensors.
  doi: 10.3390/s19122650
– ident: 1449_CR16
  doi: 10.1117/12.2549406
– ident: 1449_CR9
  doi: 10.1016/j.media.2023.102958
– volume: 11
  start-page: 1
  issue: 1
  year: 2021
  ident: 1449_CR25
  publication-title: Sci Rep.
  doi: 10.1038/s41598-021-95972-x
– ident: 1449_CR15
– ident: 1449_CR41
  doi: 10.1007/s12194-024-00839-1
– volume: 16
  start-page: e0257371
  issue: 9
  year: 2021
  ident: 1449_CR28
  publication-title: PLoS ONE.
  doi: 10.1371/journal.pone.0257371
– volume: 7
  start-page: 359
  issue: 3
  year: 1998
  ident: 1449_CR49
  publication-title: IEEE Trans Image Process.
  doi: 10.1109/83.661186
– volume: 28
  start-page: 876
  issue: 4
  year: 2022
  ident: 1449_CR14
  publication-title: Clin Mol Hepatol.
  doi: 10.3350/cmh.2022.0231
– ident: 1449_CR11
– volume: 21
  start-page: 88
  issue: 1
  year: 2020
  ident: 1449_CR20
  publication-title: Korean J Radiol.
  doi: 10.3348/kjr.2019.0470
– ident: 1449_CR13
  doi: 10.1109/ICASSP40776.2020.9053405
– ident: 1449_CR48
  doi: 10.1109/3DV.2016.79
– volume: 130
  start-page: 109153
  year: 2020
  ident: 1449_CR27
  publication-title: Eur J Radiol.
  doi: 10.1016/j.ejrad.2020.109153
– volume: 37
  start-page: 83
  year: 2012
  ident: 1449_CR10
  publication-title: Artif Intell Rev.
  doi: 10.1007/s10462-011-9220-3
– ident: 1449_CR34
  doi: 10.1109/CVPR.2016.308
– volume: 69
  start-page: 101950
  year: 2021
  ident: 1449_CR4
  publication-title: Med Image Anal.
  doi: 10.1016/j.media.2020.101950
– ident: 1449_CR7
  doi: 10.1097/MD.0000000000015867
– ident: 1449_CR12
  doi: 10.1007/978-3-030-00889-5_1
– volume: 30
  start-page: 487
  issue: 4
  year: 2017
  ident: 1449_CR23
  publication-title: J Digit Imaging.
  doi: 10.1007/s10278-017-9988-z
– ident: 1449_CR44
– volume: 75
  start-page: 47
  year: 2019
  ident: 1449_CR22
  publication-title: Comput Med Imaging Graph.
  doi: 10.1016/j.compmedimag.2019.04.007
– volume: 21
  start-page: 2083
  issue: 6
  year: 2021
  ident: 1449_CR19
  publication-title: Sensors.
  doi: 10.3390/s21062083
– ident: 1449_CR21
  doi: 10.1148/ryai.2021200130
– volume: 40
  start-page: 5038
  issue: 8
  year: 2021
  ident: 1449_CR18
  publication-title: Clin Nutr.
  doi: 10.1016/j.clnu.2021.06.025
– volume: 47
  start-page: 5723
  issue: 11
  year: 2020
  ident: 1449_CR24
  publication-title: Med Phys.
  doi: 10.1002/mp.14465
– ident: 1449_CR36
  doi: 10.1109/WACV56688.2023.00614
– ident: 1449_CR6
  doi: 10.1117/12.2512965
– volume: 39
  start-page: 422
  year: 2019
  ident: 1449_CR1
  publication-title: Procedia Manuf.
  doi: 10.1016/j.promfg.2020.01.386
– ident: 1449_CR2
  doi: 10.1007/978-3-319-24574-4_28
– volume: 9
  start-page: 427
  issue: 3
  year: 2020
  ident: 1449_CR46
  publication-title: Electronics.
  doi: 10.3390/electronics9030427
– ident: 1449_CR47
– volume: 84
  start-page: 102680
  year: 2023
  ident: 1449_CR3
  publication-title: Med Image Anal.
  doi: 10.1016/j.media.2022.102680
– volume: 13
  start-page: 1590
  issue: 7
  year: 2021
  ident: 1449_CR45
  publication-title: Cancers.
  doi: 10.3390/cancers13071590
SSID ssj0017834
Score 2.348717
Snippet Background The cost of labeling to collect training data sets using deep learning is especially high in medical applications compared to other fields....
SourceID doaj
pubmedcentral
proquest
gale
pubmed
crossref
springer
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 271
SubjectTerms Abdomen
Analysis
Artificial intelligence
Comparative analysis
Computed tomography
Convolutional neural network
CT image segmentation
CT imaging
Datasets
Deep Learning
Diagnostic imaging
Humans
Image processing
Image Processing, Computer-Assisted - methods
Image segmentation
Imaging
Knowledge transfer
Labeling
Liver
Machine learning
Medical image segmentation
Medical imaging
Medical imaging equipment
Medicine
Medicine & Public Health
Methods
Muscle, Skeletal - diagnostic imaging
Muscles
Neural networks
Performance degradation
Performance evaluation
Radiology
Technology application
Tomography, X-Ray Computed - methods
Transfer learning
Title Accelerated muscle mass estimation from CT images through transfer learning
URI https://link.springer.com/article/10.1186/s12880-024-01449-4
https://www.ncbi.nlm.nih.gov/pubmed/39385108
https://www.proquest.com/docview/3115122807
https://www.proquest.com/docview/3115094986
https://pubmed.ncbi.nlm.nih.gov/PMC11465928
https://doaj.org/article/f2eff519ed7242228fc73ff9d254bfb9
Volume 24
WOSCitedRecordID wos001329341900001
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVADU
  databaseName: BioMedCentral Open Access - Free Access to All
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: RBZ
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://www.biomedcentral.com/search/
  providerName: BioMedCentral
– providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: DOA
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: M~E
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre
– providerCode: PRVPQU
  databaseName: Advanced Technologies & Aerospace Database
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: P5Z
  dateStart: 20090101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/hightechjournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Biological Science Database
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: M7P
  dateStart: 20090101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/biologicalscijournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Health & Medical Collection (ProQuest)
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: 7X7
  dateStart: 20090101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/healthcomplete
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Nursing & Allied Health Database
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: 7RV
  dateStart: 20090101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/nahs
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: BENPR
  dateStart: 20090101
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Publicly Available Content Database
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: PIMPY
  dateStart: 20090101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/publiccontent
  providerName: ProQuest
– providerCode: PRVAVX
  databaseName: SpringerLINK Contemporary 1997-Present
  customDbUrl:
  eissn: 1471-2342
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0017834
  issn: 1471-2342
  databaseCode: RSV
  dateStart: 20011201
  isFulltext: true
  titleUrlDefault: https://link.springer.com/search?facet-content-type=%22Journal%22
  providerName: Springer Nature
linkProvider ProQuest
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Accelerated+muscle+mass+estimation+from+CT+images+through+transfer+learning&rft.jtitle=BMC+medical+imaging&rft.au=Yoon%2C+Seunghan&rft.au=Kim%2C+Tae+Hyung&rft.au=Jung%2C+Young+Kul&rft.au=Kim%2C+Younghoon&rft.date=2024-10-09&rft.pub=BioMed+Central&rft.eissn=1471-2342&rft.volume=24&rft_id=info:doi/10.1186%2Fs12880-024-01449-4&rft_id=info%3Apmid%2F39385108&rft.externalDocID=PMC11465928