Multimodal medical image‐to‐image translation via variational autoencoder latent space mapping



Published in: Medical physics (Lancaster), Volume 52, Issue 7, p. e17912 - n/a
Main authors: Liang, Zhiwen; Cheng, Mengjie; Ma, Jinhui; Hu, Ying; Li, Song; Tian, Xin
Format: Journal Article
Language: English
Published: United States, 01.07.2025
ISSN: 0094-2405, 2473-4209
Online access: Get full text
Abstract
Background: Medical image translation has become an essential tool in modern radiotherapy, providing complementary information for target delineation and dose calculation. However, current approaches are constrained by their modality-specific nature, requiring separate model training for each pair of imaging modalities. This limitation hinders the efficient deployment of comprehensive multimodal solutions in clinical practice.
Purpose: To develop a unified image translation method using variational autoencoder (VAE) latent space mapping, which enables flexible conversion between different medical imaging modalities to meet clinical demands.
Methods: We propose a three-stage approach to construct a unified image translation model. First, a VAE is trained to learn a shared latent space for various medical images. A stacked bidirectional transformer is then used to learn the mapping between modalities within the latent space, guided by the image modality. Finally, the VAE decoder is fine-tuned to improve image quality. Our internal dataset comprises paired imaging data from 87 head and neck cases, each containing cone beam computed tomography (CBCT), computed tomography (CT), MR T1c, and MR T2w images. The effectiveness of this strategy is quantitatively evaluated on our internal dataset and a public dataset using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Additionally, the dosimetric characteristics of the synthetic CT images are evaluated, and subjective quality assessments of the synthetic MR images are conducted to determine their clinical value.
Results: The VAE with the Kullback-Leibler (KL)-16 image tokenizer demonstrates superior image reconstruction ability, achieving a Fréchet inception distance (FID) of 4.84, a PSNR of 32.80 dB, and an SSIM of 92.33%. In synthetic CT tasks, the model is more accurate in intramodality translation than in cross-modality translation, with an MAE of 21.60 ± 8.80 Hounsfield units (HU) in the CBCT-to-CT task versus 45.23 ± 13.21 HU and 47.55 ± 13.88 HU in the MR T1c-to-CT and T2w-to-CT tasks, respectively. For the cross-contrast MR translation tasks, the results are very close: mean PSNR and SSIM of 26.33 ± 1.36 dB and 85.21% ± 2.21% for the T1c-to-T2w translation, and 26.03 ± 1.67 dB and 85.73% ± 2.66% for the T2w-to-T1c translation. Dosimetric results indicate that all gamma pass rates for synthetic CTs exceed 99% for photon intensity-modulated radiation therapy (IMRT) planning. However, subjective quality assessment scores for synthetic MR images are lower than those for real MR images.
Conclusions: The proposed three-stage approach yields a unified image translation model that effectively handles a wide range of medical image translation tasks. This flexibility and effectiveness make it a valuable tool for clinical applications.
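The MAE, PSNR, and SSIM figures reported above follow standard definitions. The sketch below is not the authors' evaluation code; it is a minimal illustration of those metrics for a synthetic-vs-real CT slice, and the SSIM variant uses global image statistics rather than the usual 11×11 sliding window, so its values will differ slightly from windowed SSIM implementations.

```python
import numpy as np

def mae_hu(ct_syn, ct_real):
    """Mean absolute error in Hounsfield units between synthetic and real CT."""
    return float(np.mean(np.abs(ct_syn.astype(np.float64) - ct_real.astype(np.float64))))

def psnr_db(img_syn, img_real, data_range):
    """Peak signal-to-noise ratio in dB over a given intensity range."""
    mse = np.mean((img_syn.astype(np.float64) - img_real.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(img_syn, img_real, data_range):
    """Simplified SSIM from global statistics (no sliding window)."""
    x = img_syn.astype(np.float64)
    y = img_real.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

# Toy check: a "synthetic CT" that is the real slice plus mild noise.
rng = np.random.default_rng(0)
real = rng.uniform(-1000.0, 1000.0, size=(64, 64))
syn = real + rng.normal(0.0, 20.0, size=real.shape)
print(mae_hu(syn, real), psnr_db(syn, real, 2000.0), ssim_global(syn, real, 2000.0))
```

With Gaussian noise of standard deviation 20 HU, the MAE lands near 20·√(2/π) ≈ 16 HU and the PSNR near 40 dB over a 2000 HU range, which gives a feel for the scale of the paper's reported numbers.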
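The ">99% gamma pass rate" quoted for the synthetic-CT dose distributions refers to the standard global gamma index, which accepts a voxel if any nearby point of the evaluated dose agrees within a combined dose/distance tolerance (commonly 3%/3 mm). The following brute-force 2D sketch is an assumption for illustration only, not the dosimetric pipeline used in the paper; function names and the low-dose cutoff are hypothetical choices.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0, cutoff=0.10):
    """Brute-force global 2D gamma analysis (3%/3 mm by default).

    For every reference voxel above the low-dose cutoff, search a small
    neighborhood for the point minimizing the combined dose/distance
    metric; the voxel passes if that minimum gamma is <= 1.
    """
    ref_max = float(dose_ref.max())
    dd = dose_tol * ref_max                      # absolute dose tolerance (global)
    radius = int(np.ceil(dist_tol_mm / spacing_mm)) + 1
    ny, nx = dose_ref.shape
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            if dose_ref[i, j] < cutoff * ref_max:
                continue                         # ignore the low-dose region
            total += 1
            best = np.inf
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist = spacing_mm * float(np.hypot(di, dj))
                        diff = float(dose_eval[ii, jj]) - float(dose_ref[i, j])
                        best = min(best, (diff / dd) ** 2 + (dist / dist_tol_mm) ** 2)
            passed += best <= 1.0
    return 100.0 * passed / max(total, 1)

# Identical dose grids pass everywhere.
ref = np.full((16, 16), 50.0)
print(gamma_pass_rate(ref, ref, spacing_mm=2.0))  # → 100.0
```

A uniform 2% dose scaling still passes a 3% criterion, while a 10% scaling fails every voxel; clinical tools additionally interpolate between voxels rather than sampling only grid points, so this sketch slightly underestimates pass rates.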
Author details:
1. Liang, Zhiwen (Hubei Key Laboratory of Precision Radiation Oncology)
2. Cheng, Mengjie (Renmin Hospital of Wuhan University)
3. Ma, Jinhui (Union Hospital, Tongji Medical College, Huazhong University of Science and Technology)
4. Hu, Ying (Hubei University of Education)
5. Li, Song (Wuhan University; ls@whu.edu.cn)
6. Tian, Xin (Wuhan University; xin.tian@whu.edu.cn)
Copyright 2025 American Association of Physicists in Medicine.
DOI 10.1002/mp.17912
Discipline Medicine
Physics
EISSN 2473-4209
EndPage n/a
Genre researchArticle
Journal Article
GrantInformation: Natural Science Foundation of Hubei Province of China, grant 2025AFB842
ISSN 0094-2405
2473-4209
IsPeerReviewed true
IsScholarly true
Issue 7
Keywords deep learning
multimodality image translation
latent space mapping
bidirectional transformer
adaptive radiotherapy
Language English
Notes Zhiwen Liang and Mengjie Cheng contributed equally to this work.
PMID 40439703
PageCount 12
PublicationTitle Medical physics (Lancaster)
PublicationTitleAlternate Med Phys
PublicationYear 2025
StartPage e17912
SubjectTerms adaptive radiotherapy
Autoencoder
bidirectional transformer
Cone-Beam Computed Tomography
deep learning
Head and Neck Neoplasms - diagnostic imaging
Humans
Image Processing, Computer-Assisted - methods
latent space mapping
Magnetic Resonance Imaging
Multimodal Imaging - methods
multimodality image translation
URI https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fmp.17912
https://www.ncbi.nlm.nih.gov/pubmed/40439703
https://www.proquest.com/docview/3213612289
Volume 52