Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 25, no. 1, p. 80
Main Authors: Yang, Wei-Jong; Wu, Chih-Chen; Yang, Jar-Ferr
Format: Journal Article
Language: English
Published: Basel, Switzerland: MDPI AG, 01.01.2025
ISSN: 1424-8220
Abstract Precision depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving and human–computer interaction. Through recent advancements in deep learning technologies, monocular depth estimation, with its simplicity, has surpassed traditional stereo camera systems, bringing new possibilities in 3D sensing. In this paper, using a single camera, we propose an end-to-end supervised monocular depth estimation autoencoder, which pairs an encoder that mixes a convolutional neural network with vision transformers and an effective adaptive fusion decoder to obtain high-precision depth maps. In the encoder, we construct a multi-scale feature extractor by mixing residual configurations of vision transformers to enhance both local and global information. In the adaptive fusion decoder, we introduce adaptive fusion modules to effectively merge the features of the encoder and the decoder. Lastly, the model is trained with a loss function that aligns with human perception, enabling it to focus on the depth values of foreground objects. The experimental results demonstrate that the proposed autoencoder effectively predicts the depth map from a single-view color image, increasing the first accuracy rate by about 28% and reducing the root mean square error by about 27% compared with an existing method on the NYU dataset.
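This record carries no implementation details beyond the abstract, so the following is a minimal, hypothetical PyTorch sketch of what an adaptive fusion module merging encoder skip features with decoder features could look like; the class name, the learned gating scheme, and the channel handling are illustrative assumptions, not the authors' published code.

import torch
import torch.nn as nn

class AdaptiveFusionModule(nn.Module):
    # Hypothetical sketch: learns a per-pixel gate that blends the encoder skip
    # feature with the upsampled decoder feature, instead of fixed concatenation
    # or addition. Not the authors' implementation.
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel weights in [0, 1]
        )
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat, dec_feat: (N, C, H, W) at the same spatial resolution
        w = self.gate(torch.cat([enc_feat, dec_feat], dim=1))
        fused = w * enc_feat + (1.0 - w) * dec_feat  # adaptive blend
        return self.refine(fused)

# Example: fuse a 64-channel encoder skip with a decoder feature of matching size.
fuse = AdaptiveFusionModule(channels=64)
out = fuse(torch.randn(1, 64, 60, 80), torch.randn(1, 64, 60, 80))  # -> (1, 64, 60, 80)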
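The abstract states only that the loss "aligns with human perception" and emphasizes foreground depth values, without giving its form. As a hedged stand-in, the sketch below pairs the widely used scale-invariant log loss (Eigen et al., 2014) with a hypothetical inverse-depth weighting that upweights near (foreground) pixels; the weighting scheme and function names are assumptions for illustration only.

import torch

def si_log_loss(pred: torch.Tensor, gt: torch.Tensor,
                lam: float = 0.85, eps: float = 1e-6) -> torch.Tensor:
    # Scale-invariant log loss (Eigen et al., 2014), a common supervised depth loss.
    d = torch.log(pred + eps) - torch.log(gt + eps)
    return torch.sqrt((d ** 2).mean() - lam * d.mean() ** 2)

def foreground_weighted_si_loss(pred: torch.Tensor, gt: torch.Tensor,
                                eps: float = 1e-6) -> torch.Tensor:
    # Hypothetical variant: inverse-depth weights upweight near (foreground)
    # pixels, loosely mimicking the perceptual emphasis the abstract describes.
    w = 1.0 / (gt + eps)
    w = w / w.mean()  # normalize so the overall loss scale stays comparable
    d = torch.log(pred + eps) - torch.log(gt + eps)
    return (w * d ** 2).mean()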
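The reported gains (about 28% in the first accuracy rate, about 27% in RMSE) follow the standard monocular-depth evaluation conventions: the "first accuracy rate" conventionally denotes the delta_1 threshold accuracy, the fraction of pixels with max(pred/gt, gt/pred) < 1.25, and RMSE is the root mean square error over valid depth pixels. A small NumPy sketch of both, assuming pred and gt are positive depth arrays of the same shape:

import numpy as np

def delta1_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    # Fraction of pixels whose prediction is within a factor of 1.25 of ground truth.
    ratio = np.maximum(pred / gt, gt / pred)
    return float((ratio < 1.25).mean())

def rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    # Root mean square error, typically reported in metres on the NYU dataset.
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

# Example on random positive "depths":
pred = np.random.uniform(0.5, 10.0, size=(480, 640))
gt = np.random.uniform(0.5, 10.0, size=(480, 640))
print(delta1_accuracy(pred, gt), rmse(pred, gt))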
Audience Academic
AuthorAffiliation 1 Department of Artificial Intelligence and Computer Engineering, National Chin-Yi University of Technology, Taichung 411, Taiwan; wjyang@ncut.edu.tw
2 Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan; jesse90302@gmail.com
Copyright COPYRIGHT 2025 MDPI AG. © 2024 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
DOI 10.3390/s25010080
Discipline Engineering
EISSN 1424-8220
GeographicLocations Taiwan
GrantInformation National Science and Technology Council, Taiwan (grants NSC 111-2222-E-167-005 and NSTC 113-2221-E-006-158)
Keywords autoencoder; convolutional neural networks; adaptive fusion; residual vision transformer; monocular depth estimation
ORCID 0000-0003-3024-5634 (Yang, Jar-Ferr)
PMID 39796871
PublicationDate 2025-01-01
PublicationPlace Basel, Switzerland
PublicationTitle Sensors (Basel, Switzerland)
PublicationTitleAlternate Sensors (Basel)
PublicationYear 2025
Publisher MDPI AG
SubjectTerms adaptive fusion; autoencoder; Computer vision; convolutional neural networks; Deep learning; Dimensions; Estimation theory; Image processing; Machine vision; Methods; monocular depth estimation; Neural networks; residual vision transformer; Semantics
URI https://www.ncbi.nlm.nih.gov/pubmed/39796871
https://www.proquest.com/docview/3153691549
https://www.proquest.com/docview/3154405855
https://pubmed.ncbi.nlm.nih.gov/PMC11722566
https://doaj.org/article/382c44f323ff489e97d02386b19fc1a4