Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation
| Published in: | Sensors (Basel, Switzerland), Vol. 25, No. 1, p. 80 |
|---|---|
| Main Authors: | Wei-Jong Yang, Chih-Chen Wu, Jar-Ferr Yang |
| Format: | Journal Article |
| Language: | English |
| Published: | Switzerland: MDPI AG, 1 January 2025 |
| ISSN: | 1424-8220 |
| Online Access: | https://doaj.org/article/382c44f323ff489e97d02386b19fc1a4 |
| Abstract | Precision depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving, and human–computer interaction. With recent advances in deep learning, monocular depth estimation has, for all its simplicity, surpassed traditional stereo camera systems and opened new possibilities in 3D sensing. In this paper, we propose an end-to-end supervised monocular depth estimation autoencoder that works from a single camera: its encoder mixes a convolutional neural network with vision transformers, and its adaptive fusion decoder merges features effectively to obtain high-precision depth maps. In the encoder, we construct a multi-scale feature extractor by mixing residual configurations of vision transformers to enhance both local and global information. In the adaptive fusion decoder, we introduce adaptive fusion modules that effectively merge encoder and decoder features. Finally, the model is trained with a loss function aligned with human perception, so that it focuses on the depth values of foreground objects. Experimental results demonstrate that the proposed autoencoder predicts depth maps effectively from a single-view color image, increasing the first-threshold (δ1) accuracy by about 28% and reducing the root mean square error by about 27% relative to an existing method on the NYU dataset. |
|---|---|
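The abstract names two architectural ideas: residual mixing of convolutional and vision-transformer features in the encoder, and adaptive fusion of encoder and decoder features in the decoder. The PyTorch sketch below illustrates how such components could look; the class names (`ResidualViTBlock`, `AdaptiveFusionModule`), channel sizes, and sigmoid gating design are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal, self-contained sketch of the two ideas named in the abstract:
# (1) a residual block mixing convolution with transformer self-attention,
# (2) an adaptive fusion module merging encoder and decoder features.
# All names and design choices here are assumptions for illustration.
import torch
import torch.nn as nn


class ResidualViTBlock(nn.Module):
    """Mixes local (conv) and global (self-attention) features with a residual skip."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv(x)                   # local detail from the CNN path
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) tokens for attention
        tokens = self.norm(tokens)
        global_, _ = self.attn(tokens, tokens, tokens)
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        return x + local + global_             # residual mixing of both paths


class AdaptiveFusionModule(nn.Module):
    """Merges an encoder skip feature with a decoder feature via a learned gate."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([enc, dec], dim=1))  # per-pixel weights in (0, 1)
        return self.proj(g * enc + (1.0 - g) * dec)  # convex blend, then refine


if __name__ == "__main__":
    feat = torch.randn(1, 64, 30, 40)                # e.g. a 1/8-scale NYU feature map
    mixed = ResidualViTBlock(64)(feat)
    fused = AdaptiveFusionModule(64)(mixed, torch.randn(1, 64, 30, 40))
    print(fused.shape)                               # torch.Size([1, 64, 30, 40])
```

The gate produces per-pixel fusion weights, so the decoder can lean on encoder detail near depth discontinuities and on decoder context elsewhere, which is one plausible reading of "adaptive fusion"; the paper's modules may differ.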
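The abstract's results are quoted in the two standard monocular-depth metrics. Under the conventional definitions (assumed here; the paper may state them differently), the "first accuracy rate" is the threshold accuracy δ1 and the error is the root mean square error, computed over ground-truth depths d_i and predictions d̂_i:

```latex
% Conventional monocular-depth metrics, assumed to match those in the abstract.
\delta_1 = \frac{1}{N}\sum_{i=1}^{N}
  \mathbf{1}\!\left[\,\max\!\left(\tfrac{\hat{d}_i}{d_i},\,\tfrac{d_i}{\hat{d}_i}\right) < 1.25\,\right],
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{d}_i - d_i\bigr)^{2}}
```

Higher δ1 (bounded by 1) and lower RMSE are better, so the reported ~28% δ1 gain and ~27% RMSE reduction both point in the same direction.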
| Audience | Academic |
| AuthorAffiliation | 1. Department of Artificial Intelligence and Computer Engineering, National Chin-Yi University of Technology, Taichung 411, Taiwan (wjyang@ncut.edu.tw); 2. Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan (jesse90302@gmail.com) |
| Copyright | © 2025 MDPI AG. © 2024 by the authors. Licensee MDPI, Basel, Switzerland. Open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
| DOI | 10.3390/s25010080 |
| Discipline | Engineering |
| EISSN | 1424-8220 |
| PMCID | PMC11722566 |
| GeographicLocations | Taiwan |
| GrantInformation | National Science and Technology Council, Taiwan (grants NSC 111-2222-E-167-005 and NSTC 113-2221-E-006-158) |
| Cited by (Web of Science) | 3 |
| Keywords | autoencoder; convolutional neural networks; adaptive fusion; residual vision transformer; monocular depth estimation |
| License | Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
| ORCID | 0000-0003-3024-5634 (Jar-Ferr Yang) |
| PMID | 39796871 |
| SubjectTerms | adaptive fusion autoencoder; Computer vision; convolutional neural networks; Deep learning; Dimensions; Estimation theory; Image processing; Machine vision; Methods; monocular depth estimation; Neural networks; residual vision transformer; Semantics |
| URI | https://www.ncbi.nlm.nih.gov/pubmed/39796871; https://www.proquest.com/docview/3153691549; https://www.proquest.com/docview/3154405855; https://pubmed.ncbi.nlm.nih.gov/PMC11722566; https://doaj.org/article/382c44f323ff489e97d02386b19fc1a4 |