EA-EDNet: encapsulated attention encoder-decoder network for 3D reconstruction in low-light-level environment
| Published in: | Multimedia Systems, Vol. 29, Issue 4, pp. 2263-2279 |
|---|---|
| Main authors: | Deng, Yulin; Yin, Liju; Gao, Xiaoning; Zhou, Hui; Wang, Zhenzhou; Zou, Guofeng (Shandong University of Technology, School of Electrical and Electronic Engineering) |
| Format: | Journal Article |
| Language: | English |
| Published: | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.08.2023 (Springer Nature B.V.) |
| Subjects: | Computer stereo vision; Low-light-level environment imaging; 3D reconstruction |
| ISSN: | 0942-4962 (print), 1432-1882 (online) |
| Online access: | Full text: https://link.springer.com/article/10.1007/s00530-023-01100-2 |
| Abstract | 3D reconstruction via neural networks has attracted considerable attention in recent years. However, existing works perform reconstruction in information-rich environments and have not yet addressed the Low-Light-Level (LLL) environment, where information is extremely scarce. Implementing 3D reconstruction in this environment is an urgent requirement for the military, aerospace and other fields. Therefore, we introduce an Encapsulated Attention Encoder-Decoder Network (EA-EDNet) in this paper. It incorporates multiple levels of semantics to adequately extract the limited information from images taken in the LLL environment, reasons out defective morphological data, and intensifies attention to the focused parts. The EA-EDNet adopts a two-stage, coarse-to-fine training scheme. We additionally create 3LNet-12, a dataset captured in a realistic LLL environment, and propose an accompanying analysis method for filtering it. In experiments, the proposed method not only achieves results superior to the state-of-the-art methods but also yields more refined reconstruction models. |
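The abstract describes an encoder-decoder network whose attention mechanism re-weights features so the network focuses on the informative parts of low-light images. As a rough illustration only, the sketch below shows a generic attention-gated encoder-decoder in PyTorch; the module names (`AttentionGate`, `TinyEncoderDecoder`), layer widths, and single-channel 2D input are assumptions for demonstration and do not reproduce the authors' EA-EDNet or its two-stage coarse-to-fine training.

```python
# Illustrative sketch only: a minimal encoder-decoder with an attention-gated
# skip connection, in the spirit of the architecture described in the abstract.
# All names and layer sizes here are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Re-weights an encoder skip feature using the decoder's gating signal."""

    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Bring the coarse gating signal up to the skip feature's resolution.
        gate = F.interpolate(gate, size=skip.shape[2:], mode="bilinear",
                             align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # attended skip feature


class TinyEncoderDecoder(nn.Module):
    """Two-level encoder-decoder; the decoder attends to encoder features."""

    def __init__(self, in_ch: int = 1, out_ch: int = 1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.gate = AttentionGate(skip_ch=16, gate_ch=32, inter_ch=16)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = nn.Conv2d(32, out_ch, 3, padding=1)  # 32 = 16 (upsampled) + 16 (skip)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.enc1(x)         # high-resolution, low-level features
        f2 = self.enc2(f1)        # low-resolution, high-level features
        skip = self.gate(f1, f2)  # attention focuses the skip connection
        up = self.up(f2)
        return self.dec(torch.cat([up, skip], dim=1))


if __name__ == "__main__":
    net = TinyEncoderDecoder()
    out = net(torch.randn(1, 1, 64, 64))  # e.g. a low-light intensity image
    print(out.shape)                      # torch.Size([1, 1, 64, 64])
```

A full system in this vein would pair such a backbone with a reconstruction head and train it coarse-to-fine, but those details belong to the paper itself and are not reproduced here.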
| Copyright | The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
| DOI | 10.1007/s00530-023-01100-2 |
| Grant Information | Natural Science Foundation of Shandong Province (ZR2020MF127); National Natural Science Foundation of China (62101310) |
| Subject Terms | Computer Communication Networks; Computer Graphics; Computer Science; Cryptology; Data Storage Representation; Datasets; Deep learning; Encapsulation; Encoders-Decoders; Image reconstruction; Morphology; Multimedia Information Systems; Neural networks; Operating Systems; Regular Paper; Training |