EA-EDNet: encapsulated attention encoder-decoder network for 3D reconstruction in low-light-level environment


Bibliographic Details
Published in: Multimedia Systems, Vol. 29, No. 4, pp. 2263-2279
Main Authors: Deng, Yulin, Yin, Liju, Gao, Xiaoning, Zhou, Hui, Wang, Zhenzhou, Zou, Guofeng
Format: Journal Article
Language:English
Published: Berlin/Heidelberg Springer Berlin Heidelberg 01.08.2023
Springer Nature B.V
Subjects:
ISSN: 0942-4962 (print); 1432-1882 (electronic)
Abstract 3D reconstruction via neural networks has attracted considerable attention in recent years. However, existing works perform reconstruction in information-rich environments and have not addressed the Low-Light-Level (LLL) environment, where information is extremely scarce. Implementing 3D reconstruction in this environment is an urgent requirement for the military, aerospace, and other fields. Therefore, we introduce an Encapsulated Attention Encoder-Decoder Network (EA-EDNet) in this paper. It incorporates multiple levels of semantics to adequately extract the limited information in images taken in the LLL environment, reasons out defective morphological data, and intensifies attention to the parts of interest. The EA-EDNet adopts a two-stage, coarse-to-fine training scheme. We additionally create 3LNet-12, a dataset captured in a realistic LLL environment, and propose an accompanying analysis method for filtering it. In experiments, the proposed method not only achieves results superior to state-of-the-art methods but also produces more delicate reconstruction models.
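To make the architecture named in the abstract more concrete, the sketch below is a minimal, hypothetical PyTorch illustration of an attention-gated encoder-decoder trained with a two-stage coarse-to-fine schedule. It is not the authors' EA-EDNet; the paper's layer sizes, attention formulation, losses, and dataset handling are not reproduced here, and every module name, tensor shape, and the staging below are illustrative assumptions.

```python
# Hypothetical sketch only -- not the published EA-EDNet. It illustrates the two ideas
# named in the abstract: (1) an encoder-decoder whose features are re-weighted by an
# attention gate, and (2) a two-stage coarse-to-fine training pass.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Re-weights encoder features with a sigmoid mask derived from decoder context."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, enc_feat, dec_feat):
        gate = self.mask(torch.cat([enc_feat, dec_feat], dim=1))
        return enc_feat * gate  # attenuate uninformative (dark) regions


class TinyEncoderDecoder(nn.Module):
    """Toy stand-in for an attention encoder-decoder; channel sizes are arbitrary."""

    def __init__(self, in_ch: int = 1, ch: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.att = AttentionGate(ch)
        self.head = nn.Conv2d(ch, 1, kernel_size=1)  # e.g. a per-pixel geometry map

    def forward(self, x):
        e = self.enc(x)
        d = self.dec(e)
        return self.head(self.att(e, d))


model = TinyEncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
images = torch.rand(4, 1, 64, 64)   # stand-in for low-light-level captures
targets = torch.rand(4, 1, 64, 64)  # stand-in for ground-truth geometry maps

# Assumed two-stage schedule: a coarse pass on downsampled inputs, then a fine pass
# at full resolution; the paper's actual staging may differ.
for stage, scale in [("coarse", 0.5), ("fine", 1.0)]:
    x = F.interpolate(images, scale_factor=scale)
    y = F.interpolate(targets, scale_factor=scale)
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```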
Author Zou, Guofeng
Deng, Yulin
Zhou, Hui
Gao, Xiaoning
Yin, Liju
Wang, Zhenzhou
Author_xml – sequence: 1
  givenname: Yulin
  surname: Deng
  fullname: Deng, Yulin
  organization: Shandong University of Technology, School of Electrical and Electronic Engineering
– sequence: 2
  givenname: Liju
  surname: Yin
  fullname: Yin, Liju
  email: ljyin72@163.com
  organization: Shandong University of Technology, School of Electrical and Electronic Engineering
– sequence: 3
  givenname: Xiaoning
  surname: Gao
  fullname: Gao, Xiaoning
  organization: Shandong University of Technology, School of Electrical and Electronic Engineering
– sequence: 4
  givenname: Hui
  surname: Zhou
  fullname: Zhou, Hui
  organization: Shandong University of Technology, School of Electrical and Electronic Engineering
– sequence: 5
  givenname: Zhenzhou
  surname: Wang
  fullname: Wang, Zhenzhou
  organization: Shandong University of Technology, School of Electrical and Electronic Engineering
– sequence: 6
  givenname: Guofeng
  surname: Zou
  fullname: Zou, Guofeng
  organization: Shandong University of Technology, School of Electrical and Electronic Engineering
CitedBy_id crossref_primary_10_1007_s00530_025_01859_6
crossref_primary_10_1016_j_dsp_2025_105176
crossref_primary_10_1177_30504554241297613
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023.
DOI 10.1007/s00530-023-01100-2
Discipline Computer Science
EISSN 1432-1882
EndPage 2279
ExternalDocumentID 10_1007_s00530_023_01100_2
GrantInformation_xml – fundername: Natural Science Foundation of Shandong Province
  grantid: ZR2020MF127
  funderid: http://dx.doi.org/10.13039/501100007129
– fundername: National Natural Science Foundation of China
  grantid: 62101310
  funderid: http://dx.doi.org/10.13039/501100001809
ISICitedReferencesCount 3
ISSN 0942-4962
IsPeerReviewed true
IsScholarly true
Issue 4
Keywords Computer stereo vision
Low-light-level environment imaging
3D reconstruction
Language English
PQID 3256491792
PQPubID 2043725
PageCount 17
PublicationCentury 2000
PublicationDate 2023-08-01
PublicationDecade 2020
PublicationPlace Berlin/Heidelberg
PublicationTitle Multimedia systems
PublicationTitleAbbrev Multimedia Systems
PublicationYear 2023
Publisher Springer Berlin Heidelberg
Springer Nature B.V
StartPage 2263
SubjectTerms Computer Communication Networks
Computer Graphics
Computer Science
Cryptology
Data Storage Representation
Datasets
Deep learning
Encapsulation
Encoders-Decoders
Image reconstruction
Morphology
Multimedia Information Systems
Neural networks
Operating Systems
Regular Paper
Training
Title EA-EDNet: encapsulated attention encoder-decoder network for 3D reconstruction in low-light-level environment
URI https://link.springer.com/article/10.1007/s00530-023-01100-2
https://www.proquest.com/docview/3256491792
Volume 29