Inference-Reconstruction Variational Autoencoder for Light Field Image Reconstruction

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 31, pp. 5629–5644
Main Authors: Han, Kang; Xiang, Wei
Format: Journal Article
Language:English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2022
Subjects:
ISSN: 1057-7149, 1941-0042
Abstract Light field cameras can capture the radiance and direction of light rays by a single exposure, providing a new perspective to photography and 3D geometry perception. However, existing sub-aperture based light field cameras are limited by their sensor resolution to obtain high spatial and angular resolution images simultaneously. In this paper, we propose an inference-reconstruction variational autoencoder (IR-VAE) to reconstruct a dense light field image out of four corner reference views in a light field image. The proposed IR-VAE is comprised of one inference network and one reconstruction network, where the inference network infers novel views from existing reference views and viewpoint conditions, and the reconstruction network reconstructs novel views from a latent variable that contains the information of reference views, novel views, and viewpoints. The conditional latent variable in the inference network is regularized by the latent variable in the reconstruction network to facilitate information flow between the conditional latent variable and novel views. We also propose a statistic distance measurement dubbed the mean local maximum mean discrepancy (MLMMD) to enable the measurement of the statistic distance between two distributions with high-resolution latent variables, which can capture richer information than their low-resolution counterparts. Finally, we propose a viewpoint-dependent indirect view synthesis method to synthesize novel views more efficiently by leveraging adaptive convolution. Experimental results show that our proposed methods outperform state-of-the-art methods on different light field datasets.
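The MLMMD named in the abstract is a statistic distance for comparing distributions over high-resolution latent variables; its exact formulation is not reproduced in this record. As background only, below is a minimal sketch of the classical biased maximum mean discrepancy (MMD) estimator with a Gaussian kernel, the standard two-sample statistic (Gretton et al.) that MMD-style measures build on. All names are illustrative and this is not the authors' code; the "mean local" aggregation over spatial positions of a latent map is not shown.

```python
import torch


def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets.

    x: (n, d) samples from the first distribution
    y: (m, d) samples from the second distribution
    """
    dist2 = torch.cdist(x, y, p=2) ** 2          # pairwise squared distances
    return torch.exp(-dist2 / (2.0 * sigma ** 2))


def mmd_biased(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy MMD^2(x, y)."""
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy


if __name__ == "__main__":
    # Toy usage: compare two batches of flattened latent vectors,
    # e.g. latents from an inference branch and a reconstruction branch.
    z_a = torch.randn(64, 128)
    z_b = torch.randn(64, 128)
    print(mmd_biased(z_a, z_b).item())
```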
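The abstract also mentions a viewpoint-dependent indirect view synthesis step built on adaptive convolution. The sketch below shows the generic adaptive (kernel-prediction) convolution operation used in that line of work: a network predicts a small kernel for every pixel and applies it to the corresponding neighborhood of a reference view. The viewpoint conditioning and the network that predicts the kernels are not reproduced here; names and the kernel size are placeholders, assuming a PyTorch setting.

```python
import torch
import torch.nn.functional as F


def adaptive_convolution(image, kernels):
    """Apply a distinct k x k kernel at every pixel of `image`.

    image:   (B, C, H, W) reference view
    kernels: (B, k*k, H, W) per-pixel kernels predicted by a network,
             typically softmax-normalized over the k*k dimension
    """
    b, c, h, w = image.shape
    k2 = kernels.shape[1]
    k = int(k2 ** 0.5)
    # Gather the k x k neighborhood around every pixel: (B, C*k*k, H*W).
    patches = F.unfold(image, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k2, h, w)
    # Weight each neighborhood by its per-pixel kernel and sum over it.
    return (patches * kernels.unsqueeze(1)).sum(dim=2)  # (B, C, H, W)


if __name__ == "__main__":
    # Toy usage with random inputs and hypothetical 5x5 per-pixel kernels.
    ref_view = torch.randn(1, 3, 64, 64)
    per_pixel_kernels = torch.softmax(torch.randn(1, 25, 64, 64), dim=1)
    out = adaptive_convolution(ref_view, per_pixel_kernels)
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```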
Author Xiang, Wei
Han, Kang
Author_xml – sequence: 1
  givenname: Kang
  orcidid: 0000-0003-3626-7818
  surname: Han
  fullname: Han, Kang
  email: kang.han@my.jcu.edu.au
  organization: College of Science and Engineering, James Cook University, Cairns, QLD, Australia
– sequence: 2
  givenname: Wei
  orcidid: 0000-0002-0608-065X
  surname: Xiang
  fullname: Xiang, Wei
  email: w.xiang@latrobe.edu.au
  organization: School of Computing, Engineering and Mathematical Sciences, La Trobe University, Melbourne, VIC, Australia
CODEN IIPRE4
CitedBy_id crossref_primary_10_1016_j_nucengdes_2023_112712
crossref_primary_10_1109_TIP_2023_3329663
crossref_primary_10_1109_TMM_2023_3328176
crossref_primary_10_1016_j_image_2023_117031
crossref_primary_10_1016_j_nucengdes_2025_114433
crossref_primary_10_1109_TCI_2024_3507634
crossref_primary_10_1371_journal_pone_0316642
crossref_primary_10_1109_JSEN_2023_3324220
crossref_primary_10_1109_TIP_2025_3592546
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DBID 97E
RIA
RIE
AAYXX
CITATION
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
7X8
DOI 10.1109/TIP.2022.3197976
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitleList Technology Research Database
MEDLINE - Academic

Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 2
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Applied Sciences
Engineering
EISSN 1941-0042
EndPage 5644
ExternalDocumentID 10_1109_TIP_2022_3197976
9864283
Genre orig-research
GrantInformation_xml – fundername: Australian Government
  funderid: 10.13039/100015539
– fundername: Australian Research Council’s Discovery Projects Funding Scheme
  grantid: DP220101634
  funderid: 10.13039/501100000923
IEDL.DBID RIE
ISICitedReferencesCount 13
ISSN 1057-7149
1941-0042
IngestDate Sun Sep 28 02:02:13 EDT 2025
Mon Jun 30 10:14:01 EDT 2025
Sat Nov 29 03:21:16 EST 2025
Tue Nov 18 22:35:39 EST 2025
Wed Aug 27 02:29:22 EDT 2025
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ORCID 0000-0002-0608-065X
0000-0003-3626-7818
PMID 35994531
PQID 2708643534
PQPubID 85429
PageCount 16
ParticipantIDs proquest_journals_2708643534
crossref_citationtrail_10_1109_TIP_2022_3197976
ieee_primary_9864283
crossref_primary_10_1109_TIP_2022_3197976
proquest_miscellaneous_2705749692
PublicationCentury 2000
PublicationDate 20220000
2022-00-00
20220101
PublicationDateYYYYMMDD 2022-01-01
PublicationDate_xml – year: 2022
  text: 20220000
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SSID ssj0014516
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 5629
SubjectTerms adaptive convolution
Angular resolution
Cameras
Convolution
Distance measurement
Feature extraction
Field cameras
Image reconstruction
indirect view synthesis
Inference
Information flow
Light
Light field image reconstruction
Spatial resolution
statistic distance measurement
Superresolution
Training
variational autoencoder
Title Inference-Reconstruction Variational Autoencoder for Light Field Image Reconstruction
URI https://ieeexplore.ieee.org/document/9864283
https://www.proquest.com/docview/2708643534
https://www.proquest.com/docview/2705749692
Volume 31
WOSCitedRecordID wos000848262600005
hasFullText 1
inHoldings 1
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 1941-0042
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014516
  issn: 1057-7149
  databaseCode: RIE
  dateStart: 19920101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE