EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time

Published in: International Journal of Computer Vision, Volume 126, Issue 12, pp. 1394–1414
Main authors: Rebecq, Henri; Gallego, Guillermo; Mueggler, Elias; Scaramuzza, Davide
Medium: Journal Article
Language: English
Published: New York: Springer US, 01.12.2018
Springer Nature B.V.
ISSN: 0920-5691, 1573-1405
Abstract Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU.
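As a rough illustration of the approach the abstract describes (back-projecting the viewing rays of edge-triggered events from known camera poses, so that rays from many viewpoints accumulate around 3D scene edges), the sketch below implements a simple space-sweep style voting scheme. It is only a minimal sketch under that assumption: the function name, grid layout, vote threshold, and all parameters are hypothetical and are not taken from the paper's implementation.

```python
import numpy as np

def emvs_sketch(events, poses, K, ref_T_w, depths, grid_hw, min_votes=5):
    """Minimal space-sweep style sketch (hypothetical names and parameters).

    events:  (N, 3) array of (x, y, t) events from the moving camera.
    poses:   length-N sequence of 4x4 world-from-camera poses, one per event
             (in practice interpolated from the known trajectory).
    K:       3x3 camera intrinsics.
    ref_T_w: 4x4 reference-from-world transform; the vote grid lives in this view.
    depths:  1D numpy array of candidate depths swept in the reference view.
    grid_hw: (H, W) resolution of each depth slice.
    Returns an (H, W) semi-dense depth map (NaN where too few rays agree).
    """
    H, W = grid_hw
    votes = np.zeros((len(depths), H, W), dtype=np.float32)
    K_inv = np.linalg.inv(K)

    for i, (x, y, _) in enumerate(events):
        # Viewing ray of the event, expressed in the reference frame.
        ref_T_c = ref_T_w @ poses[i]
        origin = ref_T_c[:3, 3]                        # event-camera centre in ref frame
        direction = ref_T_c[:3, :3] @ (K_inv @ np.array([x, y, 1.0]))

        # Intersect the ray with every candidate depth plane and vote there.
        # Because events mark brightness changes (scene edges), rays cast from
        # many viewpoints pile up around true 3D edge points.
        for d_idx, z in enumerate(depths):
            if abs(direction[2]) < 1e-9:
                continue
            lam = (z - origin[2]) / direction[2]
            if lam <= 0:
                continue
            p = origin + lam * direction               # point on the depth-z plane
            u, v, _ = K @ (p / z)                      # project into the reference view
            ui, vi = int(round(u)), int(round(v))
            if 0 <= ui < W and 0 <= vi < H:
                votes[d_idx, vi, ui] += 1.0

    # Simplification: take the per-pixel depth with the most votes and keep it
    # only where enough rays agree; a real system would detect and filter local
    # maxima of the vote volume more carefully.
    best = votes.argmax(axis=0)
    peak = votes.max(axis=0)
    depth_map = depths[best].astype(np.float32)
    depth_map[peak < min_votes] = np.nan
    return depth_map
```

In this sketch each event only increments counters along its own viewing ray, which is in line with the abstract's claims of simplicity and real-time CPU operation; the paper's actual data structures and maxima detection may differ.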
Author Rebecq, Henri
Gallego, Guillermo
Mueggler, Elias
Scaramuzza, Davide
Author_xml – sequence: 1
  givenname: Henri
  orcidid: 0000-0002-6577-9735
  surname: Rebecq
  fullname: Rebecq, Henri
  email: rebecq@ifi.uzh.ch
  organization: Robotics and Perception Group, Department of Informatics, University of Zurich, Robotics and Perception Group, Department of Neuroinformatics, University of Zurich and ETH Zurich
– sequence: 2
  givenname: Guillermo
  orcidid: 0000-0002-2672-9241
  surname: Gallego
  fullname: Gallego, Guillermo
  organization: Robotics and Perception Group, Department of Informatics, University of Zurich, Robotics and Perception Group, Department of Neuroinformatics, University of Zurich and ETH Zurich
– sequence: 3
  givenname: Elias
  orcidid: 0000-0002-8008-443X
  surname: Mueggler
  fullname: Mueggler, Elias
  organization: Robotics and Perception Group, Department of Informatics, University of Zurich, Robotics and Perception Group, Department of Neuroinformatics, University of Zurich and ETH Zurich
– sequence: 4
  givenname: Davide
  orcidid: 0000-0002-3831-6778
  surname: Scaramuzza
  fullname: Scaramuzza, Davide
  organization: Robotics and Perception Group, Department of Informatics, University of Zurich, Robotics and Perception Group, Department of Neuroinformatics, University of Zurich and ETH Zurich
ContentType Journal Article
Copyright Springer Science+Business Media, LLC 2017
International Journal of Computer Vision is a copyright of Springer, (2017). All Rights Reserved.
DOI 10.1007/s11263-017-1050-6
Discipline Applied Sciences
Computer Science
EISSN 1573-1405
EndPage 1414
ISSN 0920-5691
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 12
Keywords Multi-view stereo
Event cameras
3D reconstruction
Event-based vision
Language English
ORCID 0000-0002-2672-9241
0000-0002-6577-9735
0000-0002-8008-443X
0000-0002-3831-6778
PageCount 21
PublicationDate 2018-12-01
PublicationPlace New York
PublicationTitle International journal of computer vision
PublicationTitleAbbrev Int J Comput Vis
PublicationYear 2018
Publisher Springer US
Springer Nature B.V
StartPage 1394
SubjectTerms Algorithms
Artificial Intelligence
Cameras
Computer Imaging
Computer Science
Computer vision
Image Processing and Computer Vision
Image reconstruction
Pattern Recognition
Pattern Recognition and Graphics
Real time
Vision
Title EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time
URI https://link.springer.com/article/10.1007/s11263-017-1050-6
https://www.proquest.com/docview/2128987888
Volume 126