Deep Bingham Networks: Dealing with Uncertainty and Ambiguity in Pose Estimation
| Published in: | International Journal of Computer Vision, Vol. 130, No. 7, pp. 1627–1654 |
|---|---|
| Main Authors: | Deng, Haowen; Bui, Mai; Navab, Nassir; Guibas, Leonidas; Ilic, Slobodan; Birdal, Tolga |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.07.2022 |
| ISSN: | 0920-5691 (print); 1573-1405 (online) |
| Online Access: | https://doi.org/10.1007/s11263-022-01612-w |
| Abstract | In this work, we introduce Deep Bingham Networks (DBN), a generic framework that can naturally handle pose-related uncertainties and ambiguities arising in almost all real-life applications concerning 3D data. While existing works strive to find a single solution to the pose estimation problem, we make peace with the ambiguities causing high uncertainty around which solutions to identify as the best. Instead, we report a family of poses which capture the nature of the solution space. DBN extends state-of-the-art direct pose regression networks by (i) a multi-hypotheses prediction head which can yield different distribution modes; and (ii) novel loss functions that benefit from Bingham distributions on rotations. This way, DBN can work both in unambiguous cases, providing uncertainty information, and in ambiguous scenes, where an uncertainty per mode is desired. On a technical front, our network regresses continuous Bingham mixture models and is applicable both to 2D data such as images and to 3D data such as point clouds. We propose new training strategies to avoid mode or posterior collapse during training and to improve numerical stability. Our methods are thoroughly tested on two different applications exploiting two different modalities: (i) 6D camera relocalization from images; and (ii) object pose estimation from 3D point clouds, demonstrating decent advantages over the state of the art. For the former, we contributed our own dataset composed of five indoor scenes where it is unavoidable to capture images corresponding to views that are hard to uniquely identify. For the latter, we achieve the top results especially for symmetric objects of the ModelNet dataset (Wu et al., in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1912–1920, 2015). The code and dataset accompanying this paper are provided at https://multimodal3dvision.github.io. |
|---|---|
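
The Bingham density the abstract refers to is a standard construct from directional statistics, so a small sketch may help make the terms "Bingham distribution on rotations" and "Bingham mixture model" concrete. The NumPy snippet below is not the authors' implementation (their code is released at https://multimodal3dvision.github.io); the function names, the Monte-Carlo normalising constant, and the toy parameters are illustrative assumptions only.

```python
# Minimal sketch: Bingham log-density on unit quaternions and the
# log-likelihood of a Bingham mixture, i.e. a multimodal rotation posterior
# of the kind a multi-hypothesis head could parameterise.
import numpy as np

S3_AREA = 2.0 * np.pi ** 2  # surface area of the unit 3-sphere in R^4


def bingham_log_normalizer(M, Z, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of the log normalising constant, i.e. the log of
    the integral of exp(q^T M diag(Z) M^T q) over unit quaternions q.
    (The exact constant is a hypergeometric function of matrix argument;
    practical implementations use saddle-point approximations or lookup
    tables; Monte Carlo just keeps this sketch short.)"""
    rng = np.random.default_rng(seed)
    q = rng.normal(size=(n_samples, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)   # uniform samples on S^3
    A = M @ np.diag(Z) @ M.T                         # 4x4 symmetric parameter
    vals = np.einsum("ni,ij,nj->n", q, A, q)
    return np.log(S3_AREA) + np.log(np.exp(vals).mean())


def bingham_log_pdf(q, M, Z):
    """Log Bingham density at unit quaternion(s) q of shape (..., 4).
    Antipodally symmetric by construction: p(q) == p(-q)."""
    A = M @ np.diag(Z) @ M.T
    return np.einsum("...i,ij,...j->...", q, A, q) - bingham_log_normalizer(M, Z)


def bingham_mixture_log_likelihood(q, weights, Ms, Zs):
    """log sum_k w_k * Bingham(q; M_k, Z_k)."""
    comps = [np.log(w) + bingham_log_pdf(q, M, Z)
             for w, M, Z in zip(weights, Ms, Zs)]
    return np.logaddexp.reduce(np.stack(comps, axis=0), axis=0)


if __name__ == "__main__":
    # Convention: Z = diag(0, z2, z3, z4) with z_i <= 0; the mode is the first
    # column of M, and more negative z_i mean a sharper (less uncertain) mode.
    Z_sharp = np.array([0.0, -50.0, -50.0, -50.0])
    M_id = np.eye(4)                        # mode at q = (1, 0, 0, 0), identity
    M_flip = np.roll(np.eye(4), 1, axis=1)  # mode at q = (0, 0, 0, 1), 180° flip
    q_identity = np.array([1.0, 0.0, 0.0, 0.0])
    ll = bingham_mixture_log_likelihood(
        q_identity, weights=[0.5, 0.5], Ms=[M_id, M_flip], Zs=[Z_sharp, Z_sharp])
    print(f"mixture log-likelihood at the identity rotation: {float(ll):.3f}")
```

A two-component mixture like this is the simplest example of the "family of poses" idea: for a symmetric object, both modes are equally plausible, and the per-mode concentration plays the role of the per-mode uncertainty.
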
| Audience | Academic |
|---|---|
| Author | Deng, Haowen (Informatics at Technische Universität München; Corporate Technology Siemens AG); Bui, Mai (Informatics at Technische Universität München); Navab, Nassir (Informatics at Technische Universität München); Guibas, Leonidas (Computer Science Department, Stanford University); Ilic, Slobodan (Corporate Technology Siemens AG); Birdal, Tolga (Computer Science Department, Stanford University; t.birdal@stanford.edu) |
| Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022 |
| DOI | 10.1007/s11263-022-01612-w |
| Discipline | Applied Sciences; Computer Science |
| EISSN | 1573-1405 |
| EndPage | 1654 |
| GrantInformation | Samsung GRO; Stanford SAIL Toyota Research; Directorate for Computer and Information Science and Engineering (grant 1763268); Vannevar Bush Faculty Fellowship; Bacatec; Stanford-Ford Alliance |
| ISSN | 0920-5691 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 7 |
| Keywords | Camera pose; Bingham distribution; Uncertainty; Point clouds; 3D computer vision; Uncertainty estimation; Camera relocalization; Object pose; Rotation; 6D; Posterior distribution; Ambiguity |
| Language | English |
| ORCID | Birdal, Tolga: 0000-0001-7915-7964 |
| PageCount | 28 |
| PublicationDate | 2022-07-01 |
| PublicationDecade | 2020 |
| PublicationPlace | New York |
| PublicationTitle | International journal of computer vision |
| PublicationTitleAbbrev | Int J Comput Vis |
| PublicationYear | 2022 |
| Publisher | Springer US; Springer; Springer Nature B.V |
| References_xml | – reference: Falorsi, L., de Haan, P., Davidson, T. R., Forré, P.: Reparameterizing distributions on lie groups. arXiv preprint arXiv:1903.02958 (2019) – reference: Salas-Moreno, R. F., Newcombe, R. A., Strasdat, H., Kelly, P. H., & Davison, A. J. (2013). Slam++: Simultaneous localisation and mapping at the level of objects. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1352–1359). – reference: Guzman-Rivera, A., Batra, D., & Kohli, P. (2012). Multiple choice learning: Learning to produce multiple structured outputs. In Advances in neural information processing systems (pp. 1799–1807). – reference: Clark, R., Wang, S., Markham, A., Trigoni, N., & Wen, H. (2017). Vidloc: A deep spatio-temporal model for 6-dof video-clip relocalization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6856–6864). – reference: Mahendran, S., Ali, H., & Vidal, R. (2017). 3d pose regression using convolutional neural networks. In Proceedings of the IEEE international conference on computer vision (pp. 2174–2182). – reference: Birdal, T., & Ilic, S. (2015). Point pair features based object detection and pose estimation revisited. In 2015 International conference on 3D vision (pp. 527–535). IEEE. – reference: Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. – reference: Sundermeyer, M., Durner, M., Puang, E. Y., Marton, Z. C., Vaskevicius, N., Arras, K. O., & Triebel, R. (2020). Multi-path learning for object pose estimation across domains. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 13916–13925). – reference: Deng, H., Birdal, T., & Ilic, S. (2018). Ppfnet: Global context aware local features for robust 3d point matching. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 195–205). – reference: HerzCSBessel functions of matrix argumentAnnals of Mathematics19556134745236996010.2307/1969810 – reference: Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., & Xiao, J. (2015). 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1912–1920). – reference: Corona, E., Kundu, K., & Fidler, S. (2018). Pose estimation for objects with rotational symmetry. In 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 7215–7222). IEEE. – reference: Qi, C. R., Litany, O., He, K., & Guibas, L. J. (2019). Deep hough voting for 3d object detection in point clouds. In Proceedings of the IEEE international conference on computer vision (pp. 9277–9286). – reference: FischlerMABollesRCRandom sample consensus: A paradigm for model fitting with applications to image analysis and automated cartographyCommunications of the ACM198124638139561815810.1145/358669.358692 – reference: Xiang, Y., Schmidt, T., Narayanan, V., & Fox, D. (2018). Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. In Robotics: Science and systems (RSS). – reference: Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 – reference: Horaud, R., Conio, B., Leboulleux, O., & Lacolle, B. (1989). An analytic solution for the perspective 4-point problem. In Proceedings CVPR’89: IEEE computer society conference on computer vision and pattern recognition (pp. 500–507). IEEE. 
– reference: Kendall, A., Grimes, M., & Cipolla, R. (2015). Posenet: A convolutional network for real-time 6-dof camera relocalization. In Proceedings of the international conference on computer vision (ICCV). – reference: Peretroukhin, V., Wagstaff, B., Giamou, M., & Kelly, J. (2019). Probabilistic regression of rotations using quaternion averaging and a deep multi-headed network. arXiv preprint arXiv:1904.03182 – reference: Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017a) Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 652–660). – reference: Birdal, T., Arbel, M., Şimşekli, U., & Guibas, L. (2020). Synchronizing probability measures on rotations via optimal transport. In Proceedings of the IEEE conference on computer vision and pattern recognition. – reference: Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 – reference: Bui, M., Albarqouni, S., Ilic, S., & Navab, N. (2018). Scene coordinate and correspondence learning for image-based localization. In British machine vision conference (BMVC). – reference: Bui, M., Baur, C., Navab, N., Ilic, S., & Albarqouni, S. (2019). Adversarial networks for camera pose regression and refinement. In International conference on computer vision workshops (ICCVW). – reference: Rupprecht, C., Laina, I., DiPietro, R., Baust, M., Tombari, F., Navab, N., & Hager, G. D. (2017). Learning in an uncertain world: Representing ambiguity through multiple hypotheses. In Proceedings of the IEEE international conference on computer vision (pp. 3591–3600). – reference: Manhardt, F., Arroyo, D. M., Rupprecht, C., Busam, B., Birdal, T., Navab, N., & Tombari, F. (2019). Explaining the ambiguity of object detection and 6d pose from visual data. In International conference of computer vision (ICCV). IEEE/CVF. – reference: Sundermeyer, M., Marton, Z. C., Durner, M., Brucker, M., & Triebel, R. (2018) Implicit 3d orientation learning for 6d object detection from RGB images. In Proceedings of the European conference on computer vision (ECCV) (pp. 699–715). – reference: Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248–255). IEEE. – reference: DengXMousavianAXiangYXiaFBretlTFoxDPoserbpf: A rao-blackwellized particle filter for 6-d object pose trackingIEEE Transactions on Robotics2021371328134210.1109/TRO.2021.3056043 – reference: Zakharov, S., Shugurov, I., & Ilic, S. (2019). Dpod: Dense 6d pose object detector in RGB images. In Proceedings of the IEEE international conference on computer vision. – reference: Balntas, V., Li, S., & Prisacariu, V. (2018). Relocnet: Continuous metric learning relocalisation using neural nets. In Proceedings of the European conference on computer vision (ECCV) (pp. 751–767). – reference: Kendall, A., & Cipolla, R. (2017). Geometric loss functions for camera pose regression with deep learning. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5974–5983). – reference: Schonberger, J. L., & Frahm, J. M. (2016). Structure-from-motion revisited. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4104–4113). – reference: Massiceti, D., Krull, A., Brachmann, E., Rother, C., Torr, P. H. (2017). 
Random forests versus neural networks—What’s best for camera localization? In 2017 IEEE international conference on robotics and automation (ICRA) (pp. 5118–5125). IEEE. – reference: LabbéMMichaudFRtab-map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operationJournal of Field Robotics201936241644610.1002/rob.21831 – reference: Glover, J., & Kaelbling, L. P.: Tracking the spin on a ping pong ball with the quaternion Bingham filter. In Glover 2014 IEEE international conference on robotics and automation (ICRA) (pp. 4133–4140). – reference: Deng, H., Birdal, T., & Ilic, S. (2018). Ppf-foldnet: Unsupervised learning of rotation invariant 3d local descriptors. In Proceedings of the European conference on computer vision (ECCV) (pp. 602–618). – reference: SubbaraoRMeerPNonlinear mean shift over Riemannian manifoldsInternational Journal of Computer Vision2009841110.1007/s11263-008-0195-8 – reference: Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z. (2016). Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2818–2826). – reference: Birdal, T., & Simsekli, U. (2019). Probabilistic permutation synchronization using the riemannian structure of the birkhoff polytope. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 11105–11116). – reference: Sattler, T., Zhou, Q., Pollefeys, M., & Leal-Taixe, L. (2019). Understanding the limitations of CNN-based absolute camera pose regression. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3302–3312). – reference: WangQAProbability distribution and entropy as a measure of uncertaintyJournal of Physics A: Mathematical and Theoretical2008416065004243592510.1088/1751-8113/41/6/065004 – reference: JaynesETInformation theory and statistical mechanicsPhysical Review195710646208730510.1103/PhysRev.106.620 – reference: Zhao, Y., Birdal, T., Lenssen, J.E., Menegatti, E., Guibas, L., & Tombari, F. (2020). Quaternion equivariant capsule networks for 3d point clouds. In European conference on computer vision (pp. 1–19). Springer. – reference: Liu, W., Luo, W., Lian, D., & Gao, S. (2018). Future frame prediction for anomaly detection—A new baseline. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6536–6545). – reference: Haarbach, A., Birdal, T., & Ilic, S. (2018). Survey of higher order rigid body motion interpolation methods for keyframe animation and continuous-time trajectory estimation. In 2018 Sixth international conference on 3D vision (3DV) (pp. 381–389). IEEE. https://doi.org/10.1109/3DV.2018.00051 – reference: Glover, J., Bradski, G., & Rusu, R. B. (2012). Monte Carlo pose estimation with quaternion kernels and the Bingham distribution. In Robotics: Science and systems (vol. 7, p. 97). – reference: BinghamCAn antipodally symmetric distribution on the sphereThe Annals of Statistics197421201122539798810.1214/aos/1176342874 – reference: Gilitschenski, I., Sahoo, R., Schwarting, W., Amini, A., Karaman, S., & Rus, D. (2020). Deep orientation uncertainty learning based on a Bingham loss. In International conference on learning representations. https://openreview.net/forum?id=ryloogSKDS – reference: Zeisl, B., Sattler, T., & Pollefeys, M. (2015). Camera pose voting for large-scale image-based localization. In Proceedings of the IEEE international conference on computer vision (pp. 
2704–2712). – reference: Arun SrivatsanRXuMZevallosNChosetHProbabilistic pose estimation using a Bingham distribution-based linear filterThe International Journal of Robotics Research20183713–141610163110.1177/0278364918778353 – reference: BarfootTDFurgalePTAssociating uncertainty with three-dimensional poses for use in estimation problemsIEEE Transactions on Robotics201430367969310.1109/TRO.2014.2298059 – reference: Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. (2019). On the continuity of rotation representations in neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5745–5753). – reference: Berger, J. O. (2013). Statistical decision theory and Bayesian analysis. Springer. – reference: Besl, P. J., McKay, N. D. (1992). Method for registration of 3-d shapes. In Sensor fusion IV: Control paradigms and data structures (vol. 1611, pp. 586–606). International Society for Optics and Photonics. – reference: Brachmann, E., Krull, A., Nowozin, S., Shotton, J., Michel, F., Gumhold, S., & Rother, C. (2017). DSAC-differentiable RANSAC for camera localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6684–6692). – reference: Zeng, A., Song, S., Nießner, M., Fisher, M., Xiao, J., & Funkhouser, T. (2017). 3Dmatch: Learning local geometric descriptors from RGB-D reconstructions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1802–1811). – reference: RubnerYTomasiCGuibasLJThe earth mover’s distance as a metric for image retrievalInternational Journal of Computer Vision20004029912110.1023/A:1026543900054 – reference: Bui, M., Birdal, T., Deng, H., Albarqouni, S., Guibas, L., Ilic, S., & Navab, N. (2020). 6d camera relocalization in ambiguous scenes via continuous multimodal inference. In European conference on computer vision (ECCV). – reference: GrassiaFSPractical parameterization of rotations using the exponential mapJournal of Graphics Tools199833294810.1080/10867651.1998.10487493 – reference: Qi, C. R., Yi, L., Su, H., Guibas, L. J. (2017b) Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems (pp. 5099–5108). – reference: Wang, Y., & Solomon, J. M. (2019b). Prnet: Self-supervised learning for partial-to-partial registration. In Advances in neural information processing systems. – reference: Okorn, B., Xu, M., Hebert, M., & Held, D. (2020). Learning orientation distributions for object pose estimation. In 2020 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 10580–10587). IEEE. – reference: Brachmann, E., & Rother, C. (2018). Learning less is more-6d camera localization via 3d surface regression. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4654–4662). – reference: Mardia, K. V., & Jupp, P. E. (2009). Directional statistics (vol. 494). Wiley. – reference: Zolfaghari, M., Çiçek, Ö., Ali, S. M., Mahdisoltani, F., Zhang, C., & Brox, T. (2019). Learning representations for predicting future activities. arXiv preprint arXiv:1905.03578 – reference: KumeAWoodATSaddlepoint approximations for the Bingham and fisher-Bingham normalising constantsBiometrika2005922465476220137110.1093/biomet/92.2.465 – reference: Valentin, J., Nießner, M., Shotton, J., Fitzgibbon, A., Izadi, S., & Torr, P. H. (2015). Exploiting uncertainty in regression forests for accurate camera relocalization. 
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4400–4408). – reference: Bishop, C. M.: Mixture density networks (1994) https://research.aston.ac.uk/en/publications/mixture-density-networks – reference: Deng, H., Birdal, T., & Ilic, S. (2019). 3d local features for direct pairwise registration. In The IEEE conference on computer vision and pattern recognition (CVPR). – reference: Durrant-WhyteHBaileyTSimultaneous localization and mapping: Part IIEEE Robotics & Automation Magazine20061329911010.1109/MRA.2006.1638022 – reference: CadenaCCarloneLCarrilloHLatifYScaramuzzaDNeiraJReidILeonardJJPast, present, and future of simultaneous localization and mapping: Toward the robust-perception ageIEEE Transactions on Robotics20163261309133210.1109/TRO.2016.2624754 – reference: Sattler, T., Havlena, M., Radenovic, F., Schindler, K., & Pollefeys, M. (2015). Hyperpoints and fine vocabularies for large-scale location recognition. In Proceedings of the IEEE international conference on computer vision (pp. 2102–2110). – reference: Luc, P., Neverova, N., Couprie, C., Verbeek, J., & LeCun, Y. (2017). Predicting deeper into the future of semantic segmentation. In Proceedings of the IEEE international conference on computer vision (pp. 648–657). – reference: Brachmann, E., Michel, F., Krull, A., Ying Yang, M., Gumhold, S., & Rother, C. (2016). Uncertainty-driven 6d pose estimation of objects and scenes from a single RGB image. In Proceedings of the IEEE conference on computer vision and pattern recognition. – reference: Yuan, W., Held, D., Mertz, C., & Hebert, M. (2018). Iterative transformer network for 3d point cloud. arXiv preprint arXiv:1811.11209 – reference: Yang, L., Bai, Z., Tang, C., Li, H., Furukawa, Y., & Tan, P. (2019). Sanet: Scene agnostic network for camera localization. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 42–51). – reference: Sarlin, P. E., Unagar, A., Larsson, M., Germain, H., Toft, C., Larsson, V., Pollefeys, M., Lepetit, V., Hammarstrand, L., Kahl, F., & Sattler, T. (2021.) Back to the feature: Learning robust camera localization from pixels to pose. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3247–3257). – reference: Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International conference on machine learning (pp. 1050–1059). – reference: Kehl, W., Manhardt, F., Tombari, F., Ilic, S., & Navab, N. (2017). Ssd-6d: Making RGB-based 3d detection and 6d pose estimation great again. In Proceedings of the IEEE international conference on computer vision (pp. 1521–1529). – reference: McLachlan, G. J., & Basford, K. E. (1988). Mixture models: Inference and applications to clustering (vol. 84). M. Dekker. – reference: UllmanSThe interpretation of structure from motionProceedings of the Royal Society of London. Series B. Biological Sciences19792031153405426 – reference: Birdal, T., Simsekli, U., Eken, M. O., & Ilic, S. (2018). Bayesian pose graph optimization via Bingham distributions and tempered geodesic MCMC. In Advances in neural information processing systems (pp. 308–319). – reference: He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition pp. 770–778. – reference: Peretroukhin, V., Giamou, M., Rosen, D. M., Greene, W. N., Roy, N., & Kelly, J. (2020). 
A smooth representation of SO(3) for deep rotation learning with uncertainty. In Proceedings of robotics: Science and systems (RSS’20). – reference: Hinterstoisser, S., Lepetit, V., Ilic, S., Holzer, S., Bradski, G., Konolige, K., & Navab, N. (2012). Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In Asian conference on computer vision (pp. 548–562). Springer. – reference: Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., & Lerer, A. (2017). Automatic differentiation in PyTorch. In: NIPS Autodiff Workshop. – reference: Zakharov, S., Kehl, W., Planche, B., Hutter, A., & Ilic, S. (2017). 3d object instance recognition and pose estimation using triplet loss with dynamic margin. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp 552–559). IEEE. – reference: Zhou, Y., & Tuzel, O. (2018). Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4490–4499). – reference: PiascoNSidibéDDemonceauxCGouet-BrunetVA survey on visual-based localization: On the benefit of heterogeneous dataPattern Recognition2018749010910.1016/j.patcog.2017.09.013 – reference: Aoki, Y., Goforth, H., Srivatsan, R. A., & Lucey, S. (2019). Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7163–7172). – reference: Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks. In Proceedings of the 34th international conference on machine learning (Vol. 70, pp. 1321–1330). JMLR.org. – reference: Makansi, O., Ilg, E., Cicek, O., & Brox, T. (2019). Overcoming limitations of mixture density networks: A sampling and fitting framework for multimodal future prediction. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7144–7153). – reference: Liao, S., Gavves, E., & Snoek, C. G. (2019). Spherical regression: Learning viewpoints, surface normals and 3d rotations on n-spheres. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 9759–9767). – reference: Wang, Y., & Solomon, J. M. (2019a). Deep closest point: Learning representations for point cloud registration. arXiv preprint arXiv:1905.03304 – reference: Pitteri, G., Ramamonjisoa, M., Ilic, S., & Lepetit, V. (2019). On object symmetries and 6d pose estimation from images. In 2019 International conference on 3D vision (3DV) (pp. 614–622). IEEE. – reference: MorawiecAFieldDRodrigues parameterization for orientation and misorientation distributionsPhilosophical Magazine A19967341113113010.1080/01418619608243708 – reference: Brahmbhatt, S., Gu, J., Kim, K., Hays, J., & Kautz, J. (2018). Geometry-aware learning of maps for camera localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2616–2625). – reference: Kendall, A., & Gal, Y. (2017). What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in neural information processing systems (NIPS). – reference: Kanezaki, A., Matsushita, Y., & Nishida, Y. (2018). Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5010–5019). 
– reference: Shotton, J., Glocker, B., Zach, C., Izadi, S., Criminisi, A., & Fitzgibbon, A. (2013). Scene coordinate regression forests for camera relocalization in RGB-D images. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2930–2937). – reference: Birdal, T., & Ilic, S. (2017). Cad priors for accurate and flexible instance reconstruction. In Proceedings of the IEEE international conference on computer vision (pp. 133–142). – reference: BreimanLRandom forestsMachine Learning200145153210.1023/A:1010933404324 – reference: Kendall, A., & Cipolla, R. (2016). Modelling uncertainty in deep learning for camera relocalization. In 2016 IEEE international conference on robotics and automation (ICRA) (pp. 4762–4769). IEEE. – reference: Prokudin, S., Gehler, P., & Nowozin, S. (2018). Deep directional statistics: Pose estimation with uncertainty quantification. In Proceedings of the European conference on computer vision (ECCV) (pp. 534–551). – reference: Kurz, G., Gilitschenski, I., Pfaff, F., Drude, L., Hanebeck, U. D., Haeb-Umbach, R., & Siegwart, R. Y. (2017). Directional statistics and filtering using libdirectional. arXiv preprint arXiv:1712.09718 – reference: Chen, J., Yin, Y., Birdal, T., Chen, B., Guibas, L., & Wang, H. (2022). Projective manifold gradient layer for deep rotation regression. In The IEEE conference on computer vision and pattern recognition (CVPR). – reference: Murray, R. M. (1994). A mathematical introduction to robotic manipulation. CRC Press. – reference: Busam, B., Birdal, T., & Navab, N. (2017). Camera pose filtering with local regression geodesics on the Riemannian manifold of dual quaternions. In Proceedings of the IEEE international conference on computer vision workshops (pp. 2436–2445). – reference: Dey, D., Ramakrishna, V., Hebert, M., & Andrew Bagnell, J. (2015). Predicting multiple structured visual interpretations. In Proceedings of the IEEE international conference on computer vision (pp. 2947–2955). – reference: Feng, W., Tian, F. P., Zhang, Q., & Sun, J. (2016). 6d dynamic camera relocalization from single reference image. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4049–4057). – reference: Birdal, T., Bala, E., Eren, T., & Ilic, S. (2016). Online inspection of 3d parts via a locally overlapping camera network. In 2016 IEEE Winter conference on applications of computer vision (WACV) (pp. 1–10). IEEE. – reference: Kendall, A., & Cipolla, R. (2016). Modelling uncertainty in deep learning for camera relocalization. In Proceedings of the international conference on robotics and automation (ICRA). – reference: Kurz, G., Gilitschenski, I., Julier, S., & Hanebeck, U. D. (2013). Recursive estimation of orientation based on the Bingham distribution. In 2013 16th International conference on information fusion (FUSION) (pp. 1487–1494). IEEE. 
| SubjectTerms | Artificial Intelligence; Computer Imaging; Computer Science; Computer vision; Datasets; Human-computer interaction; Image Processing and Computer Vision; Machine vision; Networks; Numerical methods; Numerical stability; Pattern Recognition; Pattern Recognition and Graphics; Pose estimation; Solution space; Special Issue on 3D Computer Vision; Three dimensional models; Training; Uncertainty; Vision |
| URI | https://link.springer.com/article/10.1007/s11263-022-01612-w; https://www.proquest.com/docview/2681283881 |