Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and Applications
| Published in: | IEEE Transactions on Robotics, vol. 38, no. 1, pp. 281-301 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.02.2022 |
| Subjects: | |
| ISSN: | 1552-3098, 1941-0468 |
| Summary: | Nonlinear estimation in robotics and vision is typically plagued with outliers due to wrong data association or incorrect detections from signal processing and machine learning methods. This article introduces two unifying formulations for outlier-robust estimation, generalized maximum consensus (G-MC) and generalized truncated least squares (G-TLS), and investigates fundamental limits, practical algorithms, and applications. Our first contribution is a proof that outlier-robust estimation is inapproximable: in the worst case, it is impossible to (even approximately) find the set of outliers, even with slower-than-polynomial-time algorithms (in particular, algorithms running in quasi-polynomial time). As a second contribution, we review and extend two general-purpose algorithms. The first, adaptive trimming (ADAPT), is combinatorial and is suitable for G-MC; the second, graduated nonconvexity (GNC), is based on homotopy methods and is suitable for G-TLS. We extend ADAPT and GNC to the case where the user has no prior knowledge of the inlier-noise statistics (or the statistics may vary over time) and cannot guess a reasonable threshold to separate inliers from outliers (as is commonly done in RANdom SAmple Consensus, RANSAC). We propose the first minimally tuned algorithms for outlier rejection, which dynamically decide how to separate inliers from outliers. Our third contribution is an evaluation of the proposed algorithms on robot perception problems: mesh registration, image-based object detection (shape alignment), and pose graph optimization. ADAPT and GNC execute in real time, are deterministic, outperform RANSAC, and are robust up to 80-90% outliers. Their minimally tuned versions also compare favorably with the state of the art, even though they do not rely on a noise bound for the inliers. |
|---|---|
| DOI: | 10.1109/TRO.2021.3094984 |
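The graduated nonconvexity (GNC) idea mentioned in the summary — starting from a near-convex surrogate of the truncated least squares (TLS) cost and gradually tightening it — can be illustrated on a toy robust line-fitting problem. The sketch below is an illustrative reconstruction of a GNC-style iteratively reweighted scheme, not the paper's implementation; the truncation threshold `cbar`, the schedule factor 1.4, the iteration count, and the toy data are all assumptions.

```python
import numpy as np

def gnc_tls_fit(x, y, cbar=0.1, mu_factor=1.4, iters=50):
    """Illustrative GNC-style sketch for truncated least squares (TLS).

    Fits y ~ a*x by iteratively reweighted least squares. The surrogate
    cost starts near-convex (small mu) and is gradually tightened toward
    the truncated cost as mu grows. Parameters are illustrative.
    """
    w = np.ones_like(y)                           # initially treat every point as an inlier
    a = np.sum(w * x * y) / np.sum(w * x * x)     # unweighted LS slope
    r2 = (y - a * x) ** 2
    mu = max(cbar**2 / (2 * r2.max() - cbar**2), 1e-6)  # near-convex start
    for _ in range(iters):
        a = np.sum(w * x * y) / np.sum(w * x * x) # weighted LS slope (closed form)
        r2 = (y - a * x) ** 2
        # TLS-style weights: 1 for clear inliers, 0 for clear outliers,
        # a smooth transition in between; the band shrinks as mu grows.
        low = mu / (mu + 1) * cbar**2
        high = (mu + 1) / mu * cbar**2
        mid = cbar * np.sqrt(mu * (mu + 1) / np.maximum(r2, 1e-12)) - mu
        w = np.where(r2 <= low, 1.0, np.where(r2 >= high, 0.0, mid))
        mu *= mu_factor                           # tighten the surrogate toward TLS
    return a, w

# Toy data: true slope 2, with the first 20 of 50 points grossly corrupted.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.01, 50)
y[:20] += rng.uniform(5.0, 10.0, 20)              # 40% outliers
a, w = gnc_tls_fit(x, y)
```

Despite the 40% gross outliers, the recovered slope stays close to the true value and the final weights are near-binary, separating inliers from outliers without a combinatorial search over outlier sets — the behavior the summary attributes to GNC, here shown only in schematic form.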