Deep Hough Transform for Semantic Line Detection.

Bibliographic Details
Title: Deep Hough Transform for Semantic Line Detection.
Authors: Zhao, Kai, Han, Qi, Zhang, Chang-Bin, Xu, Jun, Cheng, Ming-Ming
Source: IEEE Transactions on Pattern Analysis & Machine Intelligence; Sep 2022, Vol. 44, Issue 9, p4793-4806, 14p
Subject Terms: HOUGH transforms, SOURCE code, FEATURE extraction, DEEP learning
Abstract: We focus on a fundamental task of detecting meaningful line structures, a.k.a. semantic lines, in natural scenes. Many previous methods regard this problem as a special case of object detection and adjust existing object detectors for semantic line detection. However, these methods neglect the inherent characteristics of lines, leading to sub-optimal performance. Lines have much simpler geometric properties than complex objects and thus can be compactly parameterized by a few arguments. To better exploit the property of lines, in this paper, we incorporate the classical Hough transform technique into deeply learned representations and propose a one-shot end-to-end learning framework for line detection. By parameterizing lines with slopes and biases, we perform Hough transform to translate deep representations into the parametric domain, in which we perform line detection. Specifically, we aggregate features along candidate lines on the feature map plane and then assign the aggregated features to corresponding locations in the parametric domain. Consequently, the problem of detecting semantic lines in the spatial domain is transformed into spotting individual points in the parametric domain, making the post-processing steps, i.e., non-maximal suppression, more efficient. Furthermore, our method makes it easy to extract contextual line features that are critical for accurate line detection. In addition to the proposed method, we design an evaluation metric to assess the quality of line detection and construct a large-scale dataset for the line detection task. Experimental results on our proposed dataset and another public dataset demonstrate the advantages of our method over previous state-of-the-art alternatives. The dataset and source code are available at https://mmcheng.net/dhtline/. [ABSTRACT FROM AUTHOR]
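The core idea the abstract describes — aggregating feature-map activations along candidate lines and assigning the sums to points in a parametric domain, so that line detection becomes peak spotting — can be illustrated with a classical Hough-style voting loop. This is a minimal sketch of the underlying technique, not the authors' implementation: the function name `deep_hough`, the (theta, rho) parameterization, and the bin counts are illustrative assumptions.

```python
import numpy as np

def deep_hough(feature_map, num_angles=60, num_rhos=60):
    """Illustrative Hough-style feature aggregation (not the paper's code):
    sum feature-map activations along each candidate line (theta, rho)
    into a 2-D parametric accumulator."""
    h, w = feature_map.shape
    # Candidate line angles in [0, pi) and offsets covering the image diagonal.
    thetas = np.linspace(0.0, np.pi, num_angles, endpoint=False)
    diag = np.hypot(h, w)
    rhos = np.linspace(-diag, diag, num_rhos)

    ys, xs = np.indices((h, w))
    ys, xs = ys.ravel(), xs.ravel()
    vals = feature_map[ys, xs]

    acc = np.zeros((num_angles, num_rhos))
    for i, t in enumerate(thetas):
        # Each pixel lies on the line with rho = x*cos(theta) + y*sin(theta).
        r = xs * np.cos(t) + ys * np.sin(t)
        bins = np.clip(np.digitize(r, rhos) - 1, 0, num_rhos - 1)
        # Accumulate the pixel's feature value into its (theta, rho) cell.
        np.add.at(acc[i], bins, vals)
    return acc, thetas, rhos
```

A line in the spatial domain then shows up as a single bright point in `acc`, so detection reduces to finding local maxima there — which is why non-maximal suppression in the parametric domain is cheap compared with suppressing overlapping line proposals in image space.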
Copyright of IEEE Transactions on Pattern Analysis & Machine Intelligence is the property of IEEE and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Biomedical Index
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2021.3077129