SAL-Net: Self-Supervised Attribute Learning for Object Recognition and Segmentation


Bibliographic Details
Published in: Wireless Communications and Mobile Computing, Vol. 2021, no. 1
Main Authors: Yang, Shu; Wang, Jing; Arif, Sheeraz; Jia, Minli; Zhong, Shunan
Format: Journal Article
Language: English
Published: Oxford: Hindawi; John Wiley & Sons, Inc., 2021
ISSN: 1530-8669, 1530-8677
Description
Summary: Existing attribute learning methods rely on predefined attributes, which require manual annotations. Because predefined attributes are limited by human experience, they often fail to provide a sufficiently rich description. This paper proposes a self-supervised attribute learning (SAL) method that addresses these problems by automatically generating attribute descriptions through differential occlusion of the object region. The relationships between attributes are formulated with triplet loss functions and used to supervise the CNN. Attribute learning serves as an auxiliary task in a multitask image classification and segmentation network, where the self-supervision of attributes drives the CNN to learn more discriminative features for the main semantic tasks. Experimental results on the public benchmarks CUB-2011 and Pascal VOC show that the proposed SAL-Net obtains more accurate classification and segmentation results without additional annotations. Moreover, SAL-Net is embedded into a multiobject recognition and segmentation system, which realizes instance-aware semantic segmentation with the help of a region proposal algorithm and a fusion non-maximum suppression algorithm.
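
The abstract describes supervising the attribute branch with triplet losses computed over differentially occluded object regions. The sketch below is a minimal illustration of that idea, not the authors' code: it assumes a standard PyTorch triplet margin loss, and the names (AttributeHead, attr_dim, the 0.2 margin, the lambda_attr weight) are illustrative placeholders rather than values from the paper.

```python
# Illustrative sketch (assumption, not SAL-Net's actual implementation):
# attribute embeddings from an unoccluded crop (anchor), a lightly occluded crop
# (positive), and a heavily occluded crop (negative) are related by a triplet
# loss, so the occlusion pattern itself supervises the attribute branch
# without any manual attribute annotations.
import torch
import torch.nn as nn

class AttributeHead(nn.Module):
    """Hypothetical auxiliary head mapping backbone features to an attribute embedding."""
    def __init__(self, in_dim: int = 2048, attr_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, attr_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.proj(feats), dim=-1)

triplet = nn.TripletMarginLoss(margin=0.2)  # margin is a placeholder value
head = AttributeHead()

# In practice these features would come from the shared CNN backbone applied
# to differently occluded versions of the same object region.
feats_anchor   = torch.randn(8, 2048)  # original object region
feats_positive = torch.randn(8, 2048)  # mildly occluded region (similar attributes)
feats_negative = torch.randn(8, 2048)  # heavily occluded region (dissimilar attributes)

attr_loss = triplet(head(feats_anchor), head(feats_positive), head(feats_negative))
# total_loss = cls_loss + seg_loss + lambda_attr * attr_loss  # auxiliary-task weighting
```

In such a multitask setup, the triplet term only shapes the shared features; the classification and segmentation heads remain the main outputs, consistent with attribute learning being an auxiliary task.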
DOI: 10.1155/2021/2891303