Captured multi-label relations via joint deep supervised autoencoder

Detailed Bibliography
Published in: Applied Soft Computing, Vol. 74, pp. 709-728
Main Authors: Lian, Si-ming; Liu, Jian-wei; Lu, Run-kun; Luo, Xiong-lin
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2019
ISSN:1568-4946, 1872-9681
Description
Summary: Learning the mapping relations between instances and multiple labels should reflect the underlying joint probability distribution followed by the data sets. The general solution to such a problem is to assume that the samples follow a certain distribution, e.g., a normal distribution, but this hypothesis cannot uncover the real underlying mapping relations hidden in the data sets. Meanwhile, it is not advisable to assume that multiple labels are independent of each other. Therefore, we propose the deep supervised autoencoder as a generative model to learn the posterior conditional probability rather than assigning a specific distribution in advance. In this way, we construct different joint augmented matrices of training instances Xi and corresponding label sets Yi under three multi-label relation assumptions and use them as inputs to learn the posterior probability distribution. Finally, experiments under these model assumptions are conducted on six data sets, and we also set different noise levels to verify whether the optimal hypothesis can handle corrupted labels. Experiments on real-world image, biology, and music data sets show that our method outperforms most state-of-the-art multi-label classifiers.
• Three assumptions: multi-label independence, dependence, and partial dependence.
• The representation for multiple labels retains the information of label locations and values.
• Deep abstract representations are extracted by the deep supervised encoder.
• We introduce a flipping probability to fight against noisy multi-labels.
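The abstract describes building joint augmented matrices from instances Xi and label sets Yi under three assumptions about multi-label relations (independence, full dependence, partial dependence). The following is a minimal illustrative sketch of what such augmented inputs could look like; the function names, grouping scheme, and exact augmentation are assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

def augment_independent(X, Y):
    # Label independence: each label column is appended to X separately,
    # yielding one augmented matrix per label.
    return [np.hstack([X, Y[:, [j]]]) for j in range(Y.shape[1])]

def augment_dependent(X, Y):
    # Full label dependence: the whole label matrix is concatenated,
    # so the model sees all labels jointly.
    return np.hstack([X, Y])

def augment_partial(X, Y, groups):
    # Partial dependence: labels are dependent within a group and
    # independent across groups (the grouping here is hypothetical).
    return [np.hstack([X, Y[:, g]]) for g in groups]

X = np.random.rand(4, 5)            # 4 instances, 5 features
Y = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1]])           # 3 binary labels

print(augment_dependent(X, Y).shape)                   # (4, 8)
print(len(augment_independent(X, Y)))                  # 3
print(augment_partial(X, Y, [[0, 1], [2]])[0].shape)   # (4, 7)
```

Each augmented matrix would then serve as input to the deep supervised autoencoder, which learns the posterior conditional probability instead of assuming a distribution in advance.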
DOI:10.1016/j.asoc.2018.10.035