Contrastive Multi-View Kernel Learning


Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 8, pp. 9552-9566
Main Authors: Liu, Jiyuan, Liu, Xinwang, Yang, Yuexiang, Liao, Qing, Xia, Yuanqing
Format: Journal Article
Language:English
Published: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), United States, 01.08.2023
ISSN: 0162-8828, 1939-3539, 2160-9292
Description
Summary: The kernel method is a proven technique in multi-view learning: it implicitly defines a Hilbert space in which samples can be linearly separated. Most kernel-based multi-view learning algorithms compute a kernel function that aggregates and compresses the views into a single kernel. However, existing approaches compute the kernel of each view independently, ignoring complementary information across views, which may result in a poor kernel choice. In contrast, we propose the Contrastive Multi-view Kernel, a novel kernel function based on the emerging contrastive learning framework. The Contrastive Multi-view Kernel implicitly embeds the views into a joint semantic space in which they all resemble one another, while encouraging the learning of diverse views. We validate the method's effectiveness in a large empirical study. Notably, the proposed kernel functions share their types and parameters with traditional ones, making them fully compatible with existing kernel theory and applications. On this basis, we also propose a contrastive multi-view clustering framework and instantiate it with multiple kernel k-means, achieving promising performance. To the best of our knowledge, this is the first attempt to explore kernel generation in the multi-view setting, and the first approach to use contrastive learning for multi-view kernel learning.
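As background for the abstract above: the paper instantiates its clustering framework with multiple kernel k-means. The sketch below shows only the generic baseline it builds on, i.e. kernel k-means run on a uniformly weighted combination of per-view kernels; it is not the paper's contrastive kernel, and all function names and the synthetic two-view data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_kmeans(K, n_clusters, n_iter=100, seed=0):
    """Lloyd-style k-means carried out in the feature space induced by K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, n_clusters, size=n)
    for _ in range(n_iter):
        # Squared distance of point i to the centroid of cluster c:
        # K_ii - 2 * mean_j K_ij + mean_{j,j'} K_jj'  with j, j' in cluster c
        dist = np.full((n, n_clusters), np.inf)
        for c in range(n_clusters):
            idx = labels == c
            if not idx.any():
                continue  # empty cluster keeps infinite distance
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Two synthetic "views" of the same 40 samples with 2 ground-truth clusters.
rng = np.random.default_rng(1)
y_true = np.repeat([0, 1], 20)
view1 = rng.normal(0.0, 0.3, (40, 2)) + 3.0 * y_true[:, None]
view2 = rng.normal(0.0, 0.3, (40, 2)) - 3.0 * y_true[:, None]

# Uniformly weighted kernel combination: the simplest multiple-kernel baseline.
K = 0.5 * rbf_kernel(view1, gamma=0.5) + 0.5 * rbf_kernel(view2, gamma=0.5)
labels = kernel_kmeans(K, n_clusters=2)
```

The paper's contribution replaces the independent per-view kernels in this combination with kernels computed jointly under a contrastive objective; the clustering step itself remains standard multiple kernel k-means.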
DOI:10.1109/TPAMI.2023.3253211