Theoretical insights on the pre-image resolution in machine learning

While many nonlinear pattern recognition and data mining tasks rely on embedding the data into a latent space, one often needs to extract the patterns in the input space. Estimating the inverse of the nonlinear embedding is the so-called pre-image problem. Several strategies have been proposed to address the estimation of the pre-image.

Bibliographic Details
Published in: Pattern Recognition, Vol. 156, p. 110800
Main Author: Honeine, Paul
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2024
Subjects:
ISSN: 0031-3203
Online Access: Get full text
Abstract While many nonlinear pattern recognition and data mining tasks rely on embedding the data into a latent space, one often needs to extract the patterns in the input space. Estimating the inverse of the nonlinear embedding is the so-called pre-image problem. Several strategies have been proposed to address the estimation of the pre-image; however, there are no theoretical results so far to understand the pre-image problem and its resolution. In this paper, we provide theoretical underpinnings of the resolution of the pre-image problem in machine learning. These theoretical results concern gradient descent optimization, the fixed-point iteration algorithm, and Newton's method. We provide sufficient conditions on the convexity/nonconvexity of the pre-image problem. Moreover, we show that the fixed-point iteration is a Newton update, and prove that it is a Majorize-Minimization (MM) algorithm whose surrogate function is quadratic. These theoretical results are derived for the wide classes of radial kernels and projective kernels. We also provide further insights by connecting the resolution of this problem to the gradient density estimation problem addressed by the so-called mean shift algorithm.
Highlights:
• Solid foundations on the resolution of the pre-image problem in machine learning
• Relationship between the fixed-point iteration technique and Newton's method
• Fixed-point iteration is a Majorize-Minimization algorithm
• General theoretical results for the wide classes of radial and projective kernels
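For the Gaussian (radial) kernel, the fixed-point iteration the abstract refers to has a simple closed form: each new iterate is a kernel-weighted average of the training points. The sketch below is illustrative only — the function and variable names are assumptions, not taken from the paper — and implements the classical update for the pre-image of an expansion Ψ = Σᵢ γᵢ φ(xᵢ) in feature space.

```python
import numpy as np

def rbf_weights(x, X, sigma):
    """Gaussian kernel values k(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2))
    for all rows x_i of the training matrix X."""
    return np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * sigma ** 2))

def preimage_fixed_point(X, gamma, sigma, x0, n_iter=100, tol=1e-8):
    """Fixed-point iteration for the pre-image of Psi = sum_i gamma_i phi(x_i)
    under a Gaussian kernel:

        x  <-  sum_i gamma_i k(x, x_i) x_i  /  sum_i gamma_i k(x, x_i)

    i.e. each iterate is a kernel-weighted average of the training points.
    (The paper's claim is that this update is also a Newton step and an MM
    iteration with a quadratic surrogate.)"""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        w = gamma * rbf_weights(x, X, sigma)       # weights gamma_i k(x, x_i)
        x_new = w @ X / w.sum()                    # weighted average of rows of X
        if np.linalg.norm(x_new - x) < tol:        # stop once the iterate stabilizes
            return x_new
        x = x_new
    return x
```

Note that with uniform coefficients (gamma_i = 1/n) the same update is exactly the mean shift step on a kernel density estimate, which is the connection to gradient density estimation that the abstract highlights.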
ArticleNumber 110800
Author Honeine, Paul
Author_xml – sequence: 1
  givenname: Paul
  surname: Honeine
  fullname: Honeine, Paul
  email: paul.honeine@univ-rouen.fr
  organization: Univ Rouen Normandie, INSA Rouen Normandie, Université Le Havre Normandie, Normandie Univ, LITIS UR 4108, F-76000 Rouen, France
BackLink https://hal.science/hal-04648777 (View record in HAL)
CitedBy_id crossref_primary_10_1016_j_patrec_2025_02_005
ContentType Journal Article
Copyright 2024
Distributed under a Creative Commons Attribution 4.0 International License
DOI 10.1016/j.patcog.2024.110800
DatabaseName CrossRef
Hyper Article en Ligne (HAL)
Hyper Article en Ligne (HAL) (Open Access)
DatabaseTitle CrossRef
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
ExternalDocumentID oai:HAL:hal-04648777v1
10_1016_j_patcog_2024_110800
S003132032400551X
ISICitedReferencesCount 1
ISSN 0031-3203
IngestDate Tue Oct 28 06:33:32 EDT 2025
Sat Nov 29 03:52:40 EST 2025
Tue Nov 18 20:41:47 EST 2025
Sat Sep 07 15:51:41 EDT 2024
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Majorize-minimization algorithm
Newton’s method
Fixed-point iteration
Pattern recognition
Machine learning
Pre-image problem
Language English
License Distributed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0
LinkModel OpenURL
ORCID 0000-0002-3042-183X
OpenAccessLink https://hal.science/hal-04648777
ParticipantIDs hal_primary_oai_HAL_hal_04648777v1
crossref_primary_10_1016_j_patcog_2024_110800
crossref_citationtrail_10_1016_j_patcog_2024_110800
elsevier_sciencedirect_doi_10_1016_j_patcog_2024_110800
PublicationCentury 2000
PublicationDate 2024-12-01
PublicationDecade 2020
PublicationTitle Pattern recognition
PublicationYear 2024
Publisher Elsevier Ltd
Elsevier
References Jia, Gaüzère, Honeine (b7) 2021
Arias-Castro, Mason, Pelletier (b35) 2016; 17
Fan, Chow (b5) 2018; 77
Aliyari Ghassabeh (b34) 2013; 34
C.S. Ong, X. Mary, S. Canu, A.J. Smola, Learning with non-positive kernels, in: Proc. 21st International Conference on Machine Learning, 2004, p. 81.
Zhu, Honeine (b3) 2017; 131
Fukunaga, Hostetler (b30) 1975; 21
Li, Hu, Wu (b32) 2007; 40
S. Mika, B. Schölkopf, A. Smola, K.R. Müller, M. Scholz, G. Rätsch, Kernel PCA and de-noising in feature spaces, in: Proc. Conf. on Advances in Neural Information Processing Systems II, 1999, pp. 536–542.
Honeine, Richard (b1) 2011; 28
Cheng (b20) 1995; 17
Tran Thi Phuong, Douzal, Yazdi, Honeine, Gallinari (b6) 2020; 286
Salazar, Rios, Aceros, Flórez-Vargas, Valencia (b4) 2021; 9
vor der Brück, Eger, Mehler (b18) 2015
Pandey, Schreurs, Suykens (b12) 2021; 135
Schölkopf (b17) 2000; 13
Celikkanat, Shen, Malliaros (b9) 2022
Fashing, Tomasi (b29) 2005; 27
Chen, Genovese, Wasserman (b39) 2014
Carreira-Perpinan (b27) 2000; 22
Pandey, De Meulemeester, De Moor, Suykens (b11) 2023; 554
Yamasaki, Tanaka (b37) 2024
Shankar, Fang, Guo, Fridovich-Keil, Schmidt, Ragan-Kelley, Recht (b13) 2020
He, He, Shi, Huang, Suykens (b19) 2023
Tax, Juszczak (b25) 2003; 17
Golub, Van Loan (b26) 1996
P. Esser, M. Fleissner, D. Ghoshdastidar, Non-parametric representation learning with kernels, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2024, pp. 11910–11918.
Yamasaki, Tanaka (b36) 2020; 42
Aliyari Ghassabeh (b21) 2015; 135
El Ahmad, Brogat-Motte, Laforgue, d’Alché Buc (b10) 2024
Cucker, Smale (b22) 2002; 39
Burges (b23) 1999
Comaniciu, Meer (b31) 2002; 24
Unser (b16) 2021; 21
M. Botsch, J.A. Nossek, Construction of interpretable radial basis function classifiers based on the random forest kernel, in: 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 2008, pp. 220–227.
Schölkopf, Mika, Burges, Knirsch, Muller, Ratsch, Smola (b2) 1999; 10
Abedsoltan, Belkin, Pandit (b15) 2023
L. Jia, X. Ning, B. Gaüzère, P. Honeine, K. Riesen, Bridging distinct spaces in graph-based machine learning, in: M. Blumenstein, H. Lu, W. Yang, S.B. Cho (Eds.), Proceedings of the 7th Asian Conference on Pattern Recognition, ACPR, Kitakyushu, Japan, 2023.
Carreira-Perpinan (b33) 2007; 29
Kwok, Tsang (b40) 2003
SourceID hal
crossref
elsevier
SourceType Open Access Repository
Enrichment Source
Index Database
Publisher
StartPage 110800
SubjectTerms Artificial Intelligence
Computer Science
Fixed-point iteration
Machine Learning
Majorize-minimization algorithm
Newton’s method
Pattern recognition
Pre-image problem
Title Theoretical insights on the pre-image resolution in machine learning
URI https://dx.doi.org/10.1016/j.patcog.2024.110800
https://hal.science/hal-04648777
Volume 156
WOSCitedRecordID wos001279597400001