Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 33, pp. 2835-2850
Main authors: Qin, Wenjin; Wang, Hailin; Zhang, Feng; Ma, Weijun; Wang, Jianjun; Huang, Tingwen
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2024
ISSN: 1057-7149, 1941-0042
Online access: Full text
Abstract Within the tensor singular value decomposition (T-SVD) framework, existing robust low-rank tensor completion approaches have achieved great success in various areas of science and engineering. Nevertheless, these methods rely on T-SVD-based low-rank approximation, which suffers from high computational costs when dealing with large-scale tensor data. Moreover, most of them are applicable only to third-order tensors. To address these issues, this article first devises two efficient low-rank tensor approximation approaches that fuse random projection techniques under the order-d (d ≥ 3) T-SVD framework. Theoretical error bounds for the proposed randomized algorithms are provided. On this basis, we then investigate the robust high-order tensor completion problem, for which a doubly nonconvex model and corresponding fast optimization algorithms with convergence guarantees are developed. Experimental results on large-scale synthetic and real tensor data illustrate that the proposed method outperforms other state-of-the-art approaches in both computational efficiency and estimation accuracy.
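The random-projection idea the abstract refers to can be illustrated, in the simpler matrix setting, by the classical randomized range-finder sketch (a generic illustration under stated assumptions, not the paper's order-d tensor algorithm): sample the range of the data with a Gaussian test matrix, orthonormalize, and take an SVD of the resulting small matrix.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, seed=0):
    """Generic randomized low-rank matrix approximation (illustrative sketch).

    Not the paper's tensor algorithm: this is the standard matrix-level
    recipe that randomized T-SVD methods lift to the tensor setting.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Gaussian sketch of the column space of A.
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for range(Y)
    B = Q.T @ A                       # small (rank + oversample) x n matrix
    # SVD of the small matrix yields an approximate truncated SVD of A.
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :rank], s[:rank], Vt[:rank]

# Usage: a 500 x 400 matrix of exact rank 5 is recovered to machine precision,
# at the cost of one QR and one small SVD instead of a full 500 x 400 SVD.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 400))
U, s, Vt = randomized_lowrank(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The oversampling parameter trades a slightly larger sketch for a sharper error bound, which is the same cost/accuracy trade-off the article's error-bound theorems quantify in the tensor case.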
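For context on the T-SVD framework the abstract builds on: under the t-product, a third-order tensor is block-diagonalized by the FFT along its third mode, and a matrix SVD is taken per frontal slice in the Fourier domain. A minimal (non-randomized, third-order) truncation sketch, which the article's methods both accelerate and generalize to order d ≥ 3:

```python
import numpy as np

def tsvd_truncate(T, tubal_rank):
    """Tubal-rank truncation of a third-order tensor via the t-SVD.

    Illustrative sketch of the classical construction: FFT along mode 3,
    truncated matrix SVD per frontal slice, inverse FFT back.
    """
    n1, n2, n3 = T.shape
    That = np.fft.fft(T, axis=2)                 # to the Fourier domain
    Lhat = np.empty_like(That)
    for k in range(n3):                          # per-slice truncated SVD
        U, s, Vt = np.linalg.svd(That[:, :, k], full_matrices=False)
        r = tubal_rank
        Lhat[:, :, k] = (U[:, :r] * s[:r]) @ Vt[:r]
    # Conjugate symmetry of the slices makes the inverse FFT real
    # up to rounding, since T itself was real.
    return np.real(np.fft.ifft(Lhat, axis=2))

# Usage: truncation is a projection, so truncating an already
# tubal-rank-3 tensor reproduces it up to floating-point error.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 15, 8))
L = tsvd_truncate(X, 3)                          # a tubal-rank-3 tensor
err = np.linalg.norm(tsvd_truncate(L, 3) - L) / np.linalg.norm(L)
```

The per-slice SVD loop is exactly the cost bottleneck motivating the randomized variants: every frontal slice requires a full matrix SVD, which becomes expensive for large-scale tensors.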
Author Huang, Tingwen
Wang, Jianjun
Wang, Hailin
Ma, Weijun
Qin, Wenjin
Zhang, Feng
Author_xml – sequence: 1
  givenname: Wenjin
  orcidid: 0000-0002-8064-4206
  surname: Qin
  fullname: Qin, Wenjin
  email: qinwenjin2021@163.com
  organization: School of Mathematics and Statistics, Southwest University, Chongqing, China
– sequence: 2
  givenname: Hailin
  orcidid: 0000-0002-7797-2719
  surname: Wang
  fullname: Wang, Hailin
  email: wanghailin97@163.com
  organization: School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
– sequence: 3
  givenname: Feng
  orcidid: 0000-0003-1000-8877
  surname: Zhang
  fullname: Zhang, Feng
  email: zfmath@swu.edu.cn
  organization: School of Mathematics and Statistics, Southwest University, Chongqing, China
– sequence: 4
  givenname: Weijun
  orcidid: 0000-0003-1949-4541
  surname: Ma
  fullname: Ma, Weijun
  email: Weijunma_2008@sina.com
  organization: School of Information Engineering, Ningxia University, Yinchuan, China
– sequence: 5
  givenname: Jianjun
  orcidid: 0000-0002-5344-4460
  surname: Wang
  fullname: Wang, Jianjun
  email: wjj@swu.edu.cn
  organization: School of Mathematics and Statistics, Research Institute of Intelligent Finance and Digital Economics, Southwest University, Chongqing, China
– sequence: 6
  givenname: Tingwen
  orcidid: 0000-0001-9610-846X
  surname: Huang
  fullname: Huang, Tingwen
  email: tingwen.huang@qatar.tamu.edu
  organization: Department of Mathematics, Texas A&M University at Qatar, Doha, Qatar
BackLink https://www.ncbi.nlm.nih.gov/pubmed/38598373 (View this record in MEDLINE/PubMed)
CODEN IIPRE4
ContentType Journal Article
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
DOI 10.1109/TIP.2024.3385284
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE/IET Electronic Library
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic

Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Applied Sciences
Engineering
EISSN 1941-0042
EndPage 2850
ExternalDocumentID 38598373
10_1109_TIP_2024_3385284
10496551
Genre orig-research
Journal Article
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: 12071380; 12101512; 62063028
  funderid: 10.13039/501100001809
– fundername: Natural Science Foundation of Ningxia Province; Natural Science Foundation of Ningxia
  grantid: 2023AAC03017
  funderid: 10.13039/501100004772
– fundername: Fundamental Research Funds for the Central Universities
  grantid: SWU120078
  funderid: 10.13039/501100012226
– fundername: Joint Funds of the Natural Science Innovation-Driven Development of Chongqing, China
  grantid: 2023NSCQ-LZX0218
– fundername: National Key Research and Development Program of China
  grantid: 2023YFA1008500
  funderid: 10.13039/501100012166
– fundername: Chongqing Talent Project, China
  grantid: cstc2021ycjh-bgzxm0015
– fundername: China Postdoctoral Science Foundation
  grantid: 2021M692681
  funderid: 10.13039/501100002858
ISICitedReferencesCount 12
ISSN 1057-7149
1941-0042
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-7797-2719
0000-0002-8064-4206
0000-0003-1949-4541
0000-0003-1000-8877
0000-0002-5344-4460
0000-0001-9610-846X
PMID 38598373
PQID 3040055593
PQPubID 85429
PageCount 16
PublicationCentury 2000
PublicationDate 2024-01-01
PublicationDateYYYYMMDD 2024-01-01
PublicationDate_xml – month: 01
  year: 2024
  text: 2024-01-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: New York
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationTitleAlternate IEEE Trans Image Process
PublicationYear 2024
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 2835
SubjectTerms ADMM algorithm
Algorithms
Approximation
Approximation algorithms
Computational modeling
Computing costs
High-order T-SVD framework
Image reconstruction
Indexes
Mathematical analysis
nonconvex regularizers
Optimization
randomized low-rank tensor approximation
robust high-order tensor completion
Robustness
Singular value decomposition
Tensors
Title Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation
URI https://ieeexplore.ieee.org/document/10496551
https://www.ncbi.nlm.nih.gov/pubmed/38598373
https://www.proquest.com/docview/3040055593
https://www.proquest.com/docview/3037396620
Volume 33
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 1941-0042
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014516
  issn: 1057-7149
  databaseCode: RIE
  dateStart: 19920101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE