Visual saliency detection via invariant feature constrained stacked denoising autoencoder


Detailed Description

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 82, Issue 18, pp. 27451-27472
Main authors: Ma, Yunpeng; Yu, Zhihong; Zhou, Yaqin; Xu, Chang; Yu, Dabing
Format: Journal Article
Language: English
Published: New York: Springer US, 01.07.2023 (Springer Nature B.V.)
ISSN: 1380-7501, 1573-7721
Online access: Full text
Abstract Visual saliency detection is usually regarded as an image pre-processing step that predicts and locates the position and shape of salient regions. However, many existing saliency detection methods recover only a partial, or even incorrect, position and shape of the salient regions, resulting in incomplete detection and segmentation of the salient target region. To solve this problem, a visual saliency detection method based on scale-invariant features and a stacked denoising autoencoder is proposed. First, a deep belief network is pretrained to initialize the parameters of the stacked denoising autoencoder. Second, unlike traditional features, scale-invariant features are not limited by the size, resolution, or content of the original images, and they help the network restore important features of the original images more accurately in multi-scale space; they are therefore used to design the loss function with which the network self-trains and updates its parameters. Finally, the difference between the final image reconstructed by the stacked denoising autoencoder and the original image is taken as the final saliency map. In the experiments, we test the performance of the proposed method in both saliency prediction and salient object segmentation. The results show that the proposed method predicts saliency well and outperforms the compared saliency prediction and salient object detection methods in salient object segmentation.
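The pipeline the abstract describes (corrupt the input, reconstruct it, read saliency off the reconstruction error) can be sketched with a single tied-weight denoising autoencoder layer. Everything below is an illustrative assumption, not the authors' network: the paper additionally pretrains with a deep belief network and builds its loss from scale-invariant features, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """One tied-weight layer of a stacked denoising autoencoder (toy sketch)."""

    def __init__(self, n_in, n_hidden, lr=0.1, corruption=0.3):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b_h = np.zeros(n_hidden)  # encoder bias
        self.b_o = np.zeros(n_in)      # decoder bias
        self.lr = lr
        self.corruption = corruption

    def reconstruct(self, x):
        h = sigmoid(x @ self.W + self.b_h)
        return sigmoid(h @ self.W.T + self.b_o)

    def train_step(self, x):
        # Masking noise: zero out a random fraction of the input, then
        # learn to rebuild the *clean* x from the corrupted version.
        x_noisy = x * (rng.random(x.shape) > self.corruption)
        h = sigmoid(x_noisy @ self.W + self.b_h)
        x_rec = sigmoid(h @ self.W.T + self.b_o)
        # Gradient of 0.5 * ||x_rec - x||^2 through the tied weights.
        d_o = (x_rec - x) * x_rec * (1.0 - x_rec)
        d_h = (d_o @ self.W) * h * (1.0 - h)
        self.W -= self.lr * (np.outer(d_o, h) + np.outer(x_noisy, d_h))
        self.b_o -= self.lr * d_o
        self.b_h -= self.lr * d_h

# Train only on dark "background" patches; a bright patch then reconstructs
# poorly, and that reconstruction error is read off as saliency.
ae = DenoisingAutoencoder(n_in=16, n_hidden=8)
background = rng.random((300, 16)) * 0.2
for _ in range(3):
    for patch in background:
        ae.train_step(patch)

salient_patch = np.full(16, 0.9)
err_bg = np.abs(ae.reconstruct(background[0]) - background[0]).mean()
err_sal = np.abs(ae.reconstruct(salient_patch) - salient_patch).mean()
```

Because the autoencoder only learns to rebuild the statistics it has seen, a patch that deviates from them reconstructs poorly, and that per-pixel error map is exactly the kind of saliency map the abstract describes.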
Author Ma, Yunpeng
Yu, Zhihong
Zhou, Yaqin
Xu, Chang
Yu, Dabing
Author_xml – sequence: 1
  givenname: Yunpeng
  orcidid: 0000-0001-6077-3097
  surname: Ma
  fullname: Ma, Yunpeng
  email: yunpengma_hhu@163.com
  organization: Key Laboratory of Sensor Networks and Environmental Sensing, Hohai University, Jiangsu Key Laboratory of Power Transmission and Distribution Equipment Technology, Hohai University, College of Internet of Things Engineering, Hohai University
– sequence: 2
  givenname: Zhihong
  surname: Yu
  fullname: Yu, Zhihong
  organization: Key Laboratory of Sensor Networks and Environmental Sensing, Hohai University, College of Internet of Things Engineering, Hohai University
– sequence: 3
  givenname: Yaqin
  surname: Zhou
  fullname: Zhou, Yaqin
  organization: Key Laboratory of Sensor Networks and Environmental Sensing, Hohai University, Jiangsu Key Laboratory of Power Transmission and Distribution Equipment Technology, Hohai University, College of Internet of Things Engineering, Hohai University
– sequence: 4
  givenname: Chang
  surname: Xu
  fullname: Xu, Chang
  organization: Key Laboratory of Sensor Networks and Environmental Sensing, Hohai University, College of Internet of Things Engineering, Hohai University
– sequence: 5
  givenname: Dabing
  surname: Yu
  fullname: Yu, Dabing
  organization: Key Laboratory of Sensor Networks and Environmental Sensing, Hohai University, College of Internet of Things Engineering, Hohai University
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI 10.1007/s11042-023-14525-8
Discipline Engineering
Computer Science
EISSN 1573-7721
EndPage 27472
GrantInformation_xml – fundername: national natural science foundation of china
  grantid: 62001156
  funderid: http://dx.doi.org/10.13039/501100001809
– fundername: Jiangsu Provincial Key Research and Development Program
  grantid: BE2019036; BE2020092; BE 2020649
  funderid: http://dx.doi.org/10.13039/501100013058
– fundername: Fundamental Research Funds for the Central Universities
  grantid: B220201037
  funderid: http://dx.doi.org/10.13039/501100012226
ISICitedReferencesCount 1
ISSN 1380-7501
IsPeerReviewed true
IsScholarly true
Issue 18
Keywords Visual saliency detection
Scale invariant feature
Saliency prediction
Saliency object segmentation
Stacked denoising autoencoder
Reconstruction network
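The "Scale invariant feature" keyword above refers to SIFT-style features, whose scale invariance comes from detecting structure at extrema of a difference-of-Gaussians (DoG) scale space. The sketch below is only a toy DoG construction (no keypoint refinement, orientation histograms, or descriptors), and all helper names are hypothetical:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian, truncated at 3 sigma and normalized to sum to 1.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur: filter every row, then every column.
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_pyramid(img, sigmas=(1.0, 1.6, 2.56, 4.1)):
    # Difference-of-Gaussians: adjacent blur levels subtracted. SIFT locates
    # keypoints at extrema of this stack across both space and scale.
    blurred = [blur(img, s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]

# Toy image: a single bright blob should yield its strongest DoG response
# near the blob center, independent of the blob's absolute scale.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
dogs = dog_pyramid(img)
peak = np.unravel_index(np.argmax(np.abs(dogs[0])), dogs[0].shape)
```

The geometric sigma spacing (ratio 1.6) mirrors the octave structure of scale-space detectors; responses for larger blobs simply migrate to coarser DoG levels, which is what makes the detected features scale invariant.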
Language English
ORCID 0000-0001-6077-3097
PageCount 22
PublicationDate 2023-07-01
PublicationPlace New York
PublicationSubtitle An International Journal
PublicationTitle Multimedia tools and applications
PublicationTitleAbbrev Multimed Tools Appl
PublicationYear 2023
Publisher Springer US
Springer Nature B.V
References HouXHarelJKochCImage signature: highlighting sparse salient regionsIEEE Trans Pattern Anal Mach Intell20123419420110.1109/TPAMI.2011.146
ErdemEErdemAVisual saliency estimation by nonlinearly integrating features using region covariancesJ Vis2013131110.1167/13.4.11
RahtuEKannalaJSaloMHeikkilaJSegmenting salient objects from images and videos. In: computer vision - ECCV 20102010Heraklion, Crete, GreeceP.V. Springer366379
ChangH-HShihTKChangCKTavanapongWCMAIR: content and mask-aware image retargetingMultimed Tools Appl201978217312175810.1007/s11042-019-7462-2
Goferman S, Zelnik-Manor L, Tal A (2010) Context-aware saliency detection. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, pp 2376–2383. https://doi.org/10.1109/CVPR.2010.5539929
Zhu W, Liang S, Wei Y, Sun J (2014) Saliency optimization from robust background detection. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp 2814–2821
He J, Feng J, Liu X, et al (2012) Mobile product search with bag of hash bits and boundary reranking. In: 2012 IEEE conference on computer vision and pattern recognition. pp. 3005–3012
Judd T, Ehinger K, Durand F, Torralba A (2009) Learning to predict where humans look. In: 2009 IEEE 12th International Conference on Computer Vision. IEEE, pp 2106–2113
QianXWangHZhaoYHouXHongRWangMTangYYImage location inference by multisaliency enhancementIEEE Trans Multimed20171981382110.1109/TMM.2016.2638207
MahadevanVVasconcelosNBiologically inspired object tracking using center-surround saliency mechanismsIEEE Trans Pattern Anal Mach Intell20133554155410.1109/TPAMI.2012.98
ZhouHYuanYShiCObject tracking using SIFT features and mean shiftComput Vis Image Underst200911334535210.1016/j.cviu.2008.08.006
YeLLiuZLiLShenLBaiCWangYSalient object segmentation via effective integration of saliency and ObjectnessIEEE Trans Multimed2017191742175610.1109/TMM.2017.2693022
HuangFQiJLuHZhangLRuanXSalient object detection via multiple instance learningIEEE Trans Image Process20172619111922363624010.1109/TIP.2017.26698781409.94235
YangCZhangLLuHGraph-regularized saliency detection with convex-Hull-based center priorIEEE Signal Process Lett20132063764010.1109/LSP.2013.2260737
MaCMiaoZZhangXLiMA saliency prior context model for real-time object trackingIEEE Trans Multimed2017192415242410.1109/TMM.2017.2694219
Tavakoli HR, Laaksonen J (2017) Bottom-up fixation prediction using unsupervised hierarchical models. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp 287–302
Ke Y, Sukthankar R (2004) PCA-SIFT: a more distinctive representation for local image descriptors. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. IEEE, pp 506–513
Borji A, Itti L (2012) Exploiting local and global patch rarities for saliency detection. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp 478–485. https://doi.org/10.1109/CVPR.2012.6247711
BorjiAChengMJiangHLiJSalient object detection: a benchmarkIEEE Trans Image Process20152457065722341785210.1109/TIP.2015.24878331408.94882
LoweDGDistinctive image features from scale-invariant keypointsInt J Comput Vis2004609111010.1023/B:VISI.0000029664.99615.94
Kuen J, Wang Z, Wang G (2016) Recurrent Attentional Networks for Saliency Detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3668–3677
XiaCQiFShiGBottom–up visual saliency estimation with deep autoencoder-based sparse reconstructionIEEE Trans Neural Netw Learn Syst20162712271240350723610.1109/TNNLS.2015.2512898
Zhao T, Wu X (2019) Pyramid feature attention network for saliency detection. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3080–3089
Margolin R, Tal A, Zelnik-Manor L (2013) What makes a patch distinct? In: 2013 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp 1139–1146
ChengHZhangJWuQAnPA computational model for stereoscopic visual saliency predictionIEEE Trans Multimed20192167868910.1109/TMM.2018.2864613
GaoYWangMTaoDJiRDaiQ3-D object retrieval and recognition with hypergraph analysisIEEE Trans Image Process20122142904303297241810.1109/TIP.2012.21995021373.94131
ZhaiYShahMShahPMVisual attention detection in video sequences using spatiotemporal cuesIn: Proceedings of the 14th annual ACM international conference on Multimedia2006Santa BarbaraACM Press81582410.1145/1180639.1180824
AytekinCPosseggerHMauthnerTKiranyazSBischofHGabboujMSpatiotemporal saliency estimation by spectral foreground detectionIEEE Trans Multimed201820829510.1109/TMM.2017.2713982
LiHLuHLinZShenXPriceBInner and inter label propagation: salient object detection in the wildIEEE Trans Image Process20152431763186335880710.1109/TIP.2015.24401741408.94371
VincentPLarochelleHLajoieIStacked Denoising autoencoders: learning useful representations in a deep network with a local Denoising criterionJ Mach Learn Res2010113371340827561881242.68256
Zhang P, Wang D, Lu H et al (2017) Amulet: aggregating multi-level convolutional features for salient object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, pp 202–211
XiaoSLiTWangJOptimization methods of video images processing for mobile object recognitionMultimed Tools Appl202079172451725510.1007/s11042-019-7423-9
XiaoXZhouYGongYRGB-‘D’ saliency detection with Pseudo depthIEEE Trans Image Process20192821262139390909810.1109/TIP.2018.2882156
Le RouxNBengioYRepresentational power of restricted Boltzmann machines and deep belief networksNeural Comput20082016311649241037010.1162/neco.2008.04-07-5101140.68057
FangSLiJTianYHuangTChenXLearning discriminative subspaces on random contrasts for image saliency analysisIEEE Trans Neural Netw Learn Syst2017281095110810.1109/TNNLS.2016.2522440
RiazSParkULeeS-WA photograph reconstruction by object retargeting for better compositionMultimed Tools Appl201675164391646010.1007/s11042-015-3037-z
Borji A, Frintrop S, Sihite DN, Itti L (2012) Adaptive object tracking by learning background context. In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. IEEE, pp 23–30. https://doi.org/10.1109/CVPRW.2012.6239191
Li X, Lu H, Zhang L et al (2013) Saliency detection via dense and sparse reconstruction. In: 2013 IEEE International Conference on Computer Vision. IEEE, pp 2976–2983
Rafiee, G., Woo, et al (2013) Region-of-interest extraction in low depth of field images using ensemble clustering and difference of Gaussian approaches. Pattern Recognit J Pattern Recognit Soc 46:2685–2699
AfshariradHSeyedinSACorrection to: salient object detection using the phase information and object modelMultimed Tools Appl2019781908110.1007/s11042-019-7431-9
DuanLWuCMiaoJVisual saliency detection by spatially weighted dissimilarityCVPR20112011473480
Liu N, Han J, Yang M-H (2018) PiCANet: learning pixel-wise contextual attention for saliency detection. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, pp 3089–3098
Wrede, B., Tscherepanow, et al (2012) A saliency map based on sampling an image into random rectangular regions of interest. Pattern Recognit J Pattern Recognit Soc 45:3114–3124
JerripothulaKRCaiJYuanJImage co-segmentation via saliency co-fusionIEEE Trans Multimed2016181896190910.1109/TMM.2016.2576283
BruceNTsotsosJAttention based on information maximizationJ Vis2010795010.1167/7.9.950
DuncanJHumphreysGWVisual search and stimulus similarityJ Am Soc Inf Sci Technol198996433458
FangYLinWLeeBBottom-up saliency detection model based on human visual sensitivity and amplitude SpectrumIEEE Trans Multimed20121418719810.1109/TMM.2011.2169775
Wang L, Lu H, Ruan X, Yang M-H (2015) Deep networks for saliency detection via local estimation and global search. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3183–3192
Abdel-Hakim AE, Farag AA (2006) CSIFT: a SIFT descriptor with color invariant characteristics. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2 (CVPR’06). IEEE, pp 1978–1983
YangXQianXXueYScalable Mobile image retrieval by exploring contextual saliencyIEEE Trans Image Process20152417091721332592710.1109/TIP.2015.24114331408.94764
Bruce NDB, Tsotsos JK (2005) Saliency based on information maximization. In: Advances in Neural Information Processing Systems. pp 155–162
Vinyals O, Toshev A, Bengio S, Erhan D (2015) Show and tell: A neural image caption generator. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3156–3164
Zhao R, Ouyang W, Li H, Wang X (2015) Saliency detection by multi-context deep learning. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 1265–1274
LiuFShenTLouSHanBDeep network saliency detection based on global model and local optimizationActa Opt Sin201737272280
YangSLinGJiangQLinWA dilated inception network for visual saliency predictionIEEE Trans Multimed2020222163217610.1109/TMM.2019.2947352
HintonGEOsinderoSTehYA fast learning algorithm for deep belief netsNeural Comput20061815271554222448510.1162/neco.2006.18.7.15271106.68094
AhlgrenPJarnevingBRousseauRRequirements for a cocitation similarity measure, with special reference to Pearson’s correlation coefficientJ Am Soc Inf Sci Technol20035455056010.1002/asi.10242
GaoYShiMTaoDXuCDatabase saliency for fast image retrievalIEEE Trans Multimed20151735936910.1109/TMM.2015.2389616
RubnerYTomasiCGuibasLJThe earth Mover’s distance as a metric for image retrievalInt J Comput Vis2000409912110.1023/A:10265439000541012.68705
JiaSBruceNDBEML-NET: An expandable multi-layer NETwork for saliency predictionImage Vis Comput20209510388710.1016/j.imavis.2020.103887
Kim K-S, Yoon Y-J, Kang M-C et al (2014) An improved GrabCut using a saliency map. In: 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE). IEEE, pp 317–318
RenZGaoSChiaL-TTsangIW-HRegion-based sa
14525_CR30
X Hou (14525_CR24) 2012; 34
14525_CR31
F Liu (14525_CR35) 2017; 37
X Yang (14525_CR56) 2015; 24
Y Zhai (14525_CR59) 2006
C Xia (14525_CR52) 2016; 27
14525_CR36
Y Rubner (14525_CR46) 2000; 40
H Afsharirad (14525_CR2) 2019; 78
14525_CR33
C Yang (14525_CR55) 2013; 20
KR Jerripothula (14525_CR26) 2016; 18
14525_CR29
L Duan (14525_CR13) 2011; 2011
H-H Chang (14525_CR10) 2019; 78
N Bruce (14525_CR9) 2010; 7
14525_CR42
S Jia (14525_CR27) 2020; 95
14525_CR40
14525_CR49
14525_CR47
Z Ren (14525_CR44) 2014; 24
L Ye (14525_CR58) 2017; 19
Y Fang (14525_CR16) 2012; 14
J Duncan (14525_CR14) 1989; 96
S Fang (14525_CR17) 2017; 28
DG Lowe (14525_CR37) 2004; 60
S Riaz (14525_CR45) 2016; 75
14525_CR50
V Mahadevan (14525_CR39) 2013; 35
14525_CR51
S Yang (14525_CR57) 2020; 22
S Xiao (14525_CR54) 2020; 79
C Aytekin (14525_CR4) 2018; 20
A Borji (14525_CR7) 2015; 24
E Rahtu (14525_CR43) 2010
P Vincent (14525_CR48) 2010; 11
X Qian (14525_CR41) 2017; 19
X Xiao (14525_CR53) 2019; 28
Y Gao (14525_CR18) 2012; 21
M Cheng (14525_CR11) 2011; 2011
GE Hinton (14525_CR23) 2006; 18
14525_CR20
14525_CR64
14525_CR61
14525_CR62
14525_CR60
14525_CR28
Y Gao (14525_CR19) 2015; 17
14525_CR21
14525_CR22
14525_CR1
E Erdem (14525_CR15) 2013; 13
14525_CR8
H Li (14525_CR34) 2015; 24
14525_CR5
14525_CR6
C Ma (14525_CR38) 2017; 19
P Ahlgren (14525_CR3) 2003; 54
N Le Roux (14525_CR32) 2008; 20
H Cheng (14525_CR12) 2019; 21
F Huang (14525_CR25) 2017; 26
H Zhou (14525_CR63) 2009; 113
References_xml – reference: JiaSBruceNDBEML-NET: An expandable multi-layer NETwork for saliency predictionImage Vis Comput20209510388710.1016/j.imavis.2020.103887
– reference: BruceNTsotsosJAttention based on information maximizationJ Vis2010795010.1167/7.9.950
– reference: JerripothulaKRCaiJYuanJImage co-segmentation via saliency co-fusionIEEE Trans Multimed2016181896190910.1109/TMM.2016.2576283
– reference: RahtuEKannalaJSaloMHeikkilaJSegmenting salient objects from images and videos. In: computer vision - ECCV 20102010Heraklion, Crete, GreeceP.V. Springer366379
– reference: He J, Feng J, Liu X, et al (2012) Mobile product search with bag of hash bits and boundary reranking. In: 2012 IEEE conference on computer vision and pattern recognition. pp. 3005–3012
– reference: Zhao R, Ouyang W, Li H, Wang X (2015) Saliency detection by multi-context deep learning. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 1265–1274
– reference: RenZGaoSChiaL-TTsangIW-HRegion-based saliency detection and its application in object recognitionIEEE Trans Circuits Syst Video Technol20142476977910.1109/TCSVT.2013.2280096
– reference: BorjiAChengMJiangHLiJSalient object detection: a benchmarkIEEE Trans Image Process20152457065722341785210.1109/TIP.2015.24878331408.94882
– reference: RiazSParkULeeS-WA photograph reconstruction by object retargeting for better compositionMultimed Tools Appl201675164391646010.1007/s11042-015-3037-z
– reference: GaoYWangMTaoDJiRDaiQ3-D object retrieval and recognition with hypergraph analysisIEEE Trans Image Process20122142904303297241810.1109/TIP.2012.21995021373.94131
– reference: Bruce NDB, Tsotsos JK (2005) Saliency based on information maximization. In: Advances in Neural Information Processing Systems. pp 155–162
– reference: HuangFQiJLuHZhangLRuanXSalient object detection via multiple instance learningIEEE Trans Image Process20172619111922363624010.1109/TIP.2017.26698781409.94235
– reference: ChengHZhangJWuQAnPA computational model for stereoscopic visual saliency predictionIEEE Trans Multimed20192167868910.1109/TMM.2018.2864613
– reference: FangYLinWLeeBBottom-up saliency detection model based on human visual sensitivity and amplitude SpectrumIEEE Trans Multimed20121418719810.1109/TMM.2011.2169775
– reference: Liu N, Han J, Yang M-H (2018) PiCANet: learning pixel-wise contextual attention for saliency detection. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, pp 3089–3098
– reference: YangCZhangLLuHGraph-regularized saliency detection with convex-Hull-based center priorIEEE Signal Process Lett20132063764010.1109/LSP.2013.2260737
– reference: Zhao T, Wu X (2019) Pyramid feature attention network for saliency detection. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3080–3089
– reference: AfshariradHSeyedinSACorrection to: salient object detection using the phase information and object modelMultimed Tools Appl2019781908110.1007/s11042-019-7431-9
– reference: AhlgrenPJarnevingBRousseauRRequirements for a cocitation similarity measure, with special reference to Pearson’s correlation coefficientJ Am Soc Inf Sci Technol20035455056010.1002/asi.10242
– reference: HintonGEOsinderoSTehYA fast learning algorithm for deep belief netsNeural Comput20061815271554222448510.1162/neco.2006.18.7.15271106.68094
– reference: Margolin R, Tal A, Zelnik-Manor L (2013) What makes a patch distinct? In: 2013 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp 1139–1146
– reference: Chang H-H, Shih TK, Chang CK, Tavanapong W (2019) CMAIR: content and mask-aware image retargeting. Multimed Tools Appl 78:21731–21758. https://doi.org/10.1007/s11042-019-7462-2
– reference: Gao Y, Shi M, Tao D, Xu C (2015) Database saliency for fast image retrieval. IEEE Trans Multimed 17:359–369. https://doi.org/10.1109/TMM.2015.2389616
– reference: Li H, Lu H, Lin Z, Shen X, Price B (2015) Inner and inter label propagation: salient object detection in the wild. IEEE Trans Image Process 24:3176–3186. https://doi.org/10.1109/TIP.2015.2440174
– reference: Xia C, Qi F, Shi G (2016) Bottom-up visual saliency estimation with deep autoencoder-based sparse reconstruction. IEEE Trans Neural Netw Learn Syst 27:1227–1240. https://doi.org/10.1109/TNNLS.2015.2512898
– reference: Zhang P, Wang D, Lu H et al (2017) Amulet: aggregating multi-level convolutional features for salient object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, pp 202–211
– reference: Liu F, Shen T, Lou S, Han B (2017) Deep network saliency detection based on global model and local optimization. Acta Opt Sin 37:272–280
– reference: Lowe DG (2004) Distinctive image features from scale-invariant keypoints. Int J Comput Vis 60:91–110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
– reference: Borji A, Itti L (2012) Exploiting local and global patch rarities for saliency detection. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp 478–485. https://doi.org/10.1109/CVPR.2012.6247711
– reference: Harel J, Koch C, Perona P (2007) Graph-based visual saliency. In: Advances in Neural Information Processing Systems 19. The MIT Press, pp 545–552. https://doi.org/10.7551/mitpress/7503.003.0073
– reference: Wang L, Lu H, Ruan X, Yang M-H (2015) Deep networks for saliency detection via local estimation and global search. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3183–3192
– reference: Ma C, Miao Z, Zhang X, Li M (2017) A saliency prior context model for real-time object tracking. IEEE Trans Multimed 19:2415–2424. https://doi.org/10.1109/TMM.2017.2694219
– reference: Erdem E, Erdem A (2013) Visual saliency estimation by nonlinearly integrating features using region covariances. J Vis 13:11. https://doi.org/10.1167/13.4.11
– reference: Ke Y, Sukthankar R (2004) PCA-SIFT: a more distinctive representation for local image descriptors. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. IEEE, pp 506–513
– reference: Duncan J, Humphreys GW (1989) Visual search and stimulus similarity. Psychol Rev 96:433–458
– reference: Borji A, Frintrop S, Sihite DN, Itti L (2012) Adaptive object tracking by learning background context. In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. IEEE, pp 23–30. https://doi.org/10.1109/CVPRW.2012.6239191
– reference: Rafiee G, Woo et al (2013) Region-of-interest extraction in low depth of field images using ensemble clustering and difference of Gaussian approaches. Pattern Recognit 46:2685–2699
– reference: Judd T, Ehinger K, Durand F, Torralba A (2009) Learning to predict where humans look. In: 2009 IEEE 12th International Conference on Computer Vision. IEEE, pp 2106–2113
– reference: Yang S, Lin G, Jiang Q, Lin W (2020) A dilated inception network for visual saliency prediction. IEEE Trans Multimed 22:2163–2176. https://doi.org/10.1109/TMM.2019.2947352
– reference: Zhu W, Liang S, Wei Y, Sun J (2014) Saliency optimization from robust background detection. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp 2814–2821
– reference: Kuen J, Wang Z, Wang G (2016) Recurrent Attentional Networks for Saliency Detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3668–3677
– reference: Fang S, Li J, Tian Y, Huang T, Chen X (2017) Learning discriminative subspaces on random contrasts for image saliency analysis. IEEE Trans Neural Netw Learn Syst 28:1095–1108. https://doi.org/10.1109/TNNLS.2016.2522440
– reference: Yang X, Qian X, Xue Y (2015) Scalable mobile image retrieval by exploring contextual saliency. IEEE Trans Image Process 24:1709–1721. https://doi.org/10.1109/TIP.2015.2411433
– reference: Aytekin C, Possegger H, Mauthner T, Kiranyaz S, Bischof H, Gabbouj M (2018) Spatiotemporal saliency estimation by spectral foreground detection. IEEE Trans Multimed 20:82–95. https://doi.org/10.1109/TMM.2017.2713982
– reference: Goferman S, Zelnik-Manor L, Tal A (2010) Context-aware saliency detection. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, pp 2376–2383. https://doi.org/10.1109/CVPR.2010.5539929
– reference: Duan L, Wu C, Miao J (2011) Visual saliency detection by spatially weighted dissimilarity. CVPR 2011:473–480
– reference: Kim K-S, Yoon Y-J, Kang M-C et al (2014) An improved GrabCut using a saliency map. In: 2014 IEEE 3rd Global Conference on Consumer Electronics (GCCE). IEEE, pp 317–318
– reference: Vinyals O, Toshev A, Bengio S, Erhan D (2015) Show and tell: A neural image caption generator. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3156–3164
– reference: Wrede B, Tscherepanow et al (2012) A saliency map based on sampling an image into random rectangular regions of interest. Pattern Recognit 45:3114–3124
– reference: Cheng M, Zhang G, Mitra NJ (2011) Global contrast based salient region detection. CVPR 2011:409–416
– reference: Rubner Y, Tomasi C, Guibas LJ (2000) The earth mover’s distance as a metric for image retrieval. Int J Comput Vis 40:99–121. https://doi.org/10.1023/A:1026543900054
– reference: Tavakoli HR, Laaksonen J (2017) Bottom-up fixation prediction using unsupervised hierarchical models. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pp 287–302
– reference: Hou X, Harel J, Koch C (2012) Image signature: highlighting sparse salient regions. IEEE Trans Pattern Anal Mach Intell 34:194–201. https://doi.org/10.1109/TPAMI.2011.146
– reference: Ye L, Liu Z, Li L, Shen L, Bai C, Wang Y (2017) Salient object segmentation via effective integration of saliency and objectness. IEEE Trans Multimed 19:1742–1756. https://doi.org/10.1109/TMM.2017.2693022
– reference: Vincent P, Larochelle H, Lajoie I (2010) Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res 11:3371–3408
– reference: Qian X, Wang H, Zhao Y, Hou X, Hong R, Wang M, Tang YY (2017) Image location inference by multisaliency enhancement. IEEE Trans Multimed 19:813–821. https://doi.org/10.1109/TMM.2016.2638207
– reference: Abdel-Hakim AE, Farag AA (2006) CSIFT: a SIFT descriptor with color invariant characteristics. In: 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2 (CVPR’06). IEEE, pp 1978–1983
– reference: Xiao S, Li T, Wang J (2020) Optimization methods of video images processing for mobile object recognition. Multimed Tools Appl 79:17245–17255. https://doi.org/10.1007/s11042-019-7423-9
– reference: Le Roux N, Bengio Y (2008) Representational power of restricted Boltzmann machines and deep belief networks. Neural Comput 20:1631–1649. https://doi.org/10.1162/neco.2008.04-07-510
– reference: Mahadevan V, Vasconcelos N (2013) Biologically inspired object tracking using center-surround saliency mechanisms. IEEE Trans Pattern Anal Mach Intell 35:541–554. https://doi.org/10.1109/TPAMI.2012.98
– reference: Li X, Lu H, Zhang L et al (2013) Saliency detection via dense and sparse reconstruction. In: 2013 IEEE International Conference on Computer Vision. IEEE, pp 2976–2983
– reference: Xiao X, Zhou Y, Gong Y (2019) RGB-‘D’ saliency detection with pseudo depth. IEEE Trans Image Process 28:2126–2139. https://doi.org/10.1109/TIP.2018.2882156
– reference: Zhai Y, Shah M (2006) Visual attention detection in video sequences using spatiotemporal cues. In: Proceedings of the 14th annual ACM international conference on Multimedia. ACM Press, Santa Barbara, pp 815–824. https://doi.org/10.1145/1180639.1180824
– reference: Zhou H, Yuan Y, Shi C (2009) Object tracking using SIFT features and mean shift. Comput Vis Image Underst 113:345–352. https://doi.org/10.1016/j.cviu.2008.08.006
StartPage 27451
SubjectTerms Accuracy
Belief networks
Bias
Computer Communication Networks
Computer Science
Data Structures and Information Theory
Deep learning
Image reconstruction
Image restoration
Image retrieval
Image segmentation
Invariants
Laboratories
Methods
Multimedia
Multimedia Information Systems
Neural networks
Noise reduction
Object recognition
Parameters
Salience
Special Purpose and Application-Based Systems
Title Visual saliency detection via invariant feature constrained stacked denoising autoencoder
URI https://link.springer.com/article/10.1007/s11042-023-14525-8
https://www.proquest.com/docview/2829986478
Volume 82