An efficient method for autoencoder‐based collaborative filtering


Published in: Concurrency and Computation, Volume 31, Issue 23
Main authors: Wang, Yi‐Lei; Tang, Wen‐Zhe; Yang, Xian‐Jun; Wu, Ying‐Jie; Chen, Fu‐Ji
Format: Journal Article
Language: English
Published: Hoboken: Wiley Subscription Services, Inc, 10 December 2019
ISSN: 1532-0626, 1532-0634
Abstract Collaborative filtering (CF) is a widely used technique in recommender systems. With the rapid development of deep learning, neural-network-based CF models have gained great attention in recent years, especially autoencoder-based CF models. Although the autoencoder-based CF model is faster than some existing neural-network-based models (e.g., deep restricted Boltzmann machine-based CF), it is still impractical for extremely large-scale data. In this paper, we empirically verify that most non-zero entries of the input matrix are concentrated in a few rows. Exploiting this sparse characteristic, we propose a new method for training autoencoder-based CF. We run experiments on two popular datasets, MovieLens 1M and MovieLens 10M. Experimental results show that our algorithm yields orders-of-magnitude speed-up in training (stacked) autoencoder-based CF models while achieving performance comparable to existing state-of-the-art models.
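The row-concentration claim can be illustrated on synthetic data. This is a minimal sketch: the matrix sizes, the Zipf model of per-user activity, and all names below are illustrative assumptions, not figures from the paper, which measures this on MovieLens 1M/10M.

```python
import numpy as np

# Synthetic user-item rating matrix with heavy-tailed user activity,
# mimicking the observation that a few rows hold most non-zero entries.
rng = np.random.default_rng(42)
n_users, n_items = 1000, 200

counts = np.minimum(rng.zipf(1.5, size=n_users), n_items)  # ratings per user
R = np.zeros((n_users, n_items))
for u, k in enumerate(counts):
    cols = rng.choice(n_items, size=k, replace=False)
    R[u, cols] = rng.integers(1, 6, size=k)                # ratings 1..5

nnz_per_row = (R != 0).sum(axis=1)
order = np.argsort(nnz_per_row)[::-1]                      # busiest users first
cum_share = np.cumsum(nnz_per_row[order]) / nnz_per_row.sum()
top10_share = cum_share[n_users // 10 - 1]                 # share held by top 10% of rows
print(f"top 10% of rows hold {top10_share:.0%} of the non-zero entries")
```

Under a heavy-tailed activity model like this, the busiest 10% of rows hold a disproportionate share of the non-zero entries, which is the kind of concentration the paper's training method exploits.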
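The masked-reconstruction idea behind autoencoder-based CF can be sketched as follows. Everything here (layer sizes, learning rate, the toy data, the plain SGD update) is an illustrative assumption in the AutoRec style, not the paper's actual algorithm or configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, hidden = 100, 40, 16

# Each user rates 8 random items; ratings scaled to (0, 1], 0 = unobserved.
R = np.zeros((n_users, n_items))
for u in range(n_users):
    cols = rng.choice(n_items, size=8, replace=False)
    R[u, cols] = rng.integers(1, 6, size=8) / 5.0

W_enc = rng.normal(0.0, 0.1, (n_items, hidden))
W_dec = rng.normal(0.0, 0.1, (hidden, n_items))

def train_step(r, lr=0.02):
    """One SGD step on a single user's rating row.

    The reconstruction error is masked to observed entries, so only the
    handful of rated items contributes to the gradient -- the sparsity
    that makes per-row training cheap.
    """
    global W_enc, W_dec
    mask = r != 0
    h = np.tanh(r @ W_enc)                  # encode
    out = h @ W_dec                         # linear decode
    err = (out - r) * mask                  # unobserved entries contribute nothing
    g_dec = np.outer(h, err)                # gradient for decoder weights
    g_h = (err @ W_dec.T) * (1.0 - h**2)    # backprop through tanh
    g_enc = np.outer(r, g_h)                # gradient for encoder weights
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
    return float((err**2).sum() / mask.sum())

# Average masked reconstruction loss per epoch; it should fall as training proceeds.
losses = [float(np.mean([train_step(R[u]) for u in range(n_users)]))
          for _ in range(15)]
```

Because the error is masked, a row with only a few ratings produces a correspondingly sparse gradient, which is what allows sparse-aware implementations to skip most of the work per row.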
Author Chen, Fu‐Ji
Wang, Yi‐Lei
Yang, Xian‐Jun
Wu, Ying‐Jie
Tang, Wen‐Zhe
Author_xml – sequence: 1
  givenname: Yi‐Lei
  surname: Wang
  fullname: Wang, Yi‐Lei
  organization: Fuzhou University
– sequence: 2
  givenname: Wen‐Zhe
  surname: Tang
  fullname: Tang, Wen‐Zhe
  organization: Fuzhou University
– sequence: 3
  givenname: Xian‐Jun
  surname: Yang
  fullname: Yang, Xian‐Jun
  organization: Fuzhou University
– sequence: 4
  givenname: Ying‐Jie
  orcidid: 0000-0002-5201-3159
  surname: Wu
  fullname: Wu, Ying‐Jie
  email: yjwu@fzu.edu.cn
  organization: Fuzhou University
– sequence: 5
  givenname: Fu‐Ji
  surname: Chen
  fullname: Chen, Fu‐Ji
  organization: Fuzhou University
BookMark eNp1kM1KAzEQx4NUsK2Cj7DgxcvWfOwm3WNZ6gcU9NB7yGYnmrJNapIqvfkIPqNP4taKB9HTzMDvPzP8RmjgvAOEzgmeEIzpld7ApCixOEJDUjKaY86KwU9P-QkaxbjCmBDMyBDVM5eBMVZbcClbQ3rybWZ8yNQ2eXDatxA-3t4bFaHNtO861figkn2BzNguQbDu8RQdG9VFOPuuY7S8ni_r23xxf3NXzxa5phUTOZC2LBgwhgUxtNGK8X7mRBctFkbQqiJGQMFZ1bCioi0IxShVXE2xqqaMjdHFYe0m-OctxCRXfhtcf1FSRqjgJWe0pyYHSgcfYwAjtU39w96loGwnCZZ7T7L3JPee-sDlr8Am2LUKu7_Q_IC-2g52_3Kyfph_8Z__DXhW
CitedBy_id crossref_primary_10_1177_1550147720923529
crossref_primary_10_3390_electronics9030501
crossref_primary_10_1109_ACCESS_2020_3002803
crossref_primary_10_1002_cpe_5425
crossref_primary_10_1080_1206212X_2022_2097769
crossref_primary_10_1177_1550147721992881
Cites_doi 10.21437/Interspeech.2010-487
10.1145/371920.372071
10.1145/3097983.3098077
10.1007/978-3-319-12643-2_35
10.1109/MC.2009.263
10.1561/2200000006
10.1145/2783258.2783273
10.1145/2988450.2988456
10.18653/v1/D15-1166
10.1145/1273496.1273596
10.1609/aaai.v29i1.9548
10.1145/2740908.2742726
10.1109/MIC.2003.1167344
10.1016/j.knosys.2013.06.010
10.1162/089976602760128018
10.1016/j.eswa.2015.01.001
ContentType Journal Article
Copyright 2018 John Wiley & Sons, Ltd.
2019 John Wiley & Sons, Ltd.
Copyright_xml – notice: 2018 John Wiley & Sons, Ltd.
– notice: 2019 John Wiley & Sons, Ltd.
DBID AAYXX
CITATION
7SC
8FD
JQ2
L7M
L~C
L~D
DOI 10.1002/cpe.4507
DatabaseName CrossRef
Computer and Information Systems Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Computer and Information Systems Abstracts
Technology Research Database
Computer and Information Systems Abstracts – Academic
Advanced Technologies Database with Aerospace
ProQuest Computer Science Collection
Computer and Information Systems Abstracts Professional
DatabaseTitleList
CrossRef
Computer and Information Systems Abstracts
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1532-0634
EndPage n/a
ExternalDocumentID 10_1002_cpe_4507
CPE4507
Genre article
GrantInformation_xml – fundername: National Natural Science Foundation of Fujian Province
  funderid: 2014J01230; 2017J01754
– fundername: National Natural Science Foundation of China
  funderid: 61300026
GroupedDBID .3N
.DC
.GA
05W
0R~
10A
1L6
1OC
33P
3SF
3WU
4.4
50Y
50Z
51W
51X
52M
52N
52O
52P
52S
52T
52U
52W
52X
5GY
5VS
66C
702
7PT
8-0
8-1
8-3
8-4
8-5
8UM
930
A03
AAESR
AAEVG
AAHHS
AAHQN
AAMNL
AANLZ
AAONW
AASGY
AAXRX
AAYCA
AAZKR
ABCQN
ABCUV
ABEML
ABIJN
ACAHQ
ACCFJ
ACCZN
ACPOU
ACSCC
ACXBN
ACXQS
ADBBV
ADEOM
ADIZJ
ADKYN
ADMGS
ADOZA
ADXAS
ADZMN
ADZOD
AEEZP
AEIGN
AEIMD
AEQDE
AEUQT
AEUYR
AFBPY
AFFPM
AFGKR
AFPWT
AFWVQ
AHBTC
AITYG
AIURR
AIWBW
AJBDE
AJXKR
ALMA_UNASSIGNED_HOLDINGS
ALUQN
ALVPJ
AMBMR
AMYDB
ATUGU
AUFTA
AZBYB
BAFTC
BDRZF
BFHJK
BHBCM
BMNLL
BROTX
BRXPI
BY8
CS3
D-E
D-F
DCZOG
DPXWK
DR2
DRFUL
DRSTM
EBS
F00
F01
F04
F5P
G-S
G.N
GNP
GODZA
HGLYW
HHY
HZ~
IX1
JPC
KQQ
LATKE
LAW
LC2
LC3
LEEKS
LH4
LITHE
LOXES
LP6
LP7
LUTES
LYRES
MEWTI
MK4
MRFUL
MRSTM
MSFUL
MSSTM
MXFUL
MXSTM
N04
N05
N9A
O66
O9-
OIG
P2W
P2X
P4D
PQQKQ
Q.N
Q11
QB0
QRW
R.K
ROL
RWI
RX1
SUPJJ
TN5
UB1
V2E
W8V
W99
WBKPD
WIH
WIK
WOHZO
WQJ
WRC
WXSBR
WYISQ
WZISG
XG1
XV2
~IA
~WT
AAYXX
ADMLS
AEYWJ
AGHNM
AGYGG
CITATION
O8X
7SC
8FD
JQ2
L7M
L~C
L~D
ID FETCH-LOGICAL-c2937-e1d543e33071f2bca3654361c4d07f72991f7e4639b3492de7a322a6a80a9833
IEDL.DBID DRFUL
ISICitedReferencesCount 4
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=000496467600021&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 1532-0626
IngestDate Fri Jul 25 01:59:20 EDT 2025
Tue Nov 18 22:33:10 EST 2025
Sat Nov 29 01:41:19 EST 2025
Wed Jan 22 16:38:02 EST 2025
IsPeerReviewed true
IsScholarly true
Issue 23
Language English
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c2937-e1d543e33071f2bca3654361c4d07f72991f7e4639b3492de7a322a6a80a9833
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0002-5201-3159
PQID 2312765632
PQPubID 2045170
PageCount 1
ParticipantIDs proquest_journals_2312765632
crossref_citationtrail_10_1002_cpe_4507
crossref_primary_10_1002_cpe_4507
wiley_primary_10_1002_cpe_4507_CPE4507
PublicationCentury 2000
PublicationDate 10 December 2019
PublicationDateYYYYMMDD 2019-12-10
PublicationDate_xml – month: 12
  year: 2019
  text: 10 December 2019
  day: 10
PublicationDecade 2010
PublicationPlace Hoboken
PublicationPlace_xml – name: Hoboken
PublicationTitle Concurrency and computation
PublicationYear 2019
Publisher Wiley Subscription Services, Inc
Publisher_xml – name: Wiley Subscription Services, Inc
References 2002; 14
2009; 42
2001
2011
2013; 51
2015; 42
2003; 7
2008
2014; 15
2007
2014; 58
2017
2016
2015
2014
2013
2009; 2
2010; 9
e_1_2_7_6_1
e_1_2_7_5_1
e_1_2_7_4_1
Glorot X (e_1_2_7_25_1) 2010; 9
e_1_2_7_3_1
e_1_2_7_9_1
e_1_2_7_8_1
e_1_2_7_7_1
e_1_2_7_19_1
e_1_2_7_18_1
e_1_2_7_17_1
e_1_2_7_16_1
e_1_2_7_2_1
e_1_2_7_15_1
e_1_2_7_14_1
e_1_2_7_13_1
e_1_2_7_12_1
e_1_2_7_11_1
e_1_2_7_10_1
e_1_2_7_26_1
e_1_2_7_27_1
e_1_2_7_28_1
Pascanu R (e_1_2_7_24_1) 2014; 58
e_1_2_7_23_1
e_1_2_7_22_1
e_1_2_7_20_1
Srivastava N (e_1_2_7_21_1) 2014; 15
References_xml – year: 2011
– volume: 15
  start-page: 1929
  issue: 1
  year: 2014
  end-page: 58
  article-title: Dropout: a simple way to prevent neural networks from overfitting
  publication-title: J Mach Learn Res
– volume: 7
  start-page: 76
  issue: 1
  year: 2003
  end-page: 80
  article-title: Amazon.com recommendations: item‐to‐item collaborative filtering
  publication-title: IEEE Internet Comput
– volume: 9
  start-page: 249
  year: 2010
  end-page: 56
  article-title: Understanding the difficulty of training deep feedforward neural networks
  publication-title: J Mach Learn Res
– volume: 2
  start-page: 1
  issue: 1
  year: 2009
  end-page: 127
  article-title: Learning deep architectures for AI
  publication-title: Found Trends Mach Learn
– volume: 14
  start-page: 1771
  issue: 8
  year: 2002
  end-page: 880
  article-title: Training products of experts by minimizing contrastive divergence
  publication-title: Neural Comput
– year: 2001
– year: 2008
– year: 2007
– volume: 42
  start-page: 4022
  issue: 8
  year: 2015
  end-page: 8
  article-title: Reversed CF: a fast collaborative filtering algorithm using a k‐nearest neighbor graph
  publication-title: Expert Syst Appl
– volume: 42
  start-page: 30
  issue: 8
  year: 2009
  end-page: 37
  article-title: Matrix factorization techniques for recommender systems
  publication-title: Computer
– volume: 51
  start-page: 27
  issue: 19
  year: 2013
  end-page: 34
  article-title: A similarity metric designed to speed up, using hardware, the recommender systems k‐nearest neighbors algorithm
  publication-title: Knowl‐Based Syst
– year: 2017
– year: 2016
– year: 2014
– year: 2015
– year: 2013
– volume: 58
  start-page: 1823
  issue: 6
  year: 2014
  end-page: 32
  article-title: On the number of response regions of deep feed forward networks with piece‐wise linear activations
  publication-title: Arthritis Rheum
– ident: e_1_2_7_20_1
  doi: 10.21437/Interspeech.2010-487
– volume: 15
  start-page: 1929
  issue: 1
  year: 2014
  ident: e_1_2_7_21_1
  article-title: Dropout: a simple way to prevent neural networks from overfitting
  publication-title: J Mach Learn Res
– ident: e_1_2_7_3_1
  doi: 10.1145/371920.372071
– ident: e_1_2_7_16_1
  doi: 10.1145/3097983.3098077
– ident: e_1_2_7_22_1
– ident: e_1_2_7_28_1
– ident: e_1_2_7_17_1
– ident: e_1_2_7_12_1
  doi: 10.1007/978-3-319-12643-2_35
– ident: e_1_2_7_4_1
  doi: 10.1109/MC.2009.263
– ident: e_1_2_7_23_1
  doi: 10.1561/2200000006
– ident: e_1_2_7_5_1
– ident: e_1_2_7_10_1
– ident: e_1_2_7_15_1
  doi: 10.1145/2783258.2783273
– volume: 9
  start-page: 249
  year: 2010
  ident: e_1_2_7_25_1
  article-title: Understanding the difficulty of training deep feedforward neural networks
  publication-title: J Mach Learn Res
– ident: e_1_2_7_13_1
– ident: e_1_2_7_11_1
  doi: 10.1145/2988450.2988456
– ident: e_1_2_7_27_1
  doi: 10.18653/v1/D15-1166
– ident: e_1_2_7_7_1
  doi: 10.1145/1273496.1273596
– ident: e_1_2_7_14_1
  doi: 10.1609/aaai.v29i1.9548
– ident: e_1_2_7_9_1
  doi: 10.1145/2740908.2742726
– ident: e_1_2_7_2_1
  doi: 10.1109/MIC.2003.1167344
– ident: e_1_2_7_6_1
– ident: e_1_2_7_18_1
  doi: 10.1016/j.knosys.2013.06.010
– ident: e_1_2_7_26_1
– ident: e_1_2_7_8_1
  doi: 10.1162/089976602760128018
– ident: e_1_2_7_19_1
  doi: 10.1016/j.eswa.2015.01.001
– volume: 58
  start-page: 1823
  issue: 6
  year: 2014
  ident: e_1_2_7_24_1
  article-title: On the number of response regions of deep feed forward networks with piece‐wise linear activations
  publication-title: Arthritis Rheum
SSID ssj0011031
Score 2.2550395
SourceID proquest
crossref
wiley
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
SubjectTerms Algorithms
autoencoder
Collaboration
collaborative filtering
deep learning
Filtration
Machine learning
Neural networks
recommender system
Recommender systems
Training
Title An efficient method for autoencoder‐based collaborative filtering
URI https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcpe.4507
https://www.proquest.com/docview/2312765632
Volume 31
WOSCitedRecordID wos000496467600021&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVWIB
  databaseName: Wiley Online Library Full Collection 2020
  customDbUrl:
  eissn: 1532-0634
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0011031
  issn: 1532-0626
  databaseCode: DRFUL
  dateStart: 20010101
  isFulltext: true
  titleUrlDefault: https://onlinelibrary.wiley.com
  providerName: Wiley-Blackwell
linkProvider Wiley-Blackwell
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=An+efficient+method+for+autoencoder%E2%80%90based+collaborative+filtering&rft.jtitle=Concurrency+and+computation&rft.au=Yi%E2%80%90Lei+Wang&rft.au=Wen%E2%80%90Zhe+Tang&rft.au=Xian%E2%80%90Jun+Yang&rft.au=Ying%E2%80%90Jie+Wu&rft.date=2019-12-10&rft.pub=Wiley+Subscription+Services%2C+Inc&rft.issn=1532-0626&rft.eissn=1532-0634&rft.volume=31&rft.issue=23&rft_id=info:doi/10.1002%2Fcpe.4507&rft.externalDBID=NO_FULL_TEXT