Federated Matrix Factorization: Algorithm Design and Application to Data Clustering
Saved in:
| Published in: | IEEE Transactions on Signal Processing, Vol. 70, pp. 1625-1640 |
|---|---|
| Main Authors: | Wang, Shuai; Chang, Tsung-Hui |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2022 |
| Subjects: | |
| ISSN: | 1053-587X, 1941-0476 |
| Online Access: | Full text |
| Abstract | Recent demands on data privacy have called for federated learning (FL) as a new distributed learning paradigm in massive and heterogeneous networks. Although many FL algorithms have been proposed, few of them consider the matrix factorization (MF) model, which is known to have a vast number of signal processing and machine learning applications. Since the MF problem involves two blocks of variables that are usually subject to constraints encoding a specific solution structure, new FL algorithm designs are required to achieve communication-efficient MF in heterogeneous data networks. In this paper, we address this challenge by proposing two new federated MF (FedMF) algorithms, namely FedMAvg and FedMGS, based on the model-averaging and gradient-sharing principles, respectively. Both FedMAvg and FedMGS adopt multiple local update steps per communication round to speed up convergence, and allow only a randomly sampled subset of clients to communicate with the server in order to reduce the communication cost. Convergence properties of the two algorithms are thoroughly analyzed; the analysis delineates the impacts of heterogeneous data distribution, the number of local updates, and partial client communication on algorithm performance, and guides the design of the proposed algorithms. Focusing on a data clustering task, extensive experimental results are presented to examine the practical performance of the proposed algorithms and to demonstrate their efficacy over existing distributed clustering algorithms. |
|---|---|
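The abstract above describes the setup only at a high level; the following is an illustrative, hedged sketch of what a federated MF problem and a model-averaging training loop can look like. The column-wise data split, the constraint sets, and the symbols X_i, W, H_i below are assumptions for illustration, not the paper's exact formulation.

```latex
% A plausible federated MF objective (assumed column-wise data partition):
% client i holds X_i in R^{d x n_i}; W in R^{d x k} is a shared factor,
% H_i in R^{k x n_i} is a client-local factor, and \mathcal{W}, \mathcal{H}_i
% are constraint sets (e.g., one-hot columns of H_i for K-means-style clustering).
\begin{equation*}
  \min_{W \in \mathcal{W},\ \{H_i \in \mathcal{H}_i\}_{i=1}^{m}}
  \ \sum_{i=1}^{m} \tfrac{1}{2}\,\lVert X_i - W H_i \rVert_F^2
\end{equation*}
```

In a model-averaging scheme in the spirit of FedMAvg, each sampled client alternates a few local updates of its H_i and of a local copy of W, and the server averages the returned copies; FedMGS would instead exchange gradient information and is not sketched here. Function and parameter names below (local_update, fedmavg_round, num_local_steps, sample_frac, lr) are hypothetical.

```python
# Minimal NumPy sketch of a model-averaging federated MF round
# (illustrative only; not the paper's FedMAvg pseudocode).
import numpy as np

def local_update(X_i, W, num_local_steps=5, lr=1e-3):
    """One client's work in a round: alternate assignment-style H_i updates
    and gradient steps on a local copy of W, then return that copy."""
    W_local = W.copy()
    k = W_local.shape[1]
    for _ in range(num_local_steps):
        # H_i with one-hot columns: each column of X_i picks the nearest
        # column of W_local (the K-means-as-MF view of clustering).
        dists = ((X_i[:, :, None] - W_local[:, None, :]) ** 2).sum(axis=0)
        H_i = np.eye(k)[dists.argmin(axis=1)].T          # shape (k, n_i)
        grad_W = (W_local @ H_i - X_i) @ H_i.T           # gradient w.r.t. W
        W_local -= lr * grad_W
    return W_local, X_i.shape[1]

def fedmavg_round(client_data, W, sample_frac=0.3, rng=None):
    """Server side of one communication round: sample a subset of clients,
    collect their locally updated copies of W, and average them weighted
    by local data size."""
    rng = rng or np.random.default_rng()
    m = len(client_data)
    chosen = rng.choice(m, size=max(1, int(sample_frac * m)), replace=False)
    updates, sizes = zip(*(local_update(client_data[i], W) for i in chosen))
    weights = np.asarray(sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * W_i for w, W_i in zip(weights, updates))

# Toy usage: 10 clients, each holding 50 points in R^20, k = 3 "centroids".
rng = np.random.default_rng(1)
client_data = [rng.standard_normal((20, 50)) for _ in range(10)]
W = rng.standard_normal((20, 3))
for _ in range(20):
    W = fedmavg_round(client_data, W, rng=rng)
```

Partial client sampling shows up as sample_frac, and the multiple local updates per round as num_local_steps; both correspond to the design choices the abstract credits with reducing communication cost and speeding up convergence.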
| Author | Wang, Shuai; Chang, Tsung-Hui |
| Author Details | 1. Wang, Shuai (ORCID: 0000-0001-6457-9478; shuaiwang@link.cuhk.edu.cn), Shenzhen Research Institute of Big Data and School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China. 2. Chang, Tsung-Hui (ORCID: 0000-0003-1349-2764; tsunghui.chang@ieee.org), School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China. |
| CODEN | ITPRED |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
| DOI | 10.1109/TSP.2022.3151505 |
| Discipline | Engineering |
| EISSN | 1941-0476 |
| EndPage | 1640 |
| Genre | orig-research |
| Grant Information | Shenzhen Fundamental Research and Discipline Layout Project / Shenzhen Fundamental Research Fund (JCYJ20190813171003723); Guangdong Provincial Key Laboratory of Big Data Computing; National Natural Science Foundation of China (NSFC, 61731018) |
| ISICitedReferencesCount | 16 |
| ISSN | 1053-587X |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0003-1349-2764 0000-0001-6457-9478 |
| PageCount | 16 |
| PublicationDate | 2022 |
| PublicationPlace | New York |
| PublicationTitle | IEEE Transactions on Signal Processing |
| PublicationTitleAbbrev | TSP |
| PublicationYear | 2022 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1625 |
| SubjectTerms | Algorithms; Clustering; Clustering algorithms; Communication; Convergence; Cost analysis; Data models; Distributed databases; Factorization; Federated learning; gradient sharing; Machine learning; matrix factorization; model averaging; Partitioning algorithms; Servers; Signal processing; Signal processing algorithms |
| Title | Federated Matrix Factorization: Algorithm Design and Application to Data Clustering |
| URI | https://ieeexplore.ieee.org/document/9713943 https://www.proquest.com/docview/2647425596 |
| Volume | 70 |