FSpGEMM: A Framework for Accelerating Sparse General Matrix-Matrix Multiplication Using Gustavson's Algorithm on FPGAs

Bibliographic Details
Published in: IEEE transactions on very large scale integration (VLSI) systems, Vol. 32, No. 4, pp. 633-644
Main authors: Tavakoli, Erfan Bank; Riera, Michael; Quraishi, Masudul Hassan; Ren, Fengbo
Format: Journal Article
Language: English
Published: New York: IEEE, 01.04.2024
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
ISSN: 1063-8210, 1557-9999
Online access: Full text
Abstract General sparse matrix-matrix multiplication (SpGEMM) is integral to many high-performance computing (HPC) and machine learning applications. However, prior field-programmable gate array (FPGA)-based SpGEMM accelerators either use the inner product algorithm, with wasted and costly operations, or Gustavson's algorithm with a cache-based hardware architecture that suffers from long-latency cache miss penalties and is limited to embedded devices. In this work, we propose the framework for accelerating SpGEMM (FSpGEMM), an OpenCL-based SpGEMM framework for accelerating Gustavson's algorithm that includes an FPGA kernel implementing a throughput-optimized and scalable hardware architecture compatible with high-bandwidth memory (HBM) or traditional DDR-based memory. In addition, to address the irregular memory access patterns incurred by Gustavson's algorithm, we propose a new buffering scheme tailored to Gustavson's algorithm, enabled by a new compressed sparse vector (CSV) format for representing sparse matrices, and a row reordering technique as a preprocessing step to improve data reuse and, consequently, resource utilization. The proposed framework includes a host program implementing preprocessing functions for reordering input matrices and storing them in the proposed CSV format for further use. We implemented FSpGEMM using the Intel FPGA SDK for OpenCL and experimented with a benchmark of sparse matrices selected from the SuiteSparse Matrix Collection on a Bittware 520N-MX FPGA board. The results show that the reordering technique improves performance by 20.3% on average compared with the baseline. Finally, FSpGEMM outperforms the state-of-the-art (SOTA) FPGA implementation by an average of 2.23× in terms of execution cycles with the same benchmark and memory system configuration for a fair comparison.
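For context on the row-wise formulation named in the abstract, the following is a minimal sketch of Gustavson's algorithm for C = A x B, assuming both inputs are supplied as generic CSR (compressed sparse row) arrays. It is a plain reference implementation for illustration only: it is not the paper's FPGA kernel, its CSV storage format, or its reordering step, and the function and variable names are hypothetical.

def gustavson_spgemm(A_indptr, A_indices, A_data,
                     B_indptr, B_indices, B_data,
                     num_rows):
    """Row-wise SpGEMM (Gustavson): row i of C is the sum, over the nonzeros
    a_ik in row i of A, of a_ik times row k of B, merged in a sparse accumulator."""
    C_indptr, C_indices, C_data = [0], [], []
    for i in range(num_rows):
        acc = {}  # sparse accumulator: column index -> partial sum for row i of C
        for p in range(A_indptr[i], A_indptr[i + 1]):
            k, a_ik = A_indices[p], A_data[p]           # nonzero a_ik selects row k of B
            for q in range(B_indptr[k], B_indptr[k + 1]):
                j = B_indices[q]
                acc[j] = acc.get(j, 0.0) + a_ik * B_data[q]
        for j in sorted(acc):                           # emit row i of C in CSR order
            C_indices.append(j)
            C_data.append(acc[j])
        C_indptr.append(len(C_indices))
    return C_indptr, C_indices, C_data

# Example: A = [[1, 2], [0, 3]], B = [[0, 4], [5, 0]]  ->  C = [[10, 4], [15, 0]]
# gustavson_spgemm([0, 2, 3], [0, 1, 1], [1.0, 2.0, 3.0],
#                  [0, 1, 2], [1, 0], [4.0, 5.0], num_rows=2)
# returns ([0, 2, 3], [0, 1, 0], [10.0, 4.0, 15.0])

The sketch also shows where the irregular accesses mentioned in the abstract arise: every nonzero of a row of A triggers a read of an entire row of B, so rows of A that reference similar sets of B rows reuse the same data, which is presumably what the proposed buffering scheme and row reordering exploit.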
Author Quraishi, Masudul Hassan
Riera, Michael
Tavakoli, Erfan Bank
Ren, Fengbo
Author_xml – sequence: 1
  givenname: Erfan Bank
  orcidid: 0000-0002-3248-9301
  surname: Tavakoli
  fullname: Tavakoli, Erfan Bank
  organization: Arizona State University, Tempe, AZ, USA
– sequence: 2
  givenname: Michael
  surname: Riera
  fullname: Riera, Michael
  organization: Arizona State University, Tempe, AZ, USA
– sequence: 3
  givenname: Masudul Hassan
  orcidid: 0000-0001-6939-1669
  surname: Quraishi
  fullname: Quraishi, Masudul Hassan
  organization: Arizona State University, Tempe, AZ, USA
– sequence: 4
  givenname: Fengbo
  orcidid: 0000-0002-6509-8753
  surname: Ren
  fullname: Ren, Fengbo
  organization: Arizona State University, Tempe, AZ, USA
CODEN IEVSE9
CitedBy_id crossref_primary_10_1109_TVLSI_2024_3497166
crossref_primary_10_1109_TVLSI_2025_3558895
crossref_primary_10_1145_3687480
crossref_primary_10_3390_mi16010101
Cites_doi 10.1109/HPCC.2008.96
10.1109/HPCA.2018.00067
10.1145/3332466.3374546
10.1137/1.9780898719505
10.1145/3582016.3582047
10.1145/3445814.3446702
10.1016/0196-6774(89)90005-9
10.1007/s11265-022-01821-z
10.1002/9781119079231
10.1109/HPCA47549.2020.00030
10.1145/3503221.3508431
10.1145/3293883.3295712
10.1137/S0895479894278952
10.1109/TCAD.2023.3281719
10.1137/130948811
10.1109/ICPP.2017.19
10.1145/3140659.3080254
10.1109/IPDPS.2014.47
10.1109/FCCM48280.2020.00028
10.1109/IPDPS.2008.4536313
10.1145/3208040.3208062
10.1145/3293883.3295701
10.1109/ICPR.2018.8545462
10.1109/ASAP.2017.7995254
10.1007/978-1-4615-8675-3_14
10.1145/3332466.3374521
10.1145/3458744.3473352
10.1109/MM.2019.2930057
10.1016/j.micpro.2011.05.005
10.1002/cta.796
10.1109/IPDPS.2014.125
10.1145/2049662.2049663
10.1109/JSSC.2019.2960480
10.1145/1583991.1584053
10.1145/331532.331562
10.1109/CIT.2010.208
10.23919/DATE56975.2023.10136958
10.1109/MICRO50266.2020.00068
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
DBID 97E
RIA
RIE
AAYXX
CITATION
7SP
8FD
L7M
DOI 10.1109/TVLSI.2024.3355499
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Electronics & Communications Abstracts
Technology Research Database
Advanced Technologies Database with Aerospace
DatabaseTitle CrossRef
Technology Research Database
Advanced Technologies Database with Aerospace
Electronics & Communications Abstracts
DatabaseTitleList Technology Research Database

Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1557-9999
EndPage 644
ExternalDocumentID 10_1109_TVLSI_2024_3355499
10419187
Genre orig-research
GrantInformation_xml – fundername: NSF
  grantid: IIS/CPS-1652038
IEDL.DBID RIE
ISICitedReferencesCount 5
ISSN 1063-8210
IngestDate Sun Nov 30 05:23:11 EST 2025
Sat Nov 29 03:36:21 EST 2025
Tue Nov 18 22:11:35 EST 2025
Wed Aug 27 02:17:10 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0001-6939-1669
0000-0002-6509-8753
0000-0002-3248-9301
PQID 2973251059
PQPubID 85424
PageCount 12
ParticipantIDs crossref_primary_10_1109_TVLSI_2024_3355499
proquest_journals_2973251059
crossref_citationtrail_10_1109_TVLSI_2024_3355499
ieee_primary_10419187
PublicationCentury 2000
PublicationDate 2024-04-01
PublicationDateYYYYMMDD 2024-04-01
PublicationDate_xml – month: 04
  year: 2024
  text: 2024-04-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on very large scale integration (VLSI) systems
PublicationTitleAbbrev TVLSI
PublicationYear 2024
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref35
Anzt (ref42)
ref12
ref34
ref15
ref14
ref36
ref31
ref30
ref11
ref33
ref10
ref32
ref2
ref39
ref38
ref19
ref18
Naumov (ref16)
ref24
Briggs (ref1) 2000
ref23
ref45
ref26
ref25
ref20
ref41
ref22
ref21
ref43
(ref44) 2022
ref28
ref27
ref29
ref8
Dalton (ref17) 2014
ref9
ref4
ref3
Biookaghazadeh (ref5)
ref6
Karypis (ref37) 1998
ref40
Jamro (ref7) 2014; 33
References_xml – volume-title: Intel FPGA SDK for OpenCL Pro Edition: Programming Guide
  year: 2022
  ident: ref44
– ident: ref35
  doi: 10.1109/HPCC.2008.96
– ident: ref12
  doi: 10.1109/HPCA.2018.00067
– ident: ref29
  doi: 10.1145/3332466.3374546
– volume-title: A Multigrid Tutorial
  year: 2000
  ident: ref1
  doi: 10.1137/1.9780898719505
– ident: ref27
  doi: 10.1145/3582016.3582047
– ident: ref15
  doi: 10.1145/3445814.3446702
– start-page: 1
  volume-title: Proc. USENIX Workshop Hot Topics Edge Comput.
  ident: ref5
  article-title: Are FPGAs suitable for edge computing?
– ident: ref2
  doi: 10.1016/0196-6774(89)90005-9
– ident: ref32
  doi: 10.1007/s11265-022-01821-z
– ident: ref4
  doi: 10.1002/9781119079231
– start-page: 7–1
  volume-title: A software package for partitioning unstructured graphs, partitioning meshes, and computing fill-reducing orderings of sparse matrices
  year: 1998
  ident: ref37
– ident: ref13
  doi: 10.1109/HPCA47549.2020.00030
– ident: ref23
  doi: 10.1145/3503221.3508431
– ident: ref30
  doi: 10.1145/3293883.3295712
– ident: ref34
  doi: 10.1137/S0895479894278952
– ident: ref9
  doi: 10.1109/TCAD.2023.3281719
– ident: ref19
  doi: 10.1137/130948811
– volume: 33
  start-page: 667
  issue: 3
  year: 2014
  ident: ref7
  article-title: The algorithms for FPGA implementation of sparse matrices multiplication
  publication-title: Comput. Informat.
– ident: ref18
  doi: 10.1109/ICPP.2017.19
– ident: ref3
  doi: 10.1145/3140659.3080254
– ident: ref21
  doi: 10.1109/IPDPS.2014.47
– ident: ref8
  doi: 10.1109/FCCM48280.2020.00028
– ident: ref40
  doi: 10.1109/IPDPS.2008.4536313
– ident: ref39
  doi: 10.1145/3208040.3208062
– volume-title: CUSP: Generic Parallel Algorithms for Sparse Matrix and Graph Computations, Version 0.5.0
  year: 2014
  ident: ref17
– ident: ref20
  doi: 10.1145/3293883.3295701
– ident: ref25
  doi: 10.1109/ICPR.2018.8545462
– ident: ref26
  doi: 10.1109/ASAP.2017.7995254
– ident: ref36
  doi: 10.1007/978-1-4615-8675-3_14
– ident: ref22
  doi: 10.1145/3332466.3374521
– ident: ref28
  doi: 10.1145/3458744.3473352
– ident: ref24
  doi: 10.1109/MM.2019.2930057
– volume-title: Proc. GPU Technol. Conf.
  ident: ref16
  article-title: Cusparse library
– ident: ref33
  doi: 10.1016/j.micpro.2011.05.005
– ident: ref6
  doi: 10.1002/cta.796
– ident: ref41
  doi: 10.1109/IPDPS.2014.125
– ident: ref11
  doi: 10.1145/2049662.2049663
– ident: ref14
  doi: 10.1109/JSSC.2019.2960480
– ident: ref38
  doi: 10.1145/1583991.1584053
– ident: ref31
  doi: 10.1145/331532.331562
– ident: ref43
  doi: 10.1109/CIT.2010.208
– ident: ref45
  doi: 10.23919/DATE56975.2023.10136958
– ident: ref10
  doi: 10.1109/MICRO50266.2020.00068
– start-page: 75
  volume-title: Proc. SpringSim (HPS)
  ident: ref42
  article-title: Accelerating the LOBPCG method on GPUs using a blocked sparse matrix vector product
SSID ssj0014490
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 633
SubjectTerms Algorithms
Benchmarks
Computer architecture
Computer memory
Embedded systems
Field programmable gate arrays
Field-programmable gate array (FPGA)
Format
general sparse matrix–matrix multiplication (SpGEMM)
Graphics processing units
Gustavson’s algorithm
Hardware
Indexes
Machine learning
Mathematical analysis
Matrix algebra
Matrix converters
Memory management
Multiplication
Network latency
OpenCL
Performance enhancement
Preprocessing
reconfigurable computing
Resource utilization
Sparse matrices
Sparsity
Title FSpGEMM: A Framework for Accelerating Sparse General Matrix-Matrix Multiplication Using Gustavson's Algorithm on FPGAs
URI https://ieeexplore.ieee.org/document/10419187
https://www.proquest.com/docview/2973251059
Volume 32
WOSCitedRecordID wos001168601500001
hasFullText 1
inHoldings 1
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 1557-9999
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014490
  issn: 1063-8210
  databaseCode: RIE
  dateStart: 19930101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=FSpGEMM%3A+A+Framework+for+Accelerating+Sparse+General+Matrix%E2%80%93Matrix+Multiplication+Using+Gustavson%E2%80%99s+Algorithm+on+FPGAs&rft.jtitle=IEEE+transactions+on+very+large+scale+integration+%28VLSI%29+systems&rft.au=Bank+Tavakoli%2C+Erfan&rft.au=Riera%2C+Michael&rft.au=Quraishi%2C+Masudul+Hassan&rft.au=Ren%2C+Fengbo&rft.date=2024-04-01&rft.issn=1063-8210&rft.eissn=1557-9999&rft.volume=32&rft.issue=4&rft.spage=633&rft.epage=644&rft_id=info:doi/10.1109%2FTVLSI.2024.3355499&rft.externalDBID=n%2Fa&rft.externalDocID=10_1109_TVLSI_2024_3355499