An efficient method for autoencoder‐based collaborative filtering
| Published in: | Concurrency and Computation, Vol. 31, No. 23 |
|---|---|
| Main Authors: | Wang, Yi‐Lei; Tang, Wen‐Zhe; Yang, Xian‐Jun; Wu, Ying‐Jie; Chen, Fu‐Ji (all Fuzhou University) |
| Format: | Journal Article |
| Language: | English |
| Published: | Hoboken: Wiley Subscription Services, Inc, 10 December 2019 |
| Subjects: | Algorithms; Autoencoder; Collaboration; Collaborative filtering; Deep learning; Filtration; Machine learning; Neural networks; Recommender systems; Training |
| ISSN: | 1532-0626 (print); 1532-0634 (online) |
| DOI: | 10.1002/cpe.4507 |
| ORCID: | 0000-0002-5201-3159 (Wu, Ying‐Jie) |
| Funding: | National Natural Science Foundation of China (61300026); National Natural Science Foundation of Fujian Province (2014J01230, 2017J01754) |
| Copyright: | 2019 John Wiley & Sons, Ltd. |
| Abstract: | Summary: Collaborative filtering (CF) is a widely used technique in recommender systems. With the rapid development of deep learning, neural network‐based CF models have gained great attention in recent years, especially autoencoder‐based CF models. Although the autoencoder‐based CF model is faster than some existing neural network‐based models (e.g., deep Restricted Boltzmann Machine‐based CF), it is still impractical for handling extremely large‐scale data. In this paper, we verify in practice that most non‐zero entries of the input matrix are concentrated in a few rows. Exploiting this sparse characteristic, we propose a new method for training autoencoder‐based CF. We run experiments on two popular datasets, MovieLens 1M and MovieLens 10M. Experimental results show that our algorithm yields orders-of-magnitude speed-ups when training (stacked) autoencoder‐based CF models while achieving performance comparable to existing state‐of‐the‐art models. |
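
As a rough illustration of the setup the abstract describes, the sketch below trains an AutoRec-style autoencoder on a toy rating matrix whose non-zero entries are concentrated in a few rows, computing the reconstruction loss only over observed entries. All sizes, variable names, and hyperparameters are assumptions made for this example; it shows a baseline dense training loop, not the accelerated method proposed in the paper.

```python
# Minimal sketch of autoencoder-based CF with a masked reconstruction loss.
# Sizes, names, and hyperparameters are illustrative assumptions; this is
# NOT the accelerated training algorithm proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_hidden = 100, 500, 32

# Toy rating matrix whose non-zero entries are concentrated in a few rows,
# mimicking the skew the abstract reports for MovieLens-style data.
R = np.zeros((n_users, n_items))
heavy = rng.choice(n_users, size=10, replace=False)
for u in range(n_users):
    k = 200 if u in heavy else 5                 # a few very active users
    cols = rng.choice(n_items, size=k, replace=False)
    R[u, cols] = rng.integers(1, 6, size=k)      # ratings in 1..5
M = (R > 0).astype(float)                        # observation mask

# One-hidden-layer autoencoder: items -> hidden -> items.
W1 = rng.normal(0.0, 0.01, (n_items, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_hidden, n_items)); b2 = np.zeros(n_items)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, lam = 0.005, 1e-3                            # learning rate, L2 weight
for epoch in range(20):
    H = sigmoid(R @ W1 + b1)                     # encode each user's rating row
    R_hat = H @ W2 + b2                          # linear decoder
    err = M * (R_hat - R)                        # loss only on observed entries

    # Gradients of 0.5*||M*(R_hat - R)||^2 + 0.5*lam*(||W1||^2 + ||W2||^2)
    dW2 = H.T @ err + lam * W2
    db2 = err.sum(axis=0)
    dZ1 = (err @ W2.T) * H * (1.0 - H)
    dW1 = R.T @ dZ1 + lam * W1
    db1 = dZ1.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    rmse = np.sqrt((err ** 2).sum() / M.sum())
    print(f"epoch {epoch:2d}  RMSE on observed entries: {rmse:.4f}")
```

The mask `M` and the row-skewed sparsity of `R` are the kind of structure the paper targets: in this dense formulation every row costs the same to process, even though most rows contribute almost no observed entries.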