Attacking Defocus Detection With Blur-Aware Transformation for Defocus Deblurring
| Published in: | IEEE Transactions on Multimedia, Vol. 26, pp. 1-11 |
|---|---|
| Main authors: | Zhao, Wenda; Hu, Guang; Wei, Fei; Wang, Haipeng; He, You; Lu, Huchuan |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Piscataway: IEEE, 01.01.2024 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects: | Blur-aware transfer; Bridges; Convolution; Defocus detection attack; Feature extraction; Generative adversarial networks; Robustness; Task analysis; Training; Weakly-supervised defocus deblurring |
| ISSN: | 1520-9210, 1941-0077 |
| Online access: | Get full text |
| Abstract: | Fully-supervised defocus deblurring has made significant progress, but training such deep models requires abundant paired ground truth, which is expensive and error-prone to collect. This paper attempts to train a defocus deblurring model without paired ground truth or any other unpaired data. Related reblur-to-deblur schemes generally rely on physics-based or GAN-based reblurring, and thus suffer from non-robust blur kernels and GAN-generated hallucinations. In addition, the domain gap between realistic blurred images and reblurred images hinders deblurring performance. To address these challenges, we propose a weakly-supervised defocus deblurring framework based on defocus detection attack. On the one hand, we build a focused area detection attack (FADA) that forces the focused area to be reblurred, thereby reversing its detection result under a pretrained defocus blur detection network; a blur-aware transfer modulated from the defocused region further helps FADA render a robust reblurred region. On the other hand, we implement a defocused region detection attack that guides the realistic blurred region to be deblurred while the deblurring network is trained with simulated-paired areas. Extensive experiments on three widely-used datasets verify the effectiveness of our framework. Code is available at: https://github.com/wdzhao123/ADDBAT. |
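The abstract describes the core training signal: a pretrained defocus blur detection network is attacked so that a reblurred focused area is detected as blurred, reversing its original detection. Below is a minimal, hypothetical PyTorch-style sketch of that focused-area detection attack (FADA) loss, intended only to illustrate the idea; the names `fada_attack_loss`, `reblur_net`, `blur_detector`, and `focus_mask` are assumptions of this sketch, not the authors' released code (see the GitHub link in the abstract for the official implementation).

```python
import torch
import torch.nn.functional as F

def fada_attack_loss(image, focus_mask, reblur_net, blur_detector):
    """Sketch of a focused-area detection attack (FADA) loss.

    image:         input photo, shape (B, 3, H, W)
    focus_mask:    per-pixel map in [0, 1], 1 where the image is in focus
    reblur_net:    trainable network that synthesizes defocus blur
    blur_detector: pretrained, frozen network that outputs a per-pixel
                   probability of being defocus-blurred (after a sigmoid)
    """
    # Reblur only the focused region; leave the already-blurred region as is.
    reblurred = image * (1.0 - focus_mask) + reblur_net(image) * focus_mask

    # The detector's weights are assumed frozen (requires_grad=False),
    # so gradients from this loss only update reblur_net.
    blur_prob = blur_detector(reblurred)

    # Attack objective: inside the originally focused region, the detector
    # should now answer "blurred" (label 1), i.e. its detection is reversed.
    target = torch.ones_like(blur_prob)
    return F.binary_cross_entropy(blur_prob, target, weight=focus_mask)
```

Because the only supervision here comes from the frozen detector and a focus mask, no paired sharp/blurred ground truth is needed, which is what makes the scheme weakly supervised; the defocused-region detection attack mentioned in the abstract would play the symmetric role when training the deblurring network.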
| Author | He, You; Zhao, Wenda; Lu, Huchuan; Wang, Haipeng; Hu, Guang; Wei, Fei |
| Author affiliations | Wenda Zhao, Guang Hu, Fei Wei, Huchuan Lu: Key Laboratory of Intelligent Control and Optimization for Industrial Equipment of Ministry of Education and School of Information and Communication Engineering, Dalian University of Technology, Dalian, China. Haipeng Wang, You He: Research Institute of Information Fusion, Naval Aviation University, Yantai, China. |
| CODEN | ITMUF8 |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| DOI | 10.1109/TMM.2023.3334023 |
| Discipline | Engineering; Computer Science |
| EISSN | 1941-0077 |
| EndPage | 11 |
| Genre | orig-research |
| GrantInformation | National Natural Science Foundation of China (grants 62176038, 61801077, U1903215; funder ID 10.13039/501100001809) |
| ISICitedReferencesCount | 5 |
| ISSN | 1520-9210 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-6668-9758 0000-0003-4201-1122 0000-0002-6111-340X 0000-0002-7463-6103 |
| PageCount | 11 |
| PublicationDate | 2024-01-01 |
| PublicationPlace | Piscataway |
| PublicationTitle | IEEE transactions on multimedia |
| PublicationTitleAbbrev | TMM |
| PublicationYear | 2024 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1 |
| SubjectTerms | Blur-aware transfer; Bridges; Convolution; Defocus detection attack; Feature extraction; Generative adversarial networks; Robustness; Task analysis; Training; Weakly-supervised defocus deblurring |
| Title | Attacking Defocus Detection With Blur-Aware Transformation for Defocus Deblurring |
| URI | https://ieeexplore.ieee.org/document/10328474 https://www.proquest.com/docview/2973238692 |
| Volume | 26 |