An Accelerator Design Using a MTCA Decomposition Algorithm for CNNs

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 20, No. 19, p. 5558
Main Authors: Zhao, Yunping; Lu, Jianzhuang; Chen, Xiaowen
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 28 September 2020
Subjects: Algorithms; CNNs accelerator; Efficiency; Field programmable gate arrays; hardware architecture; parallel computing algorithm; Software
ISSN: 1424-8220
Online Access: Get full text
Abstract Due to the high throughput and high computing capability of convolutional neural networks (CNNs), researchers are paying increasing attention to the design of CNN hardware accelerator architectures. Accordingly, in this paper, we propose a block parallel computing algorithm based on the matrix transformation computing algorithm (MTCA) to realize the convolution expansion and resolve the blocking problem of the intermediate matrix, enabling highly parallel implementation in hardware. Moreover, we provide a specific calculation method for the optimal partitioning of the matrix multiplication to optimize performance. In our evaluation, the proposed method saves more than 60% of hardware storage space compared with the im2col (image-to-column) approach; in the case of large-scale convolutions, it saves nearly 82% of storage space. Under the accelerator architecture framework designed in this paper, we achieve 26.7–33.4 GFLOPS (depending on the convolution type) on an FPGA (Field-Programmable Gate Array) by reducing bandwidth requirements and improving data reusability. The design is 1.2×–4.0× faster than memory-efficient convolution (MEC) and im2col, respectively, and represents an effective solution for a large-scale convolution accelerator.
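For context on the storage comparison in the abstract: the im2col (image-to-column) baseline lowers a convolution to a single matrix multiplication by copying every receptive field of the input into one column of an expanded intermediate matrix. The minimal Python/NumPy sketch below illustrates that lowering and the resulting storage blow-up; it is not the authors' MTCA algorithm, and the tensor shapes, stride handling, and function names are assumptions made only for illustration.

import numpy as np

def im2col(x, kh, kw, stride=1):
    """Expand a (C, H, W) feature map into the (C*kh*kw, out_h*out_w)
    matrix used by matrix-multiplication-based convolution."""
    c, h, w = x.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    cols = np.zeros((c * kh * kw, out_h * out_w), dtype=x.dtype)
    col = 0
    for i in range(0, h - kh + 1, stride):
        for j in range(0, w - kw + 1, stride):
            # Each receptive field becomes one column of the expanded matrix.
            cols[:, col] = x[:, i:i + kh, j:j + kw].reshape(-1)
            col += 1
    return cols, out_h, out_w

def conv_by_matmul(x, weights, stride=1):
    """weights: (K, C, kh, kw); returns (K, out_h, out_w) via one GEMM."""
    k, c, kh, kw = weights.shape
    cols, out_h, out_w = im2col(x, kh, kw, stride)
    w_mat = weights.reshape(k, c * kh * kw)
    return (w_mat @ cols).reshape(k, out_h, out_w)

if __name__ == "__main__":
    x = np.random.rand(3, 32, 32).astype(np.float32)   # assumed C=3, 32x32 input
    w = np.random.rand(8, 3, 3, 3).astype(np.float32)  # assumed K=8 filters of 3x3
    y = conv_by_matmul(x, w)
    cols, _, _ = im2col(x, 3, 3)
    print(y.shape, x.size, cols.size)  # (8, 30, 30) 3072 24300

For a unit-stride k×k kernel the expanded matrix stores roughly k*k copies of each input element, which is the intermediate-storage overhead that the paper's MTCA-based blocking is reported to reduce by 60-82%.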
Author Zhao, Yunping
Chen, Xiaowen
Lu, Jianzhuang
AuthorAffiliation College of Computer, National University of Defense Technology, Changsha 410073, China; zhaoyunping@nudt.edu.cn (Y.Z.); xwchen@nudt.edu.cn (X.C.)
ContentType Journal Article
Copyright 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
2020 by the authors. 2020
DOI 10.3390/s20195558
DatabaseName CrossRef
ProQuest Central (Corporate)
Health & Medical Collection
ProQuest Central (purchase pre-March 2016)
Medical Database (Alumni Edition)
Hospital Premium Collection
Hospital Premium Collection (Alumni Edition)
ProQuest Central (Alumni) (purchase pre-March 2016)
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Central Essentials
ProQuest Central
ProQuest One Community College
ProQuest Central
Health Research Premium Collection
Health Research Premium Collection (Alumni)
ProQuest Health & Medical Complete (Alumni)
ProQuest Health & Medical Collection
Medical Database
Proquest Central Premium
ProQuest One Academic (New)
ProQuest Publicly Available Content Database
ProQuest Health & Medical Research Collection
ProQuest One Academic Middle East (New)
ProQuest One Health & Nursing
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Academic (retired)
ProQuest One Academic UKI Edition
ProQuest Central China
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Directory of Open Access Journals
DatabaseTitle CrossRef
Publicly Available Content Database
ProQuest One Academic Middle East (New)
ProQuest Central Essentials
ProQuest Health & Medical Complete (Alumni)
ProQuest Central (Alumni Edition)
ProQuest One Community College
ProQuest One Health & Nursing
ProQuest Central China
ProQuest Central
ProQuest Health & Medical Research Collection
Health Research Premium Collection
Health and Medicine Complete (Alumni Edition)
ProQuest Central Korea
Health & Medical Research Collection
ProQuest Central (New)
ProQuest Medical Library (Alumni)
ProQuest One Academic Eastern Edition
ProQuest Hospital Collection
Health Research Premium Collection (Alumni)
ProQuest Hospital Collection (Alumni)
ProQuest Health & Medical Complete
ProQuest Medical Library
ProQuest One Academic UKI Edition
ProQuest One Academic
ProQuest One Academic (New)
ProQuest Central (Alumni)
MEDLINE - Academic
DatabaseTitleList CrossRef; MEDLINE - Academic; Publicly Available Content Database
Discipline Engineering
EISSN 1424-8220
ExternalDocumentID oai_doaj_org_article_3bad08f5ec83459f8a7b1e4a7f2ec66b
PMC7583864
10_3390_s20195558
ISICitedReferencesCount 8
ISSN 1424-8220
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 19
Language English
License Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
ORCID 0000-0002-5600-3740
OpenAccessLink https://doaj.org/article/3bad08f5ec83459f8a7b1e4a7f2ec66b
PMID 32998366
PQID 2550319412
PQPubID 2032333
PublicationDate 2020-09-28
PublicationPlace Basel
PublicationTitle Sensors (Basel, Switzerland)
PublicationYear 2020
Publisher MDPI AG
SourceID doaj
pubmedcentral
proquest
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Enrichment Source
Index Database
StartPage 5558
SubjectTerms Algorithms
CNNs accelerator
Efficiency
Field programmable gate arrays
hardware architecture
parallel computing algorithm
Software
Title An Accelerator Design Using a MTCA Decomposition Algorithm for CNNs
URI https://www.proquest.com/docview/2550319412
https://www.proquest.com/docview/2447842124
https://pubmed.ncbi.nlm.nih.gov/PMC7583864
https://doaj.org/article/3bad08f5ec83459f8a7b1e4a7f2ec66b
Volume 20