TCL: Time-Dependent Clustering Loss for Optimizing Post-Training Feature Map Quantization for Partitioned DNNs


Published in: IEEE Access, Volume 13, pp. 103640-103648
Main authors: Berg, Oscar Artur Bernd; Saqib, Eiraj; Jantsch, Axel; Shallari, Irida; Krug, Silvia; Sanchez Leal, Isaac; O'Nils, Mattias
Medium: Journal Article
Language: English
Publication details: Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2025
ISSN/EISSN: 2169-3536
Online access: full text available
Abstract This paper introduces an enhanced approach for deploying deep learning models on resource-constrained IoT devices by combining model partitioning, autoencoder-based compression, quantization with Time-Dependent Clustering Loss (TCL) regularization, and lossless compression to reduce communication overhead and minimize latency while maintaining accuracy. The autoencoder compresses feature maps at the partitioning point before quantization, effectively reducing data size while preserving accuracy. TCL regularization clusters activations at the partitioning point to align with the quantization levels, minimizing quantization error and preserving accuracy even under extreme low-bitwidth quantization. Our method is evaluated on classification models (ResNet-50, EfficientNetV2-S) and an object detection model (YOLOv10n) using the TinyImageNet-200 and Pascal VOC datasets. Deployed on a Raspberry Pi 4 B and a GPU, each model is tested across various partitioning points, quantization bit-widths (1-bit, 2-bit, and 3-bit), communication data rates (1 MB/s to 10 MB/s), and LZMA lossless compression. For a ResNet-50 partitioned after the convolutional stem block, the speed-up is 2.33× against a server solution and 1.85× against the all-in-node solution, with an accuracy drop of less than one percentage point. The proposed framework offers a scalable solution for deploying high-performance AI models on IoT devices, extending the feasibility of real-time inference in resource-constrained environments.
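The mechanism the abstract describes, clustering activations at the partitioning point so they align with the quantization levels, can be pictured as an auxiliary loss added to the task loss during training. The PyTorch sketch below is illustrative only and rests on assumptions not stated in this record: uniform levels in [0, 1], a linear ramp for the time-dependent weight, and hypothetical function names; it is not the authors' TCL implementation.

```python
import torch

def clustering_loss(activations: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    """Mean squared distance from each activation to its nearest quantization level."""
    diffs = activations.unsqueeze(-1) - levels          # compare every activation with every level
    nearest_sq = diffs.pow(2).min(dim=-1).values        # squared distance to the closest level
    return nearest_sq.mean()

def total_loss(task_loss: torch.Tensor, activations: torch.Tensor,
               bitwidth: int = 2, step: int = 0, total_steps: int = 10000) -> torch.Tensor:
    # Assumed uniform level placement for a b-bit quantizer over [0, 1].
    levels = torch.linspace(0.0, 1.0, steps=2 ** bitwidth, device=activations.device)
    # "Time-dependent" weighting: ramp the clustering term up over training (illustrative schedule).
    lambda_t = min(1.0, step / total_steps)
    return task_loss + lambda_t * clustering_loss(activations, levels)
```

In practice the activations would be captured at the chosen partitioning point, for example with a forward hook on the layer that feeds the autoencoder, and the weighted clustering term would be added to the classification or detection loss.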
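The transmission path sketched in the abstract, compress the feature map, quantize it to a few bits, apply LZMA, send, then invert everything on the server, can be approximated with standard-library tools. The snippet below is a minimal sketch under stated assumptions: uniform min-max quantization, one byte per value instead of true bit-packing, and a random array standing in for a real feature map; it is not the authors' pipeline.

```python
import lzma
import numpy as np

def quantize(fmap: np.ndarray, bits: int):
    """Uniform quantization of a feature map to 2**bits levels (returns codes plus affine params)."""
    lo, hi = float(fmap.min()), float(fmap.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0              # avoid a zero scale for constant maps
    codes = np.round((fmap - lo) / scale).astype(np.uint8)  # one byte per value for simplicity
    return codes, lo, scale

def dequantize(codes: np.ndarray, lo: float, scale: float) -> np.ndarray:
    return codes.astype(np.float32) * scale + lo

# Node side: quantize the (already autoencoder-compressed) feature map and LZMA-compress it.
fmap = np.random.rand(64, 56, 56).astype(np.float32)        # placeholder for a real feature map
codes, lo, scale = quantize(fmap, bits=2)
payload = lzma.compress(codes.tobytes())
print(f"raw: {fmap.nbytes} B  quantized: {codes.nbytes} B  after LZMA: {len(payload)} B")

# Server side: decompress, dequantize, and feed the remainder of the partitioned model.
codes_rx = np.frombuffer(lzma.decompress(payload), dtype=np.uint8).reshape(codes.shape)
fmap_rx = dequantize(codes_rx, lo, scale)
```

The payload size divided by the link data rate (the abstract's 1 MB/s to 10 MB/s range) gives the transmission term that the choice of partitioning point and bit-width trades off against node-side compute time.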
Authors and affiliations:
Oscar Artur Bernd Berg (ORCID 0009-0000-8343-9649), Department of Computer and Electrical Engineering, Mid Sweden University, Sundsvall, Sweden
Eiraj Saqib (ORCID 0000-0002-9903-1338), Department of Computer and Electrical Engineering, Mid Sweden University, Sundsvall, Sweden
Axel Jantsch (ORCID 0000-0003-2251-0004), Department of Computer and Electrical Engineering, Mid Sweden University, Sundsvall, Sweden
Irida Shallari (ORCID 0000-0002-3774-4850), Department of Computer and Electrical Engineering, Mid Sweden University, Sundsvall, Sweden
Silvia Krug (ORCID 0000-0003-0282-5471), Institut für Mikroelektronik- und Mechatronik-Systeme gemeinnützige GmbH (IMMS GmbH), Ilmenau, Germany
Isaac Sanchez Leal (ORCID 0000-0002-3351-0491), Department of Computer and Electrical Engineering, Mid Sweden University, Sundsvall, Sweden
Mattias O'Nils (ORCID 0000-0001-8607-4083; email: mattias.onils@miun.se), Department of Computer and Electrical Engineering, Mid Sweden University, Sundsvall, Sweden
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
DOI: 10.1109/ACCESS.2025.3579107
Funding: The Swedish Knowledge Foundation (funder ID: 10.13039/100003077)
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode)
Full text: https://ieeexplore.ieee.org/document/11031457
Subject terms: Accuracy, Adaptation models, Autoencoders, Clustering, CNN, Computational modeling, Constraints, Feature maps, Internet of Things, IoT, Load modeling, Machine learning, Object detection, Partitioning, Quantization, Quantization (signal), Real time, Regularization, Servers, Time dependence, Training