Linearly compressed pages: A low-complexity, low-latency main memory compression framework

Detailed bibliography
Published in: MICRO-46: Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture, December 7-11, 2013, University of California, Davis, pp. 172-184
Main authors: Pekhimenko, Gennady; Seshadri, Vivek; Kim, Yoongu; Xin, Hongyi; Mutlu, Onur; Gibbons, Phillip B.; Kozuch, Michael A.; Mowry, Todd C.
Format: Conference paper
Language: English
Published: ACM, December 1, 2013
Online access: Get full text
Abstract Data compression is a promising approach for meeting the increasing memory capacity demands expected in future systems. Unfortunately, existing compression algorithms do not translate well when directly applied to main memory because they require the memory controller to perform non-trivial computation to locate a cache line within a compressed memory page, thereby increasing access latency and degrading system performance. Prior proposals for addressing this performance degradation problem are either costly or energy inefficient. By leveraging the key insight that all cache lines within a page should be compressed to the same size, this paper proposes a new approach to main memory compression - Linearly Compressed Pages (LCP) - that avoids the performance degradation problem without requiring costly or energy-inefficient hardware. We show that any compression algorithm can be adapted to fit the requirements of LCP, and we specifically adapt two previously-proposed compression algorithms to LCP: Frequent Pattern Compression and Base-Delta-Immediate Compression. Evaluations using benchmarks from SPEC CPU2006 and five server benchmarks show that our approach can significantly increase the effective memory capacity (by 69% on average). In addition to the capacity gains, we evaluate the benefit of transferring consecutive compressed cache lines between the memory controller and main memory. Our new mechanism considerably reduces the memory bandwidth requirements of most of the evaluated benchmarks (by 24% on average), and improves overall performance (by 6.1%/13.9%/10.7% for single-/two-/four-core workloads on average) compared to a baseline system that does not employ main memory compression. LCP also decreases energy consumed by the main memory subsystem (by 9.5% on average over the best prior mechanism).
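The abstract's key insight can be made concrete with a short sketch (hypothetical function names, not the authors' code): when every cache line in a page is compressed to the same slot size, as in LCP, the memory controller locates line i with a single multiply-add, whereas a layout with variable per-line sizes must sum the sizes of all preceding lines (or consult per-line metadata) on the critical access path.

```python
def lcp_line_address(page_base: int, line_index: int, slot_size: int) -> int:
    """LCP-style lookup: every line occupies a fixed-size slot, so
    locating line i is one multiply-add with no metadata walk."""
    return page_base + line_index * slot_size

def variable_line_address(page_base: int, line_index: int,
                          sizes: list[int]) -> int:
    """Variable-size layout: per-line compressed sizes differ, so
    locating line i requires a prefix sum over the preceding lines."""
    return page_base + sum(sizes[:line_index])

# Example: 64-byte lines each compressed into a 16-byte slot.
# Line 3 of a page at 0x10000 sits at 0x10000 + 3 * 16 = 0x10030.
addr = lcp_line_address(0x10000, 3, 16)
```

This sketch is only illustrative of the addressing arithmetic; the paper's actual design also handles lines that do not compress to the common size via exception storage and metadata, which is omitted here.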
Author Yoongu Kim
Pekhimenko, Gennady
Kozuch, Michael A.
Mowry, Todd C.
Hongyi Xin
Seshadri, Vivek
Mutlu, Onur
Gibbons, Phillip B.
Author_xml – sequence: 1
  givenname: Gennady
  surname: Pekhimenko
  fullname: Pekhimenko, Gennady
  email: gpekhime@cs.cmu.edu
  organization: Carnegie Mellon Univ., Pittsburgh, PA, USA
– sequence: 2
  givenname: Vivek
  surname: Seshadri
  fullname: Seshadri, Vivek
  email: vseshadr@cs.cmu.edu
  organization: Carnegie Mellon Univ., Pittsburgh, PA, USA
– sequence: 3
  givenname: Yoongu
  surname: Kim
  fullname: Kim, Yoongu
  email: yoongukim@cmu.edu
  organization: Carnegie Mellon Univ., Pittsburgh, PA, USA
– sequence: 4
  givenname: Hongyi
  surname: Xin
  fullname: Xin, Hongyi
  email: hxin@cs.cmu.edu
  organization: Carnegie Mellon Univ., Pittsburgh, PA, USA
– sequence: 5
  givenname: Onur
  surname: Mutlu
  fullname: Mutlu, Onur
  email: onur@cmu.edu
  organization: Carnegie Mellon Univ., Pittsburgh, PA, USA
– sequence: 6
  givenname: Phillip B.
  surname: Gibbons
  fullname: Gibbons, Phillip B.
  email: phillip.b.gibbons@intel.com
  organization: Intel Labs Pittsburgh, Pittsburgh, PA, USA
– sequence: 7
  givenname: Michael A.
  surname: Kozuch
  fullname: Kozuch, Michael A.
  email: michael.a.kozuch@intel.com
  organization: Intel Labs Pittsburgh, Pittsburgh, PA, USA
– sequence: 8
  givenname: Todd C.
  surname: Mowry
  fullname: Mowry, Todd C.
  email: tcm@cs.cmu.edu
  organization: Carnegie Mellon Univ., Pittsburgh, PA, USA
ContentType Conference Proceeding
DBID 6IE
6IL
CBEJK
RIE
RIL
DOI 10.1145/2540708.2540724
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Xplore POP ALL
IEEE Xplore All Conference Proceedings
IEEE/IET Electronic Library
IEEE Proceedings Order Plans (POP All) 1998-Present
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISBN 9781450326384
1450326382
EndPage 184
ExternalDocumentID 7847624
Genre orig-research
GroupedDBID 6IE
6IL
ACM
ALMA_UNASSIGNED_HOLDINGS
APO
CBEJK
GUFHI
LHSKQ
RIE
RIL
IEDL.DBID RIE
IngestDate Wed Aug 27 04:38:28 EDT 2025
IsPeerReviewed false
IsScholarly false
Language English
LinkModel DirectLink
PageCount 13
ParticipantIDs ieee_primary_7847624
PublicationCentury 2000
PublicationDate 2013-Dec.
PublicationDateYYYYMMDD 2013-12-01
PublicationDate_xml – month: 12
  year: 2013
  text: 2013-Dec.
PublicationDecade 2010
PublicationTitle MICRO 46 : proceedings of the 46th annual IEEE/ACM International Symposium on Microarchitecture : December 7-11th, 2013, University of California, Davis
PublicationTitleAbbrev MICRO
PublicationYear 2013
Publisher ACM
Publisher_xml – name: ACM
SSID ssj0001254896
Snippet Data compression is a promising approach for meeting the increasing memory capacity demands expected in future systems. Unfortunately, existing compression...
SourceID ieee
SourceType Publisher
StartPage 172
SubjectTerms Bandwidth
Compression algorithms
Data compression
DRAM
Encoding
Memory
Memory Bandwidth
Memory Capacity
Memory Controller
Memory management
Operating systems
Random access memory
Title Linearly compressed pages: A low-complexity, low-latency main memory compression framework
URI https://ieeexplore.ieee.org/document/7847624
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider IEEE