Taming the killer microsecond

Bibliographic Details
Published in: 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 627-640
Main Authors: Cho, Shenghsun; Suresh, Amoghavarsha; Palit, Tapti; Ferdman, Michael; Honarmand, Nima
Format: Conference Paper
Language: English
Published: Piscataway, NJ, USA: IEEE Press, October 20, 2018
Series: ACM Conferences
ISBN: 9781538662403, 153866240X
Description
Abstract: Modern applications require access to vast datasets at low latencies. Emerging memory technologies can enable faster access to significantly larger volumes of data than is possible today. However, these technologies have a significant caveat: their random access latency falls in a range that cannot be effectively hidden using current hardware and software latency-hiding techniques, namely the microsecond range. Finding the root cause of this "Killer Microsecond" problem is the subject of this work. Our goal is to answer the critical question of why existing hardware and software cannot hide microsecond-level latencies, and whether drastic changes to existing platforms are necessary to use microsecond-latency devices effectively. We use an FPGA-based microsecond-latency device emulator, a carefully crafted microbenchmark, and three open-source data-intensive applications to show that existing systems are indeed incapable of effectively hiding such latencies. However, after uncovering the root causes of the problem, we show that simple changes to existing systems are sufficient to support microsecond-latency devices. In particular, we show that by replacing on-demand memory accesses with prefetch requests followed by fast user-mode context switches (to increase access-level parallelism), and by enlarging the hardware queues that track in-flight accesses (to accommodate many parallel accesses), conventional architectures can effectively hide microsecond-level latencies and approach the performance of DRAM-based implementations of the same applications. In other words, successful use of microsecond-level devices is not predicated on drastically new hardware and software architectures.
DOI: 10.1109/MICRO.2018.00057