Taming the killer microsecond

Detailed bibliography
Published in: 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 627-640
Main authors: Cho, Shenghsun; Suresh, Amoghavarsha; Palit, Tapti; Ferdman, Michael; Honarmand, Nima
Format: Conference paper
Language: English
Published: Piscataway, NJ, USA: IEEE Press, 20 October 2018
Series: ACM Conferences
ISBN: 9781538662403, 153866240X
Description
Summary: Modern applications require access to vast datasets at low latencies. Emerging memory technologies can enable faster access to significantly larger volumes of data than is possible today. However, these memory technologies have a significant caveat: their random access latency falls in a range that cannot be effectively hidden using current hardware and software latency-hiding techniques, namely the microsecond range. Finding the root cause of this "Killer Microsecond" problem is the subject of this work. Our goal is to answer the critical question of why existing hardware and software cannot hide microsecond-level latencies, and whether drastic changes to existing platforms are necessary to use microsecond-latency devices effectively. We use an FPGA-based microsecond-latency device emulator, a carefully crafted microbenchmark, and three open-source data-intensive applications to show that existing systems are indeed incapable of effectively hiding such latencies. However, after uncovering the root causes of the problem, we show that simple changes to existing systems are sufficient to support microsecond-latency devices. In particular, we show that by replacing on-demand memory accesses with prefetch requests followed by fast user-mode context switches (to increase access-level parallelism), and by enlarging the hardware queues that track in-flight accesses (to accommodate many parallel accesses), conventional architectures can effectively hide microsecond-level latencies and approach the performance of DRAM-based implementations of the same applications. In other words, successful use of microsecond-level devices is not predicated on drastically new hardware and software architectures.
DOI: 10.1109/MICRO.2018.00057