HIVE: A High-Priority Victim Cache for Accelerating GPU Memory Accesses
| Published in: | 2025 62nd ACM/IEEE Design Automation Conference (DAC), pp. 1-7 |
|---|---|
| Main authors: | , , , , , , , |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 22.06.2025 |
| Online access: | Full text |
| Abstract: | The victim cache was originally designed as a secondary cache to handle misses in the L1 data (L1D) cache in CPUs. However, this design is often sub-optimal for GPUs. Accessing the high-latency L1D cache and its victim cache can lead to significant latency overhead, severely degrading the performance of certain applications. We introduce HIVE, a high-priority victim cache designed to accelerate GPU memory accesses. HIVE handles memory requests first, before they reach the L1D cache. Our experimental results show that HIVE achieves an average performance improvement of 77.1% and 21.7% compared to the baseline and the state-of-the-art architecture, respectively. |
| DOI: | 10.1109/DAC63849.2025.11133338 |
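
The abstract's central architectural point is the reordered access path: the victim cache is probed before, not after, the L1D cache, so hits in the small structure avoid the L1D latency entirely. The C++ sketch below is a minimal illustration of that lookup order using a toy two-level LRU model; the structure sizes, latencies, access trace, and the fill policy (L1D victims refilling the victim cache) are illustrative assumptions, not details taken from the paper.

```cpp
// Toy model of a high-priority victim cache probed ahead of the L1D cache.
// All parameters below are assumptions for illustration only.
#include <cstdint>
#include <cstdio>
#include <list>
#include <unordered_map>

class LruCache {
public:
    LruCache(size_t lines, int hit_latency)
        : capacity_(lines), hit_latency_(hit_latency) {}

    // Probe only: returns true and refreshes the LRU position on a hit.
    bool lookup(uint64_t line) {
        auto it = map_.find(line);
        if (it == map_.end()) return false;
        lru_.splice(lru_.begin(), lru_, it->second);  // move to MRU
        return true;
    }

    // Insert a line, reporting any evicted victim through the out-parameters.
    void insert(uint64_t line, uint64_t* victim, bool* evicted) {
        *evicted = false;
        if (map_.count(line)) return;  // already resident
        if (lru_.size() == capacity_) {
            *victim = lru_.back();
            *evicted = true;
            map_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(line);
        map_[line] = lru_.begin();
    }

    int hit_latency() const { return hit_latency_; }

private:
    size_t capacity_;
    int hit_latency_;
    std::list<uint64_t> lru_;
    std::unordered_map<uint64_t, std::list<uint64_t>::iterator> map_;
};

int main() {
    LruCache hive(/*lines=*/16, /*hit_latency=*/4);   // small, fast victim cache (assumed)
    LruCache l1d(/*lines=*/256, /*hit_latency=*/30);  // conventional L1D (assumed)
    const int kLowerLevelLatency = 200;               // L2/DRAM on a full miss (assumed)

    uint64_t trace[] = {0, 1, 2, 0, 1, 3, 0, 2};      // illustrative cache-line trace
    long total = 0;

    for (uint64_t line : trace) {
        if (hive.lookup(line)) {
            // Hit in the high-priority victim cache: L1D is never touched.
            total += hive.hit_latency();
        } else if (l1d.lookup(line)) {
            // Victim-cache miss falls back to the slower L1D.
            total += hive.hit_latency() + l1d.hit_latency();
        } else {
            // Miss in both: fetch from the lower level and fill the L1D.
            total += hive.hit_latency() + l1d.hit_latency() + kLowerLevelLatency;
            uint64_t victim; bool evicted;
            l1d.insert(line, &victim, &evicted);
            if (evicted) {
                // L1D victims refill the victim cache (simplified policy).
                uint64_t v2; bool e2;
                hive.insert(victim, &v2, &e2);
            }
        }
    }
    std::printf("total cycles for trace: %ld\n", total);
    return 0;
}
```

In this toy model, the benefit of the reordering comes from the first branch: any line still resident in the small structure is served at its low latency, whereas a conventional victim cache would only be consulted after paying the L1D lookup cost.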