Alternate Path μ-op Cache Prefetching

Saved in:
Detailed bibliography
Published in: 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pp. 1230–1245
Main authors: Singh, Sawan; Perais, Arthur; Jimborean, Alexandra; Ros, Alberto
Medium: Conference paper
Language: English
Published: IEEE, 29 June 2024
Description
Summary: Datacenter applications are well-known for their large code footprints. This has caused frontend design to evolve by implementing decoupled fetching and large prediction structures — branch predictors and Branch Target Buffers (BTBs) — to mitigate the stagnating size of the instruction cache by prefetching instructions well in advance. In addition, many designs feature a micro-operation (μ-op) cache, which primarily provides power savings by bypassing the instruction cache and decoders once warmed up. However, this μ-op cache often has lower reach than the instruction cache, and it is not filled speculatively by the decoupled fetcher. As a result, the μ-op cache is often over-subscribed by datacenter applications, to the point of becoming a burden. This paper first shows that, because of this pressure, blindly prefetching into the μ-op cache using state-of-the-art standalone prefetchers would not provide significant gains. As a consequence, this paper proposes to prefetch only critical μ-ops into the μ-op cache, focusing on the execution points where the μ-op cache provides the most gains: pipeline refills. Concretely, we use hard-to-predict conditional branches as indicators that a pipeline refill is likely to happen in the near future, and prefetch into the μ-op cache the μ-ops that belong to the path opposed to the predicted path, which we call the alternate path. Identifying hard-to-predict branches requires no additional state if the branch predictor confidence is used to classify branches. Including extra alternate branch predictors with a limited budget (8.95 KB to 12.95 KB), our proposal provides average speedups of 1.9% to 2%, and as high as 12% on a subset of CVP-1 traces.
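The abstract's core mechanism — classify a conditional branch as hard-to-predict from the branch predictor's confidence, and if so, install the μ-ops of the *non-predicted* (alternate) path into the μ-op cache so a misprediction refill hits there — can be sketched in a few lines. This is a minimal illustrative model, not the paper's implementation: the 2-bit counter, threshold, and all names are assumptions.

```python
class ConfidenceCounter:
    """Illustrative saturating confidence counter for one branch:
    increments on a correct prediction, resets on a misprediction.
    Low confidence marks the branch as hard to predict."""

    def __init__(self, bits=2, threshold=2):
        self.max = (1 << bits) - 1
        self.threshold = threshold
        self.value = 0

    def update(self, prediction_correct):
        self.value = min(self.value + 1, self.max) if prediction_correct else 0

    def is_hard_to_predict(self):
        return self.value < self.threshold


def alternate_path_prefetch(branch, conf, uop_cache, decode):
    """If the branch is hard to predict, decode the alternate path
    (the direction the predictor did NOT choose) and fill the μ-op
    cache with its μ-ops, anticipating a pipeline refill.

    branch: dict with 'predicted_taken', 'taken_target', 'fallthrough'
    decode: hypothetical helper returning [(pc, uop), ...] for a path.
    """
    if not conf.is_hard_to_predict():
        return  # confident prediction: do not pollute the μ-op cache
    alternate_pc = (branch['fallthrough'] if branch['predicted_taken']
                    else branch['taken_target'])
    for pc, uop in decode(alternate_pc):
        uop_cache[pc] = uop
```

A usage example: after a misprediction resets the counter, a subsequent predicted-taken branch triggers a prefetch of its fall-through (alternate) path, while a high-confidence branch triggers nothing.
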
DOI:10.1109/ISCA59077.2024.00092