IterL2Norm: Fast Iterative L2-Normalization



Bibliographic Details
Published in: Proceedings - Design, Automation, and Test in Europe Conference and Exhibition, pp. 1-7
Main authors: Ye, ChangMin; Sim, Yonguk; Kim, Youngchae; Jin, SeongMin; Jeong, Doo Seok
Format: Conference paper
Language: English
Published: EDAA, 31 March 2025
ISSN: 1558-1101
Online access: Full text
Description
Summary: Transformer-based large language models are memory-bound models whose operation relies on a large amount of data that is only marginally reused. Thus, the data movement between host and accelerator likely dictates the total wall-clock time. Layer normalization is one of the key workloads in the transformer model, following each multi-head attention and feed-forward network block. To reduce data movement, layer normalization needs to be performed on the same chip as the matrix-matrix multiplication engine. To this end, we introduce an iterative L2-normalization method for 1D input (IterL2Norm), ensuring fast convergence to the steady-state solution within five iteration steps and high precision, outperforming the fast inverse square root algorithm in six out of nine cases for FP32 and five out of nine for BFloat16 across the embedding lengths used in the OPT models. Implemented in 32/28 nm CMOS, the IterL2Norm macro normalizes d-dimensional vectors, where 64 ≤ d ≤ 1024, with a latency of 116-227 cycles at 100 MHz/1.05 V.
DOI: 10.23919/DATE64628.2025.10992867
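
This record does not reproduce IterL2Norm's actual recurrence, but the abstract names the fast inverse square root algorithm as the baseline it is compared against. As a rough, hedged illustration of what that baseline looks like when used to L2-normalize a 1D vector, here is a minimal C sketch: a bit-level initial guess refined by a configurable number of Newton-Raphson steps. The function names, the step count, and the normalization wrapper are illustrative assumptions, not the paper's method.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Fast inverse square root (the baseline named in the abstract):
 * bit-level initial guess followed by `steps` Newton-Raphson refinements. */
static float fast_rsqrt(float x, int steps) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);           /* reinterpret float bits as integer */
    i = 0x5f3759df - (i >> 1);          /* magic-constant initial guess */
    float y;
    memcpy(&y, &i, sizeof y);
    for (int k = 0; k < steps; ++k)
        y = y * (1.5f - half * y * y);  /* Newton-Raphson: y <- y*(1.5 - (x/2)*y^2) */
    return y;
}

/* L2-normalize a d-dimensional vector in place: x <- x / ||x||_2. */
static void l2_normalize(float *x, int d, int steps) {
    float sum_sq = 0.0f;
    for (int k = 0; k < d; ++k)
        sum_sq += x[k] * x[k];
    float inv_norm = fast_rsqrt(sum_sq, steps);
    for (int k = 0; k < d; ++k)
        x[k] *= inv_norm;
}

int main(void) {
    float v[4] = {1.0f, 2.0f, 2.0f, 4.0f};  /* ||v||_2 = 5 */
    l2_normalize(v, 4, 2);                  /* two refinement steps */
    printf("%.4f %.4f %.4f %.4f\n", v[0], v[1], v[2], v[3]);
    /* expected: roughly 0.2000 0.4000 0.4000 0.8000 */
    return 0;
}
```

Each Newton-Raphson step roughly doubles the number of correct bits, which is why a handful of iterations suffices for this baseline; per the abstract, IterL2Norm is a different iterative scheme that converges within five steps and is implemented as a hardware macro next to the matrix-multiplication engine.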