PREMA: A Predictive Multi-Task Scheduling Algorithm For Preemptible Neural Processing Units


Detailed Bibliography
Published in: Proceedings - International Symposium on High-Performance Computer Architecture, pp. 220-233
Main authors: Choi, Yujeong; Rhu, Minsoo
Format: Conference paper
Language: English
Publication details: IEEE, 01.02.2020
ISSN:2378-203X
Description
Summary: To amortize cost, cloud vendors providing DNN acceleration as a service to end-users employ consolidation and virtualization to share the underlying resources among multiple DNN service requests. This paper makes a case for a "preemptible" neural processing unit (NPU) and a "predictive" multi-task scheduler to meet the latency demands of high-priority inference while maintaining high throughput. We evaluate both the mechanisms that enable NPUs to be preemptible and the policies that utilize them to meet scheduling objectives. We show that preemptive NPU multi-tasking can achieve an average 7.8×, 1.4×, and 4.8× improvement in latency, throughput, and SLA satisfaction, respectively.
DOI:10.1109/HPCA47549.2020.00027