Performance Analysis and Characterization of Training Deep Learning Models on Mobile Device

Training deep learning models on mobile devices recently becomes possible, because of increasing computation power on mobile hardware and the advantages of enhancing user experiences. Most of the existing work on machine learning at mobile devices is focused on the inference of deep learning models,...

Celý popis

Bibliographic Details
Published in: 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), pp. 506-515
Main Authors: Liu, Jie; Liu, Jiawen; Du, Wan; Li, Dong
Format: Conference Paper
Language: English
Published: IEEE, 01.12.2019
Online Access: Get full text
Description
Summary: Training deep learning models on mobile devices has recently become possible, owing to the increasing computational power of mobile hardware and the user-experience benefits of on-device training. Most existing work on machine learning for mobile devices focuses on the inference of deep learning models, not training. The performance characteristics of training deep learning models on mobile devices remain largely unexplored, although understanding them is critical for designing and implementing deep learning models on mobile devices. In this paper, we perform a variety of experiments on a representative mobile device (the NVIDIA TX2) to study the performance of training deep learning models. We introduce a benchmark suite and a tool to study the performance of training deep learning models on mobile devices, from the perspectives of memory consumption, hardware utilization, and power consumption. The tool correlates performance results with fine-grained operations in deep learning models, making it possible to capture performance variance and problems at a fine granularity. We reveal interesting performance problems and opportunities, including under-utilization of heterogeneous hardware, high energy consumption by memory, and high predictability of workload characteristics. Based on this performance analysis, we suggest promising research directions.
DOI:10.1109/ICPADS47876.2019.00077
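The summary describes a tool that correlates memory and power measurements with fine-grained operations in a training workload. A minimal sketch of that idea, assuming nothing about the paper's actual implementation: a background thread polls a metric source (on a Jetson TX2 this could be a sysfs power-rail file; here it is any caller-supplied callable, a hypothetical `read_metric`), tags each sample with the operation currently marked as running, and aggregates samples per operation afterward.

```python
import threading
import time


class OpSampler:
    """Poll a metric in a background thread and tag each sample with the
    name of the operation currently executing, so that readings (e.g.
    power or memory) can later be grouped per fine-grained operation.

    This is an illustrative sketch, not the tool from the paper; the
    metric source `read_metric` is a placeholder callable."""

    def __init__(self, read_metric, interval_s=0.01):
        self.read_metric = read_metric        # callable returning a number
        self.interval_s = interval_s          # polling period in seconds
        self.samples = []                     # list of (timestamp, op, value)
        self._current_op = "idle"
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def _loop(self):
        # Sample until stop() is called, recording which op was active.
        while not self._stop.is_set():
            self.samples.append(
                (time.time(), self._current_op, self.read_metric())
            )
            time.sleep(self.interval_s)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def mark(self, op_name):
        # Called by the training loop before each operation of interest.
        self._current_op = op_name

    def mean_per_op(self):
        # Average the sampled metric separately for each tagged operation.
        totals, counts = {}, {}
        for _, op, value in self.samples:
            totals[op] = totals.get(op, 0.0) + value
            counts[op] = counts.get(op, 0) + 1
        return {op: totals[op] / counts[op] for op in totals}
```

On real hardware, `read_metric` would read an actual sensor; on a TX2 the power monitors are exposed through sysfs (exact paths vary by board and kernel). The design choice of tagging samples rather than starting and stopping a meter per operation keeps the training loop's overhead to a single attribute assignment per operation.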