Lightning Talk 6: Bringing Together Foundation Models and Edge Devices

Bibliographic Details
Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-2
Main Authors: Eliopoulos, Nick John; Lu, Yung-Hsiang
Format: Conference Proceeding
Language: English
Published: IEEE, 09.07.2023
Description
Summary: Deep learning models have been widely used in natural language processing and computer vision. These models require heavy computation, large memory, and massive amounts of training data. Deep learning models may be deployed on edge devices when transferring data to the cloud is infeasible or undesirable. Running these models on edge devices requires significant improvements in efficiency, achieved by reducing the models' resource demands. Existing methods to improve efficiency often require new architectures and retraining. The recent trend in machine learning is to create general-purpose models (called foundation models); these pre-trained models can be repurposed for different applications. This paper reviews methods for improving the efficiency of machine learning models, the rise of foundation models, and challenges and possible solutions for improving the efficiency of pre-trained models. Future solutions for better efficiency should focus on improving existing trained models with no or limited training.
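
The summary's closing point is that efficiency gains should come from improving already-trained models with no or limited training. As one concrete illustration of that idea, the minimal sketch below applies post-training dynamic quantization in PyTorch, a standard technique that shrinks a trained model without any retraining; the small stand-in model and its layer sizes are assumptions for illustration only, not taken from the paper.

# Minimal sketch: post-training dynamic quantization in PyTorch.
# One illustrative no-retraining way to reduce a trained model's
# resource demands; the toy model below is an assumption, not the
# paper's method.
import torch
import torch.nn as nn

# Stand-in for a component of a pre-trained model (assumed sizes).
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
)
model.eval()

# Convert the Linear layers' weights to int8 ahead of time;
# activations are quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement for inference.
x = torch.randn(1, 768)
with torch.no_grad():
    y = quantized(x)
print(y.shape)  # torch.Size([1, 768])

Dynamic quantization of this kind trades a small amount of accuracy for reduced memory and faster integer arithmetic, which is why it is attractive for edge deployment when retraining a large pre-trained model is impractical.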
DOI:10.1109/DAC56929.2023.10247694