Lightning Talk 6: Bringing Together Foundation Models and Edge Devices
Deep learning models have been widely used in natural language processing and computer vision. These models require heavy computation, large memory, and massive amounts of training data. Deep learning models may be deployed on edge devices when transferring data to the cloud is infeasible or undesirable...
Saved in:
| Published in: | 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-2 |
|---|---|
| Main authors: | , |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 09.07.2023 |
| Subjects: | |
| Online access: | Full text |
| Abstract: | Deep learning models have been widely used in natural language processing and computer vision. These models require heavy computation, large memory, and massive amounts of training data. Deep learning models may be deployed on edge devices when transferring data to the cloud is infeasible or undesirable. Running these models on edge devices requires significant efficiency improvements that reduce the models' resource demands. Existing methods for improving efficiency often require new architectures and retraining. A recent trend in machine learning is the creation of general-purpose models, called foundation models; these pre-trained models can be repurposed for different applications. This paper reviews methods for improving the efficiency of machine learning models, the rise of foundation models, and challenges and possible solutions for improving the efficiency of pre-trained models. Future solutions for better efficiency should focus on improving existing trained models with no or limited training. |
| DOI: | 10.1109/DAC56929.2023.10247694 |