Assessing the Efficacy of TinyML Implementations on STM32 Microcontrollers: A Performance Evaluation Study
| Published in: | International Conference on Advanced Technologies for Signal and Image Processing (Online) Vol. 1; pp. 267 - 271 |
|---|---|
| Main Authors: | , , |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 11.07.2024 |
| ISSN: | 2687-878X |
| Summary: | In recent years, there has been a growing need for efficient deployment of deep learning models on resource-limited edge devices. This trend has prompted the development of NVIDIA's TAO Toolkit and the TensorFlow Lite Micro framework as promising solutions for running deep learning models on STM32 microcontrollers. This paper presents a comparative study of these two tools, aiming to evaluate their performance, resource usage, and ease of deployment. By delving into their respective strengths and limitations, we seek to provide insights into best practices for optimizing deep learning models in edge computing scenarios. Through empirical analysis, we assess the impact of model optimization techniques on classification accuracy, memory usage, and computational efficiency. Our findings reveal trade-offs between model complexity and resource consumption, shedding light on the strengths and limitations of each tool. Additionally, we explore the feasibility of deploying optimized models on STM32 microcontrollers using the STM32Cube.AI Developer Cloud platform. Insights from this study contribute to the advancement of efficient edge AI solutions by providing guidance on selecting appropriate optimization tools for specific deployment scenarios. |
|---|---|
| DOI: | 10.1109/ATSIP62566.2024.10638900 |
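The summary above refers to applying model optimization techniques before deployment on STM32 targets, but the record itself contains no code. As a hedged illustration only, not taken from the paper, the sketch below shows post-training full-integer quantization with the TensorFlow Lite converter, a common preparation step for TensorFlow Lite Micro or STM32Cube.AI deployment; the model path, input shape, and calibration data are placeholder assumptions.

```python
import tensorflow as tf

# Illustrative sketch (not from the paper): post-training full-integer
# quantization, one common optimization applied before deploying a model
# to a microcontroller. "saved_model_dir" and the input shape are
# hypothetical placeholders.

def representative_dataset():
    # Yield a small set of sample inputs so the converter can calibrate
    # activation ranges for full-integer quantization. Real calibration
    # data should come from the training or validation set.
    for _ in range(100):
        yield [tf.random.uniform([1, 32, 32, 3], dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()

# Write the quantized flatbuffer to disk for downstream deployment.
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite flatbuffer is typically embedded as a C array (for example via `xxd -i`) for a TensorFlow Lite Micro application, or uploaded to the STM32Cube.AI Developer Cloud for code generation and on-target benchmarking of memory footprint and inference time.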