Topological Dependencies in Deep Learning for Mobile Edge: Distributed and Collaborative High-Speed Inference

Published in: 2023 Second International Conference on Electronics and Renewable Systems (ICEARS), pp. 1165-1171
Main authors: Abd Algani, Yousef Methkal; Kumar, A Suresh; Ala Walid, Md. Abul; S, Balu; Velayutham, Priya; Sasi Kumar, A.
Format: Conference paper
Language: English
Published: IEEE, 02.03.2023
Description
Summary: Edge computing is now widely deployed across the globe. Although the Internet serves as the foundation for edge computing, its real value emerges when it is combined with collecting data from sensors and deriving relevant information from that data. It is predicted that in the not-too-distant future, most edge devices will be equipped with intelligent systems powered by deep learning. Unfortunately, deep-learning methods require a large quantity of high-quality data to train, and they are costly in terms of processing, memory, and power. Distributed deep neural networks (DDNNs) are therefore proposed over distributed computing hierarchies consisting of the cloud, the fog, and end devices. Although a DDNN can support inference of a DNN in the cloud, it also allows inference to be carried out quickly and accurately at the edge and on end devices by using shallow portions of the neural network. With a scalable cloud-based infrastructure, a DDNN can grow both in the size of its neural network and in the number of users it serves around the world. For DNN applications, the distributed nature of DDNNs yields improvements in sensor fusion, system fault tolerance, and data privacy. To implement a DDNN, the portions of a DNN are first mapped onto a distributed computing structure. By training the components jointly, the devices' requirements for connectivity and energy are reduced, while the model preserves the value of the selected features in the cloud. The resulting system includes built-in support for automatic sensor fusion and fault tolerance. This study demonstrates, as a proof of concept, that a DDNN can exploit the geographical diversity of sensors to improve object-detection accuracy and lower communication cost. The proposed method achieves both rapid convergence and good accuracy through the use of stochastic gradient descent (SGD), which capitalizes on edge collaborative learning.
DOI:10.1109/ICEARS56392.2023.10084935
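As a rough illustration of the DDNN scheme the summary describes (shallow device-side layers with a local exit, a deeper cloud-side portion, and joint training of both exits with SGD), the following is a minimal sketch assuming PyTorch. The layer sizes, the entropy threshold, and the loss weighting are illustrative assumptions, not details taken from the paper.

# Minimal DDNN-style sketch (not the authors' implementation), assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DDNNSketch(nn.Module):
    """Toy DDNN: a shallow device-side branch with a local exit,
    and a deeper cloud-side branch that consumes the device features."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shallow layers intended to run on the end device / edge.
        self.device_block = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.local_exit = nn.Linear(16 * 16 * 16, num_classes)  # assumes 32x32 input
        # Deeper layers intended to run in the cloud.
        self.cloud_block = nn.Sequential(
            nn.Conv2d(16, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.cloud_exit = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        feat = self.device_block(x)
        local_logits = self.local_exit(feat.flatten(1))
        cloud_logits = self.cloud_exit(self.cloud_block(feat).flatten(1))
        return local_logits, cloud_logits


def normalized_entropy(logits):
    """Confidence measure for the exit decision: low entropy => exit locally."""
    p = F.softmax(logits, dim=1)
    ent = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
    return ent / torch.log(torch.tensor(float(logits.size(1))))


def infer(model, x, threshold=0.5):
    """Return local predictions when confident; otherwise defer to the cloud exit."""
    model.eval()
    with torch.no_grad():
        local_logits, cloud_logits = model(x)
        use_local = normalized_entropy(local_logits) < threshold
        preds = torch.where(use_local,
                            local_logits.argmax(dim=1),
                            cloud_logits.argmax(dim=1))
    return preds, use_local


def train_step(model, optimizer, x, y, local_weight=0.3):
    """Joint training of both exits with SGD, so the device features stay useful
    both for the local classifier and for the cloud-side model."""
    model.train()
    local_logits, cloud_logits = model(x)
    loss = (local_weight * F.cross_entropy(local_logits, y)
            + (1.0 - local_weight) * F.cross_entropy(cloud_logits, y))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = DDNNSketch()
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print("loss:", train_step(model, opt, x, y))
    preds, local_mask = infer(model, x)
    print("locally resolved:", int(local_mask.sum()), "of", len(x))

Thresholding the local exit's normalized entropy is one common way to decide which samples can be resolved on the device and which must be offloaded, which is how a DDNN of this kind reduces communication while keeping cloud-level accuracy available for hard inputs.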