Cost-aware neural network splitting and dynamic rescheduling for edge intelligence
2023 (English). In: EdgeSys '23: Proceedings of the 6th International Workshop on Edge Systems, Analytics and Networking, ACM Digital Library, 2023, p. 42-47. Conference paper, Published paper (Refereed)
Abstract [en]
With the rise of IoT devices and the need for intelligent applications, inference tasks are often offloaded to the cloud because of the limited computation capacity of end devices. Yet requests to the cloud are costly in terms of latency, so a shift of computation from the cloud to the network's edge is unavoidable. This shift, called edge intelligence, promises lower latency, among other advantages. However, some algorithms, such as deep neural networks (DNNs), are computationally intensive even for local edge servers (ES). To keep latency low, such DNNs can be split into two parts distributed between the ES and the cloud. We present a dynamic scheduling algorithm that takes real-time parameters such as the clock speed of the ES, bandwidth, and latency into account and predicts the latency-optimal splitting point. Furthermore, we estimate the overall costs for the ES and the cloud at run time and integrate them into our prediction and decision models. The result is a cost-aware prediction of the splitting point that can be tuned with a parameter toward faster response or lower costs.
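The tunable trade-off described in the abstract can be sketched as a small optimization over candidate split points. This is an illustrative assumption, not the authors' implementation: all function and parameter names (`best_split`, `alpha`, per-layer timings, transfer sizes, cost rates) are hypothetical, and the objective `alpha * latency + (1 - alpha) * cost` is one plausible way to realize a parameter that steers between faster response and lower cost.

```python
# Hypothetical sketch of cost-aware DNN splitting-point selection.
# Layers 0..s-1 run on the edge server (ES); layers s..n-1 run in the cloud.
# s == n means fully on the ES, s == 0 means fully in the cloud.
def best_split(es_ms, cloud_ms, out_kb, bw_kbps, rtt_ms,
               es_cost, cloud_cost, alpha):
    """Return the split index s minimizing alpha*latency + (1-alpha)*cost.

    es_ms[i]    -- assumed per-layer compute time on the ES (ms)
    cloud_ms[i] -- assumed per-layer compute time in the cloud (ms)
    out_kb[s]   -- size of the intermediate activation sent when
                   splitting before layer s (kB)
    bw_kbps     -- ES-to-cloud bandwidth (kB/s); rtt_ms -- round trip (ms)
    es_cost, cloud_cost -- monetary cost per ms of compute on each side
    alpha in [0, 1] -- 1.0 favors latency, 0.0 favors cost
    """
    n = len(es_ms)
    best_s, best_obj = None, float("inf")
    for s in range(n + 1):
        edge_time = sum(es_ms[:s])
        cloud_time = sum(cloud_ms[s:])
        # Transfer of the intermediate activation; no transfer if fully edge.
        transfer = 0.0 if s == n else rtt_ms + out_kb[s] / bw_kbps * 1000
        latency = edge_time + transfer + cloud_time
        cost = edge_time * es_cost + cloud_time * cloud_cost
        obj = alpha * latency + (1 - alpha) * cost
        if obj < best_obj:
            best_s, best_obj = s, obj
    return best_s, best_obj
```

With `alpha = 1.0` the search reduces to the latency-only prediction; lowering `alpha` shifts the chosen split point toward the cheaper side, mirroring the tuning parameter mentioned in the abstract.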
Place, publisher, year, edition, pages
ACM Digital Library, 2023. p. 42-47
Keywords [en]
cost-awareness, DNN splitting, edge computing, edge intelligence
National Category
Computer Sciences; Computer Systems
Identifiers
URN: urn:nbn:se:umu:diva-209279
DOI: 10.1145/3578354.3592871
ISI: 001124802400008
Scopus ID: 2-s2.0-85159359631
ISBN: 9798400700828 (electronic)
OAI: oai:DiVA.org:umu-209279
DiVA, id: diva2:1764377
Conference
6th International Workshop on Edge Systems, Analytics and Networking, EdgeSys 2023, in conjunction with ACM EuroSys 2023, Rome, Italy, May 8, 2023
Available from: 2023-06-08 Created: 2023-06-08 Last updated: 2025-04-24 Bibliographically approved