RAVAS: interference-aware model selection and resource allocation for live edge video analytics
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-9249-1633
Chalmers University of Technology, Gothenburg, Sweden.
Ericsson Research, Stockholm, Sweden.
Ericsson Research, Stockholm, Sweden.
2023 (English). In: 2023 IEEE/ACM Symposium on Edge Computing (SEC): Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 27-39. Conference paper, Published paper (Refereed)
Abstract [en]

Numerous edge applications that rely on video analytics demand precise, low-latency processing of multiple video streams from cameras. When these cameras are mobile, such as when mounted on a car or a robot, the processing load on the shared edge GPU can vary considerably. Provisioning the edge with GPUs for the worst-case load can be expensive and, for many applications, not feasible. In this paper, we introduce RAVAS, a Real-time Adaptive stream Video Analytics System that enables efficient edge GPU sharing for processing streams from various mobile cameras. RAVAS uses Q-Learning to choose between a set of Deep Neural Network (DNN) models with varying accuracy and processing requirements based on the current GPU utilization and workload. RAVAS employs an innovative resource allocation strategy to mitigate interference during concurrent GPU execution. Compared to state-of-the-art approaches, our results show that RAVAS incurs 57% less compute overhead, achieves 41% improvement in latency, and 43% savings in total GPU usage for a single video stream. Processing multiple concurrent video streams results in up to 99% and 40% reductions in latency and overall GPU usage, respectively, while meeting the accuracy constraints.
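
The core mechanism described in the abstract is a Q-learning agent that selects among DNN models of varying accuracy and cost based on the current GPU utilization and workload. The sketch below is a minimal illustration of that idea only, not the paper's implementation: the model list, state discretization, and reward shape are assumptions made for the example.

```python
import random
from collections import defaultdict

# Illustrative Q-learning model selector (assumed design, not RAVAS itself):
# pick a DNN variant given the observed GPU load, trading accuracy vs. latency.
MODELS = ["dnn_small", "dnn_medium", "dnn_large"]   # hypothetical accuracy/cost ladder
GPU_BINS = [0.25, 0.5, 0.75, 1.0]                   # discretized GPU utilization levels

def to_state(gpu_util: float) -> int:
    """Map continuous GPU utilization in [0, 1] to a discrete state index."""
    for i, upper in enumerate(GPU_BINS):
        if gpu_util <= upper:
            return i
    return len(GPU_BINS) - 1

class ModelSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * len(MODELS))  # Q[state][action]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, gpu_util: float) -> int:
        """Epsilon-greedy choice of a model index for the current GPU load."""
        state = to_state(gpu_util)
        if random.random() < self.epsilon:
            return random.randrange(len(MODELS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, gpu_util, action, accuracy, latency, latency_slo, next_gpu_util):
        """Standard Q-learning update; the reward rewards accuracy and
        penalizes exceeding the latency target (an assumed reward shape)."""
        reward = accuracy - max(0.0, latency - latency_slo)
        s, s_next = to_state(gpu_util), to_state(next_gpu_util)
        best_next = max(self.q[s_next])
        self.q[s][action] += self.alpha * (reward + self.gamma * best_next - self.q[s][action])
```

This toy agent only reacts to GPU utilization; per the abstract, RAVAS additionally applies a resource allocation strategy to mitigate interference when multiple streams execute concurrently on the shared GPU, which this sketch does not model.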

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023. p. 27-39
Keywords [en]
Edge Video Analytics, Model Selection, Resource Allocation, Interference-aware GPU Multiplexing
National Category
Computer Systems
Research subject
Computer Systems
Identifiers
URN: urn:nbn:se:umu:diva-220744
DOI: 10.1145/3583740.3628443
ISI: 001164050000003
Scopus ID: 2-s2.0-85186111633
ISBN: 979-8-4007-0123-8 (print)
OAI: oai:DiVA.org:umu-220744
DiVA, id: diva2:1836734
Conference
2023 IEEE/ACM Symposium on Edge Computing (SEC), Wilmington, USA, December 6-9, 2023
Available from: 2024-02-11. Created: 2024-02-11. Last updated: 2024-04-08. Bibliographically approved
In thesis
1. Edge orchestration for latency-sensitive applications
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Orkestrering av distribuerade resurser för latenskänsliga applikationer (Orchestration of distributed resources for latency-sensitive applications)
Abstract [en]

The emerging edge computing infrastructure provides distributed and heterogeneous resources closer to where data is generated and where end-users are located, thereby significantly reducing latency. With the recent advances in telecommunication systems, software architecture, and machine learning, there is a noticeable increase in applications that require processing times within tight latency constraints, i.e., latency-sensitive applications. For instance, numerous video analytics applications, such as traffic control systems, necessitate real-time processing capabilities. Orchestrating such applications at the edge offers numerous advantages, including lower latency, optimized bandwidth utilization, and enhanced scalability. However, despite its potential, effectively managing such latency-sensitive applications at the edge poses several challenges, such as constrained compute resources, that hold back the full promise of edge computing.

This thesis proposes approaches to efficiently deploy latency-sensitive applications on the edge infrastructure. It partly addresses general applications with microservice architectures and partly addresses the increasingly important video analytics applications for the edge. To do so, this thesis proposes various application- and system-level solutions aiming to efficiently utilize constrained compute capacity on the edge while meeting prescribed latency constraints. These solutions primarily focus on effective resource management approaches and optimizing incoming workload inputs, considering the constrained compute capacity of edge resources. Additionally, the thesis explores the synergy effects of employing both application- and system-level resource optimization approaches together.

The results demonstrate the effectiveness of the proposed solutions in enhancing the utilization of edge resources for latency-sensitive applications while adhering to application constraints. The proposed resource management solutions, alongside application-level optimization techniques, significantly improve resource efficiency while satisfying application requirements. Our results show that our solutions for microservice architectures significantly improve end-to-end latency by up to 800% while minimizing edge resource usage. Additionally, the results indicate that our application- and system-level optimizations for orchestrating edge resources for video analytics applications can increase the overall throughput by up to 60%.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2024. p. 46
Series
UMINF, ISSN 0348-0542 ; 24.05
Keywords
Edge Computing, Resource Management, Latency-Sensitive Applications, Edge Video Analytics
National Category
Computer Sciences
Research subject
Computer Science; Computer Systems
Identifiers
urn:nbn:se:umu:diva-223021 (URN)
978-91-8070-350-5 (ISBN)
978-91-8070-351-2 (ISBN)
Public defence
2024-04-29, Hörsal UB.A.240 - Lindellhallen 4, 13:00 (English)
Opponent
Supervisors
Note

Incorrect date of publication on the posting sheet.

In publication: UMINF 24.04

Available from: 2024-04-08. Created: 2024-04-08. Last updated: 2024-04-23. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Rahmanian, Ali; Elmroth, Erik

Search in DiVA

By author/editor
Rahmanian, Ali; Elmroth, Erik
By organisation
Department of Computing Science
Computer Systems

Search outside of DiVA

Google
Google Scholar
