Why Cloud Applications Are not Ready for the Edge (yet)
Nguyen, Chanh Le Tan, Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9156-3364
Mehta, Amardeep, Umeå University, Faculty of Science and Technology, Department of Computing Science.
Klein, Cristian, Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0003-0106-3049
Elmroth, Erik, Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-2633-6798
2019 (English). In: Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, IEEE, 2019, pp. 250-263. Conference paper, Published paper (Other academic)
Abstract [en]

Mobile Edge Clouds (MECs) are distributed platforms in which distant datacenters are complemented with computing and storage capacity located at the edge of the network. Their wide resource distribution enables MECs to meet the need for low latency and high bandwidth, offering an improved user experience.

As modern cloud applications are increasingly architected as collections of small, independently deployable services, they can be flexibly deployed in various configurations that combine resources from both centralized datacenters and edge locations. In principle, such applications should therefore be well placed to exploit the advantages of MECs so as to reduce service response times.

In this paper, we quantify the benefits of deploying such cloud micro-service applications on MECs. Using two popular benchmarks, we show that, against conventional wisdom, end-to-end latency does not improve significantly even when most application services are deployed in the edge location. We develop a profiler to better understand this phenomenon, allowing us to formulate recommendations for adapting applications to MECs. Further, by quantifying the gains of those recommendations, we show that the performance of an application can be made to reach the ideal scenario, in which the latency between an edge datacenter and a remote datacenter has no impact on application performance.
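
The effect described above can be illustrated with a toy latency model (a hypothetical sketch with assumed round-trip times, not the paper's profiler or benchmarks): every call that crosses from the edge site back to the remote datacenter pays the full inter-site round trip, so a single cloud-resident dependency on the request path can dominate end-to-end latency no matter how many services run at the edge.

# Hypothetical sketch (not the paper's profiler): estimate the end-to-end latency of a
# request traversing a chain of microservices, where every hop between sites pays the
# corresponding round-trip time.
EDGE_RTT_MS = 2    # assumed round trip within the edge site
CLOUD_RTT_MS = 40  # assumed round trip between the edge site and the remote datacenter

def end_to_end_latency(placement, service_times_ms):
    """placement: 'edge' or 'cloud' per service, in call order.
    service_times_ms: per-service processing time in milliseconds."""
    latency = 0.0
    previous_site = "edge"  # the user enters the application through the edge site
    for site, processing in zip(placement, service_times_ms):
        hop = EDGE_RTT_MS if site == previous_site else CLOUD_RTT_MS
        latency += hop + processing
        previous_site = site
    return latency

# Even with most services at the edge, one chatty dependency left in the cloud keeps
# adding CLOUD_RTT_MS per call, which is the effect the paper quantifies.
print(end_to_end_latency(["edge", "edge", "cloud", "edge"], [5, 3, 8, 2]))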

This work thus presents ways of adapting cloud-native applications to take advantage of MECs and provides guidance for developing MEC-native applications. We believe that both these elements are necessary to drive MEC adoption.

Place, publisher, year, edition, pages
IEEE, 2019. pp. 250-263
Keywords [en]
Mobile Edge Clouds, Edge Latency, Mobile Application Development, Micro-service, Profiling
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-162930, DOI: 10.1145/3318216.3363298, ISI: 000680020300019, Scopus ID: 2-s2.0-85076258710, ISBN: 978-1-4503-6733-2 (print), OAI: oai:DiVA.org:umu-162930, DiVA id: diva2:1347841
Conference
The Fourth ACM/IEEE Symposium on Edge Computing, Washington DC, November 7–9, 2019
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP). Available from: 2019-09-02 Created: 2019-09-02 Last updated: 2023-09-05 Bibliographically approved
Part of thesis
1. Autonomous resource management for Mobile Edge Clouds
2019 (English). Licentiate thesis, compilation (Other academic)
Abstract [en]

Mobile Edge Clouds (MECs) are platforms that complement today's centralized clouds by distributing computing and storage capacity across the edge of the network, in Edge Data Centers (EDCs) located in close proximity to end-users. They are particularly attractive because of their potential benefits for the delivery of bandwidth-hungry, latency-critical applications. However, the control of resource allocation and provisioning in MECs is challenging because of the heterogeneous and distributed resource capacity of EDCs, the need for flexibility in application deployment, and the dynamic nature of mobile users. To realize the potential of MECs, efficient resource management systems that can deal with these challenges must be designed and built.

This thesis focuses on two problems. The first relates to the fact that it is unrealistic to expect MECs to become successful based solely on MEC-native applications. Thus, to spur the development of MECs, we investigated the benefits MECs can offer to non-MEC-native applications, i.e., applications not specifically engineered for MECs. One class of popular applications that may benefit strongly from deployment on MECs is cloud-native applications, particularly microservice-based applications with high deployment flexibility. We therefore quantified the performance of cloud-native applications deployed using resources from both cloud datacenters and edge locations. We also developed a network communication profiling tool to identify the aspects of these applications that reduce the benefits they derive from deployment on MECs, and proposed design improvements that would allow such applications to better exploit MECs' capabilities.

The second problem examined in this thesis relates to the dynamic nature of resource demand in MECs. To overcome the challenges arising from this dynamicity, we used statistical time series models and machine learning techniques to develop two workload prediction models for EDCs that account for both user mobility and the correlation of workload changes among EDCs in close physical proximity.
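
As a loose illustration of the prediction idea (a minimal sketch with assumed inputs, not the statistical or machine-learning models developed in the thesis), the next workload of an EDC can be estimated from its own recent samples blended with the latest load of nearby EDCs, reflecting that user mobility correlates demand across neighboring sites.

import numpy as np

# Minimal sketch under assumed inputs; the thesis develops statistical time series and
# machine learning models instead. The prediction blends the EDC's own short-term trend
# with the latest load observed at physically nearby EDCs.
def predict_next_load(own_history, neighbor_histories, alpha=0.7, lags=3):
    own_trend = np.mean(own_history[-lags:])                    # local short-term average
    neighbor_now = np.mean([h[-1] for h in neighbor_histories]) # spatial correlation term
    return alpha * own_trend + (1 - alpha) * neighbor_now

edc_load = [120, 130, 150, 170]      # requests/s observed at the EDC of interest
nearby = [[90, 110, 140, 160],       # users moving in from adjacent EDCs
          [60, 70, 95, 120]]
print(predict_next_load(edc_load, nearby))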

Place, publisher, year, edition, pages
Umeå: Department of Computing Science, Umeå University, 2019. p. 31
Series
Report / UMINF, ISSN 0348-0542 ; 19.07
National subject category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-162924 (URN), 9789178551163 (ISBN)
Presentation
2019-09-19, MA121, MIT building, Umeå University, Umeå, 13:15
Available from: 2019-09-02 Created: 2019-09-02 Last updated: 2021-03-18 Bibliographically approved
2. Location-aware resource allocation in mobile edge clouds
2021 (English). Doctoral thesis, compilation (Other academic)
Abstract [en]

Over the last decade, cloud computing has realized the long-held dream of computing as a utility, in which computational and storage services are made available via the Internet to anyone at any time and from anywhere. This has transformed Information Technology (IT) and given rise to new ways of designing and purchasing hardware and software. However, the rapid development of the Internet of Things (IoT) and mobile technology has brought a new wave of disruptive applications and services whose performance requirements are stretching the limits of current cloud computing systems and platforms. In particular, novel large-scale, mission-critical IoT systems and latency-intolerant applications strictly require very low latency and strong guarantees of privacy, and can generate massive amounts of data that are only of local interest. These requirements are not readily satisfied using modern application deployment strategies that rely on resources from distant large cloud datacenters, because such strategies easily cause network congestion and high latency in service delivery. This has provoked a paradigm shift leading to the emergence of new distributed computing infrastructures known as Mobile Edge Clouds (MECs), in which resource capabilities are widely distributed at the edge of the network, in close proximity to end-users. Experimental studies have validated and quantified many benefits of MECs, which include considerable improvements in response times and enormous reductions in ingress bandwidth demand. However, MECs must cope with several challenges not commonly encountered in traditional cloud systems, including user mobility, hardware heterogeneity, and considerable flexibility in terms of where computing capacity can be used. This makes it especially difficult to analyze, predict, and control resource usage and allocation so as to minimize cost and maximize performance while delivering the expected end-user Quality of Service (QoS). Realizing the potential of MECs will thus require the design and development of efficient resource allocation systems that take these factors into consideration.

Since the introduction of the MEC concept, the performance benefits achieved by running MEC-native applications (i.e., applications engineered specifically for MECs) on MECs have been clearly demonstrated. However, the benefits of MECs for non-MEC-native applications (i.e., applications not specifically engineered for MECs) are still questioned. This is a fundamental issue that must be explored because it will affect the incentives for service providers and application developers to invest in MECs. To spur the development of MECs, the first part of this thesis presents an extensive investigation of the benefits that MECs can offer to non-MEC-native applications. One class of non-MEC-native applications that could potentially benefit significantly from deployment on an MEC is cloud-native applications, particularly micro-service-based applications with high deployment flexibility. We therefore quantitatively compared the performance of cloud-native applications deployed using resources from cloud datacenters and edge locations. We then developed a network communication profiling tool to identify aspects of these applications that reduce the benefits derived from deployment on MECs, and proposed design improvements that would allow such applications to better exploit MECs' capabilities.

The second part of this thesis addresses problems related to resource allocation in highly distributed MECs. First, to overcome challenges arising from the dynamic nature of resource demand in MECs, we used statistical time series models and machine learning techniques to develop two location-aware workload prediction models for Edge Data Centers (EDCs) that account for both user mobility and the correlation of workload changes among EDCs in close physical proximity. These models were then utilized to develop an elasticity controller for MECs. In essence, the controller helps MECs to perform resource allocation, i.e., to answer the intertwined questions of what and how many resources should be allocated and when and where they should be deployed.
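
A rough sense of how such predictions could feed an elasticity controller (an illustrative sketch with assumed capacities, not the controller designed in the thesis): given a predicted load and the per-replica capacity available at an EDC, the controller decides how many replicas to run locally and spills the remainder to a neighboring site or the remote cloud.

import math

# Illustrative sketch with assumed numbers, not the thesis's elasticity controller:
# turn a predicted workload into a replica count at an EDC and spill any excess
# demand to a fallback site (a neighboring EDC or the remote cloud).
def scale_decision(predicted_load_rps, capacity_per_replica_rps, max_replicas_at_edc):
    needed = math.ceil(predicted_load_rps / capacity_per_replica_rps)
    local = min(needed, max_replicas_at_edc)
    spilled = max(0, needed - max_replicas_at_edc)  # replicas placed elsewhere
    return {"replicas_at_edc": local, "replicas_spilled": spilled}

print(scale_decision(predicted_load_rps=180, capacity_per_replica_rps=50, max_replicas_at_edc=3))
# -> {'replicas_at_edc': 3, 'replicas_spilled': 1}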

The third part of the thesis focuses on problems relating to the real-time placement of stateful applications on MECs. Specifically, it examines the questions of where to place applications so as to minimize total operating costs while delivering the required end-user QoS and whether the requested applications should be migrated to follow the user's movements. Such questions are easy to pose but intrinsically hard to answer due to the scale and complexity of MEC infrastructures and the stochastic nature of user mobility. To this end, we first thoroughly modeled the workloads, stateful applications, and infrastructures to be expected in MECs. We then formulated the various costs associated with operating applications, namely the resource cost, migration cost, and service quality degradation cost. Based on our model, we proposed two online application placement algorithms that take these factors into account to minimize the total cost of operating the application.
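
To make the cost trade-off concrete, the following is a hedged sketch with invented prices, latencies, and site names (not the thesis's placement algorithms): the cost of running an application at a candidate site combines a resource price, a one-off migration penalty if the application must move, and a service-quality degradation term that grows once latency exceeds the application's budget; a greedy rule then picks the cheapest site for the next interval.

# Hedged sketch with made-up weights and sites; the thesis proposes its own online
# placement algorithms over a much richer model of workloads and infrastructure.
def total_cost(site, user_site, current_site, resource_price, rtt_ms,
               latency_budget_ms, migration_penalty):
    resource_cost = resource_price[site]
    migration_cost = migration_penalty if site != current_site else 0.0
    # service quality degrades once the observed latency exceeds the budget
    degradation_cost = max(0.0, rtt_ms[(user_site, site)] - latency_budget_ms)
    return resource_cost + migration_cost + degradation_cost

def place(user_site, current_site, sites, **params):
    # greedily pick the site that minimizes the combined cost for the next interval
    return min(sites, key=lambda s: total_cost(s, user_site, current_site, **params))

print(place("edge-1", "cloud", ["edge-1", "edge-2", "cloud"],
            resource_price={"edge-1": 3.0, "edge-2": 3.0, "cloud": 1.0},
            rtt_ms={("edge-1", "edge-1"): 2, ("edge-1", "edge-2"): 8, ("edge-1", "cloud"): 40},
            latency_budget_ms=10, migration_penalty=5.0))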

The methods and algorithms proposed in this thesis were evaluated by implementing prototypes on simulated testbeds and conducting experiments using workloads based on real mobility traces. These evaluations showed that the proposed approaches outperformed alternative state-of-the-art approaches and could thus help improve the efficiency of resource allocation in MECs.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2021. p. 50
Series
Report / UMINF, ISSN 0348-0542 ; 21.01
Keywords
Mobile Edge Clouds, Resource Allocation, Quality of Service, Application Placement, Workload Prediction
National subject category
Computer Sciences
Research subject
computer engineering; computing science
Identifiers
urn:nbn:se:umu:diva-178926 (URN), 978-91-7855-467-6 (ISBN), 978-91-7855-466-9 (ISBN)
Public defence
2021-02-17, Aula Biologica, Umeå University, 901 87 Umeå, Umeå, 13:00 (English)
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2021-01-27 Created: 2021-01-21 Last updated: 2021-06-11 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text, Scopus
