Mehta, Amardeep
Publications (10 of 11)
Nguyen, C. L., Mehta, A., Klein, C. & Elmroth, E. (2019). Why Cloud Applications Are not Ready for the Edge (yet). In: 4th ACM/IEEE Symposium on Edge Computing. Paper presented at 4th ACM/IEEE Symposium on Edge Computing (SEC 2019). IEEE
Why Cloud Applications Are not Ready for the Edge (yet)
2019 (English). In: 4th ACM/IEEE Symposium on Edge Computing, IEEE, 2019. Conference paper, Published paper (Other academic)
Abstract [en]

Mobile Edge Clouds (MECs) are distributed platforms in which distant data centers are complemented with computing and storage capacity located at the edge of the network. With such a high degree of resource distribution, MECs can potentially fulfill the need for low latency and high bandwidth to offer an improved user experience.

As modern cloud applications are increasingly architected as collections of small, independently deployable services, they can be flexibly deployed in various configurations that combine resources from both centralized datacenters and edge locations. One might therefore expect them to be well-placed to benefit from MECs in order to reduce service response time. In this paper, we quantify the benefits of deploying such cloud micro-service applications on MECs. Using two popular benchmarks, we show that, against conventional wisdom, end-to-end latency does not improve significantly even when most application services are deployed in the edge location. We developed a profiler to better understand this phenomenon, allowing us to develop recommendations for adapting applications to MECs. Further, by quantifying the gains of those recommendations, we show that the performance of an application can be made to approach the ideal scenario, in which the latency between an edge datacenter and a remote datacenter has no impact on application performance.

This work thus presents ways of adapting cloud-native applications to take advantage of MECs and provides guidance for developing MEC-native applications. We believe that both these elements are necessary to drive MEC adoption.

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
Mobile Edge Clouds, Edge Latency, Mobile Application Development, Micro-service, Profiling
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-162930 (URN)
Conference
4th ACM/IEEE Symposium on Edge Computing (SEC 2019)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-09-02 Created: 2019-09-02 Last updated: 2019-10-29
Mehta, A. & Elmroth, E. (2018). Distributed Cost-Optimized Placement for Latency-Critical Applications in Heterogeneous Environments. In: Proceedings of the IEEE 15th International Conference on Autonomic Computing (ICAC). Paper presented at 2018 IEEE International Conference on Autonomic Computing, Trento, Italy, September 3-7, 2018 (pp. 121-130). IEEE Computer Society
Distributed Cost-Optimized Placement for Latency-Critical Applications in Heterogeneous Environments
2018 (English). In: Proceedings of the IEEE 15th International Conference on Autonomic Computing (ICAC), IEEE Computer Society, 2018, p. 121-130. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile Edge Clouds (MECs) with 5G will create new opportunities to develop latency-critical applications in domains such as intelligent transportation systems, process automation, and smart grids. However, it is not clear how one can cost-efficiently deploy and manage a large number of such applications given the heterogeneity of devices, application performance requirements, and workloads. This work explores cost and performance dynamics for IoT applications, and proposes distributed algorithms for the automatic deployment of IoT applications in heterogeneous environments. The placement algorithms were evaluated with respect to metrics including the number of required runtimes, the applications' slowdown, and the number of iterations used to place an application. Iterative search-based distributed algorithms such as Size Interval Actor Assignment in Groups (SIAA G) outperformed random and bin-packing algorithms, and are therefore recommended for this purpose. The Size Interval Actor Assignment in Groups at Least Utilized Runtime (SIAA G LUR) algorithm is also recommended when minimizing the number of iterations is important. The tradeoff of using the SIAA G algorithms is a few extra runtimes compared to bin-packing algorithms.
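The abstract does not spell out the assignment rule, but the family it names (size-interval assignment with a least-utilized-runtime pick) can be sketched as follows. The function names, interval boundaries, and capacity model below are illustrative assumptions, not the paper's definitions:

```python
from bisect import bisect_right

def interval_group(size, boundaries):
    """Index of the size interval that an actor of this size falls into."""
    return bisect_right(boundaries, size)

def assign_actor(size, groups, boundaries, capacity):
    """Place an actor on the least-utilized runtime of its size group,
    opening a new runtime when no runtime in the group has headroom."""
    g = interval_group(size, boundaries)
    runtimes = groups.setdefault(g, [])
    # runtimes in this group that still have room for the actor
    candidates = [r for r in runtimes if r["load"] + size <= capacity]
    if candidates:
        # "Least Utilized Runtime" pick within the group
        target = min(candidates, key=lambda r: r["load"])
    else:
        target = {"load": 0.0}
        runtimes.append(target)
    target["load"] += size
    return g, target
```

Binning actors by size keeps large actors from crowding out small ones on shared runtimes, which is the classic rationale behind size-interval task assignment; the least-loaded pick corresponds to the "Least Utilized Runtime" variant named above.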

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
Proceedings of the International Conference on Autonomic Computing, ISSN 2474-0764
Keywords
Mobile Edge Clouds, Fog Computing, IoTs, Distributed algorithms
National Category
Computer Systems
Identifiers
urn:nbn:se:umu:diva-151457 (URN)
10.1109/ICAC.2018.00022 (DOI)
978-1-5386-5139-1 (ISBN)
Conference
2018 IEEE International Conference on Autonomic Computing, Trento, Italy, September 3-7, 2018
Available from: 2018-09-04 Created: 2018-09-04 Last updated: 2019-06-26. Bibliographically approved
Mehta, A. (2018). Resource allocation for Mobile Edge Clouds. (Doctoral dissertation). Umeå: Umeå University
Resource allocation for Mobile Edge Clouds
2018 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Recent advances in Internet technologies have led to the proliferation of new distributed applications in the transportation, healthcare, mining, security, and entertainment sectors. These emerging applications are bandwidth-hungry and latency-critical, often serve a user population contained within a limited geographical area, and require high availability, low jitter, and security.

One way of addressing the challenges arising from these emerging applications is to move computing capabilities closer to the end-users, at the logical edge of the network, in order to improve the performance, operating cost, and reliability of applications and services. These new distributed resources and software stacks, situated on the path between today's centralized data centers and devices in close proximity to the last-mile network, are known as Mobile Edge Clouds (MECs). Distributed MECs provide new opportunities for managing compute resources and allocating applications to those resources so as to minimize the overall cost of application deployment while satisfying end-user demands in terms of application performance.

However, these opportunities also present three significant challenges. The first challenge is where, and in what quantity, computing resources should be deployed along the path between today's centralized data centers and end-devices for cost-optimal operation. The second challenge is where, and how many, resources should be allocated to which applications to meet the applications' performance requirements while minimizing operational costs. The third challenge is how to provide a framework for application deployment on resource-constrained IoT devices in heterogeneous environments.

This thesis addresses the above challenges by proposing several models, algorithms, and simulation and software frameworks. In the first part, we investigate methods for early detection of short-lived, significant increases in demand for computing resources (also called spikes), which may cause significant degradation in the performance of a distributed application. We make use of adaptive signal processing techniques for early detection of spikes, and consider trade-offs between parameters such as the time taken to detect a spike and the number of false spikes that are detected. In the second part, we study the resource planning problem, analyzing the cost benefits of adding new compute resources based on the performance requirements of emerging applications. In the third part, we study the problem of allocating resources to applications by formulating it as an optimization problem, where the objective is to minimize overall operational cost while meeting the performance targets of applications. We also propose a hierarchical scheduling framework and policies for allocating resources to applications based on performance metrics of both applications and compute resources. In the last part, we propose Calvin Constrained, a framework for resource-constrained devices, which extends the Calvin framework and supports a limited but essential subset of the reference framework's features, taking into account the limited memory and processing power of resource-constrained IoT devices.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2018. p. 30
Series
Report / UMINF, ISSN 0348-0542 ; 18.10
Keywords
Mobile Edge Clouds, Edge/Fog Computing, IoTs, Distributed Resource Allocation
National Category
Computer Systems
Research subject
Computer Science; Computer Systems
Identifiers
urn:nbn:se:umu:diva-151480 (URN)
978-91-7601-925-2 (ISBN)
Public defence
2018-10-01, MA121, MIT-huset, Umeå, 13:30 (English)
Available from: 2018-09-10 Created: 2018-09-04 Last updated: 2018-09-07. Bibliographically approved
Mehta, A., Bayuh Lakew, E., Tordsson, J. & Elmroth, E. (2018). Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds. Umeå: Umeå universitet
Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
2018 (English). Report (Other academic)
Abstract [en]

Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The higher-level scheduler runs periodically, monitors system state and the performance of applications, and decides whether to admit new applications and migrate existing ones. The lower-level scheduler decides which application gets the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications. The Application Happiness metric is based on a single application's performance and SLOs. The Runtime Happiness metric is based on the Application Happiness of the applications the runtime is hosting. These metrics may be used for decision-making by the scheduler, rather than runtime utilization, for example.

We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective for the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes required, the impact on application performance, and the utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness combined with the low-level policy based on Application Happiness outperforms the other policies, including the bin-packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin-packing strategy and increases runtime utilization by up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.
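The abstract does not give closed-form definitions of the two metrics, so the sketch below is one plausible reading, assuming latency SLOs: Application Happiness is 1.0 while the SLO is met and decays as measured latency exceeds it, and Runtime Happiness averages it over the hosted applications. All names and formulas here are illustrative assumptions, not the report's definitions:

```python
def application_happiness(slo_latency, measured_latency):
    """1.0 while the application meets its latency SLO,
    decaying towards 0 as measured latency exceeds the target."""
    return min(1.0, slo_latency / measured_latency)

def runtime_happiness(apps):
    """Aggregate happiness of one runtime.
    apps: list of (slo_latency, measured_latency) pairs it hosts."""
    if not apps:
        return 1.0  # an empty runtime trivially meets all SLOs
    return sum(application_happiness(s, m) for s, m in apps) / len(apps)
```

A scheduler can then act on these values instead of raw utilization, for example migrating applications away from the runtime with the lowest Runtime Happiness.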

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2018. p. 28
Series
Report / UMINF, ISSN 0348-0542 ; 18.11
Keywords
Edge/Fog Computing, Hierarchical Resource Allocation, IoTs, Mobile Edge Clouds
National Category
Computer Systems
Research subject
Computer Systems
Identifiers
urn:nbn:se:umu:diva-151455 (URN)
Available from: 2018-09-04 Created: 2018-09-04 Last updated: 2018-09-07. Bibliographically approved
Mehta, A., Bayuh Lakew, E., Tordsson, J. & Elmroth, E. (2018). Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds. In: 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC). Paper presented at 37th IEEE International Performance Computing and Communications Conference (IPCCC), Orlando, FL, November 17-19, 2018. IEEE
Utility-based Allocation of Industrial IoT Applications in Mobile Edge Clouds
2018 (English). In: 2018 IEEE 37th International Performance Computing and Communications Conference (IPCCC), IEEE, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile Edge Clouds (MECs) create new opportunities and challenges in terms of scheduling and running applications that have a wide range of latency requirements, such as intelligent transportation systems, process automation, and smart grids. We propose a two-tier scheduler for allocating runtime resources to Industrial Internet of Things (IIoT) applications in MECs. The higher-level scheduler runs periodically, monitors system state and the performance of applications, and decides whether to admit new applications and migrate existing ones. The lower-level scheduler decides which application gets the runtime resource next. We use performance-based metrics that tell the extent to which the runtimes are meeting the Service Level Objectives (SLOs) of the hosted applications. The Application Happiness metric is based on a single application's performance and SLOs. The Runtime Happiness metric is based on the Application Happiness of the applications the runtime is hosting. These metrics may be used for decision-making by the scheduler, rather than runtime utilization, for example. We evaluate four scheduling policies for the high-level scheduler and five for the low-level scheduler. The objective for the schedulers is to minimize cost while meeting the SLO of each application. The policies are evaluated with respect to the number of runtimes required, the impact on application performance, and the utilization of the runtimes. The results of our evaluation show that the high-level policy based on Runtime Happiness combined with the low-level policy based on Application Happiness outperforms the other policies, including the bin-packing and random strategies. In particular, our combined policy requires up to 30% fewer runtimes than the simple bin-packing strategy and increases runtime utilization by up to 40% for the Edge Data Center (DC) in the scenarios we evaluated.

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE International Performance Computing and Communications Conference (IPCCC), ISSN 1097-2641
National Category
Computer Systems; Computer Engineering
Identifiers
urn:nbn:se:umu:diva-160322 (URN)
10.1109/PCCC.2018.8711075 (DOI)
000469326500052 ()
978-1-5386-6808-5 (ISBN)
978-1-5386-6807-8 (ISBN)
978-1-5386-6809-2 (ISBN)
Conference
37th IEEE International Performance Computing and Communications Conference (IPCCC), Orlando, FL, November 17-19, 2018
Funder
EU, Horizon 2020, ICT30
Available from: 2019-06-17 Created: 2019-06-17 Last updated: 2019-06-17. Bibliographically approved
Mehta, A., Baddour, R., Svensson, F., Gustafsson, H. & Elmroth, E. (2017). Calvin Constrained: A Framework for IoT Applications in Heterogeneous Environments. In: Lee, K., Liu, L. (Ed.), 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS 2017). Paper presented at 37th IEEE International Conference on Distributed Computing Systems (ICDCS), JUN 05-08, 2017, Atlanta, GA (pp. 1063-1073). IEEE Computer Society
Calvin Constrained: A Framework for IoT Applications in Heterogeneous Environments
2017 (English). In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS 2017) / [ed] Lee, K., Liu, L., IEEE Computer Society, 2017, p. 1063-1073. Conference paper, Published paper (Refereed)
Abstract [en]

Calvin is an IoT framework for application development, deployment, and execution in heterogeneous environments comprising clouds, edge resources, and embedded or constrained devices. Within Calvin, all distributed resources are viewed as one environment by the application. The framework provides multi-tenancy and simplifies the development of IoT applications, which are represented as a dataflow of application components (called actors) and their communication. The idea behind Calvin resembles the serverless architecture and can be seen as Actor-as-a-Service rather than Function-as-a-Service. This makes Calvin very powerful, as it not only scales actors quickly but also provides easy actor migration. In this work, we propose Calvin Constrained, an extension of the Calvin framework that covers resource-constrained devices. Due to the limited memory and processing power of embedded devices, the constrained side of the framework can only support a limited subset of the Calvin features. The current implementation of Calvin Constrained supports actors implemented in C as well as Python, where support for Python actors is enabled by using MicroPython as a statically allocated library; this enables automatic management of state variables and enhances code re-usability. As would be expected, Python-coded actors demand more resources than C-coded ones. We show that the extra resources needed are manageable on current off-the-shelf micro-controller-equipped devices when using the Calvin framework.
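To make the actor model concrete, here is a minimal generic dataflow actor in the style the abstract describes: an actor consumes tokens on an input port and produces tokens on an output port whenever its firing condition holds. This is not the Calvin API; all class and method names are invented for illustration:

```python
from collections import deque

class Actor:
    """Generic dataflow actor with one input port and one output port."""
    def __init__(self):
        self.inport = deque()
        self.outport = deque()

    def fire(self):
        """Consume one token and produce one token; a runtime would call
        this whenever the firing condition (a token is available) holds."""
        if self.inport:
            token = self.inport.popleft()
            self.outport.append(self.transform(token))

class Doubler(Actor):
    """Example actor: doubles each incoming token."""
    def transform(self, token):
        return 2 * token
```

Because an actor's state lives entirely in its ports and instance attributes, a runtime can serialize that state and resume it on another device, which is the kind of actor migration the abstract highlights.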

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Series
IEEE International Conference on Distributed Computing Systems, ISSN 1063-6927
Keywords
IoT, Distributed Cloud, Serverless Architecture, Dataflow Application Development Model
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-142013 (URN)
10.1109/ICDCS.2017.181 (DOI)
000412759500098 ()
978-1-5386-1791-5 (ISBN)
978-1-5386-1792-2 (ISBN)
978-1-5386-1793-9 (ISBN)
Conference
37th IEEE International Conference on Distributed Computing Systems (ICDCS), JUN 05-08, 2017, Atlanta, GA
Funder
Swedish Research Council, C0590801
Available from: 2017-11-21 Created: 2017-11-21 Last updated: 2018-09-07. Bibliographically approved
Tärneberg, W., Papadopoulos, A. V., Mehta, A., Tordsson, J. & Kihl, M. (2017). Distributed Approach to the Holistic Resource Management of a Mobile Cloud Network. In: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC). Paper presented at 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 14-15 May 2017, Madrid (pp. 51-60).
Distributed Approach to the Holistic Resource Management of a Mobile Cloud Network
2017 (English). In: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 2017, p. 51-60. Conference paper, Published paper (Refereed)
Abstract [en]

The Mobile Cloud Network is an emerging cost- and capacity-heterogeneous distributed cloud paradigm that aims to remedy the application performance constraints imposed by centralised cloud infrastructures. A centralised cloud infrastructure and the adjoining Telecom network will struggle to accommodate the exploding amount of traffic generated by forthcoming highly interactive applications. Cost-effectively managing a Mobile Cloud Network computing infrastructure while meeting each application's performance goals is non-trivial and is at the core of our contribution. Due to the scale of a Mobile Cloud Network, a centralised approach is infeasible. Therefore, this paper presents a distributed algorithm that addresses these challenges. The presented approach works towards meeting each application's performance objectives, constraining system-wide operational cost, and mitigating resource-usage skewness. It does so by iteratively and independently acting on the objectives of each component with a common heuristic objective function. Systematic evaluations reveal that the presented algorithm converges quickly, performs near-optimally in terms of system-wide operational cost and application performance, and significantly outperforms similar naïve and random methods.

National Category
Communication Systems
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-145491 (URN)
10.1109/ICFEC.2017.10 (DOI)
000426944700006 ()
978-1-5090-3047-7 (ISBN)
Conference
2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 14-15 May 2017, Madrid
Available from: 2018-03-07 Created: 2018-03-07 Last updated: 2018-06-09. Bibliographically approved
Tärneberg, W., Mehta, A., Wadbro, E., Tordsson, J., Eker, J., Kihl, M. & Elmroth, E. (2017). Dynamic application placement in the Mobile Cloud Network. Future Generation Computer Systems, 70, 163-177
Dynamic application placement in the Mobile Cloud Network
2017 (English). In: Future Generation Computer Systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 70, p. 163-177. Article in journal (Refereed). Published
Abstract [en]

To meet the challenges of consistent performance, low communication latency, and a high degree of user mobility, cloud and Telecom infrastructure vendors and operators foresee a Mobile Cloud Network that incorporates public cloud infrastructures with cloud-augmented Telecom nodes in forthcoming mobile access networks. A Mobile Cloud Network is composed of distributed cost- and capacity-heterogeneous resources that host applications that in turn are subject to a spatially and quantitatively rapidly changing demand. Such an infrastructure requires a holistic management approach that ensures that the resident applications' performance requirements are met while being sustainably supported by the underlying infrastructure. The contribution of this paper is three-fold. Firstly, this paper contributes a model that captures the cost- and capacity-heterogeneity of a Mobile Cloud Network infrastructure. The model bridges the Mobile Edge Computing and Distributed Cloud paradigms by modelling multiple tiers of resources across the network, and serves not just mobile devices but any client beyond and within the network. A set of resource management challenges is presented based on this model. Secondly, an algorithm that holistically and optimally solves these challenges is proposed. The algorithm is formulated as an application placement method that incorporates aspects of network link capacity, desired user latency and user mobility, as well as data centre resource utilisation and server provisioning costs. Thirdly, to address scalability, a tractable locally optimal algorithm is presented. The evaluation demonstrates that the placement algorithm significantly improves latency and resource-utilisation skewness while minimising the operational cost of the system. Additionally, the proposed model and evaluation method demonstrate the viability of dynamic resource management of the Mobile Cloud Network and the need for accommodating rapidly mobile demand in a holistic manner.
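As a rough illustration of such a placement formulation (the symbols below are generic, not the paper's notation), a binary-assignment model combines a provisioning-cost objective with latency and capacity constraints:

```latex
\min_{x}\; \sum_{a \in A} \sum_{d \in D} c_d\, r_a\, x_{a,d}
\quad \text{s.t.} \quad
\ell_{a,d}\, x_{a,d} \le L_a \;\; \forall a \in A,\, d \in D, \qquad
\sum_{a \in A} r_a\, x_{a,d} \le C_d \;\; \forall d \in D, \qquad
\sum_{d \in D} x_{a,d} = 1 \;\; \forall a \in A,
```

where $x_{a,d} \in \{0,1\}$ places application $a$ in data centre $d$, $c_d$ is the unit provisioning cost of $d$, $r_a$ the resource demand of $a$, $\ell_{a,d}$ the latency between $a$'s users and $d$, $L_a$ the latency bound, and $C_d$ the capacity of $d$. The paper's full model additionally accounts for link capacity and user mobility.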

Keywords
Cloud computing, Distributed, Edge, Graph, Infrastructure, Mobile, Mobile Cloud, Modelling, Networks, Optimisation, Placement, Telco-cloud
National Category
Communication Systems; Computer Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-129247 (URN)
10.1016/j.future.2016.06.021 (DOI)
000394401800015 ()
2-s2.0-85006970632 (Scopus ID)
Available from: 2016-12-21 Created: 2016-12-21 Last updated: 2018-09-04. Bibliographically approved
Mehta, A., Tärneberg, W., Klein, C., Tordsson, J., Kihl, M. & Elmroth, E. (2016). How beneficial are intermediate layer Data Centers in Mobile Edge Networks?. In: Sameh Elnikety, Peter R. Lewis and Christian Müller-Schloer (Ed.), 2016 IEEE 1st International Workshops on Foundations and Applications of Self-* Systems. Paper presented at FAS* Foundations and Applications of Self* Systems, University of Augsburg, Augsburg, Germany, 12-16 September 2016 (pp. 222-229).
How beneficial are intermediate layer Data Centers in Mobile Edge Networks?
2016 (English). In: 2016 IEEE 1st International Workshops on Foundations and Applications of Self-* Systems / [ed] Sameh Elnikety, Peter R. Lewis and Christian Müller-Schloer, 2016, p. 222-229. Conference paper, Published paper (Refereed)
Abstract [en]

To reduce the congestion caused by future bandwidth-hungry applications in domains such as healthcare and the Internet of Things (IoT), we study the benefit of introducing additional Data Centers (DCs) closer to the network edge for optimal application placement. Our study shows that edge-layer DCs in a Mobile Edge Network (MEN) infrastructure are cost-beneficial for bandwidth-hungry applications with strong demand locality, and in scenarios where large capacity is deployed at the edge-layer DCs. The cost savings for such applications can reach up to 67%. Additional intermediate-layer DCs close to the root DC can be marginally cost-beneficial for compute-intensive applications with medium or low demand locality. Hence, a Telecom Network Operator should first build an edge DC with a capacity of up to hundreds of servers at the network edge to cater to the emerging bandwidth-hungry applications and to minimize its operational cost.

National Category
Communication Systems
Identifiers
urn:nbn:se:umu:diva-125640 (URN)
10.1109/FAS-W.2016.55 (DOI)
000391523100042 ()
978-1-5090-3651-6 (ISBN)
Conference
FAS* Foundations and Applications of Self* Systems University of Augsburg, Augsburg, Germany, 12-16 September 2016
Available from: 2016-09-13 Created: 2016-09-13 Last updated: 2018-09-04. Bibliographically approved
Mehta, A., Durango, J., Tordsson, J. & Elmroth, E. (2015). Online Spike Detection in Cloud Workloads. In: 2015 IEEE International Conference on Cloud Engineering (IC2E 2015). Paper presented at 2015 IEEE International Conference on Cloud Engineering, Arizona State University, Tempe, AZ, Mar 09-12, 2015 (pp. 446-451). New York: IEEE Computer Society
Online Spike Detection in Cloud Workloads
2015 (English). In: 2015 IEEE International Conference on Cloud Engineering (IC2E 2015), New York: IEEE Computer Society, 2015, p. 446-451. Conference paper, Published paper (Refereed)
Abstract [en]

We investigate methods for detecting rapid workload increases (load spikes) in cloud workloads. Such rapid and unexpected workload spikes are a main cause of poor performance or even application crashes, as the allocated cloud resources become insufficient. Detecting spikes early is fundamental to performing corrective management actions, such as allocating additional resources, before the spikes become large enough to cause problems. To this end, we propose a number of methods for early spike detection, based on established techniques from adaptive signal processing. A comparative evaluation shows, for example, to what extent the different methods manage to detect the spikes, how early the detection is made, and how frequently they falsely report spikes.
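One simple member of that adaptive-signal-processing family, shown purely as an illustration (not one of the paper's evaluated detectors), is an exponentially weighted moving-average filter with an adaptive k-sigma threshold; `alpha` and `k` are assumed tuning knobs:

```python
class EwmaSpikeDetector:
    """Online spike detector: flags samples that deviate from an
    exponentially weighted moving average by more than k sigma."""
    def __init__(self, alpha=0.2, k=3.0):
        self.alpha = alpha  # smoothing factor: higher adapts faster
        self.k = k          # threshold in standard deviations
        self.mean = None
        self.var = 0.0

    def observe(self, x):
        """Feed one workload sample; return True if it is flagged as a spike."""
        if self.mean is None:          # first sample initializes the filter
            self.mean = x
            return False
        dev = x - self.mean
        spike = self.var > 0 and abs(dev) > self.k * self.var ** 0.5
        # EWMA updates for mean and variance
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return spike
```

Lowering `alpha` or raising `k` suppresses false alarms at the cost of later detection, which mirrors the trade-off the comparative evaluation measures.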

Place, publisher, year, edition, pages
New York: IEEE Computer Society, 2015
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-125610 (URN)
10.1109/IC2E.2015.50 (DOI)
000380449000072 ()
978-1-4799-8218-9 (ISBN)
Conference
2015 IEEE International Conference on Cloud Engineering, Arizona State University, Tempe, AZ, Mar 09-12, 2015.
Available from: 2016-10-11 Created: 2016-09-13 Last updated: 2018-09-04. Bibliographically approved