Designing and Managing Large-scale Interactive Microservices in Datacenters

Saved in:
Bibliographic Details
Title: Designing and Managing Large-scale Interactive Microservices in Datacenters
Authors: Gan, Yu
Contributors: Delimitrou, Christina, Martínez, José F., Weatherspoon, Hakim
Publication Year: 2021
Collection: Cornell University: eCommons@Cornell
Subject Terms: Benchmark, Cloud Computing, Distributed System, Microservice, ML for System, Performance Debugging
Description: 247 pages ; Cloud computing has greatly increased in prevalence and impact. Datacenter applications today focus strongly on cloud-native architectures. Cloud-native architectures draw on many technologies, including microservices, containerization, service meshes and orchestration, cloud telemetry, and serverless computing, to fully exploit the flexibility, scalability, and robustness of public, private, or hybrid clouds. As a fundamental programming model for designing modern cloud applications, microservices have drawn much attention from both academia and industry. Many cloud service providers, including Google, Facebook, Netflix, Twitter, Amazon, Uber, and Alibaba, have adopted or supported microservices in their systems over the past decade. Microservices have several advantages: they accelerate development and deployment, enable software heterogeneity, and promote elasticity and decoupled design. Despite these benefits, the microservice architecture raises several challenges and opportunities in system design. First, microservices have different requirements from traditional monolithic applications. They are more sensitive to performance unpredictability from both hardware and software sources, spend large fractions of their end-to-end latency processing network requests, and introduce backpressure effects due to the dependencies between microservices. To study the implications this new programming model introduces across the system stack, we need representative applications built with end-to-end microservices. In this thesis, we first design a representative large-scale microservice benchmark suite and use it to study the system implications of microservices across the system stack. We then use the benchmark suite to highlight the benefits machine learning-based techniques have in addressing the performance and resource efficiency issues of microservices in a practical and scalable way.
We first present DeathStarBench, an open-source microservice benchmark suite built with ...
Document Type: thesis
File Description: application/pdf
Language: English
Relation: https://newcatalog.library.cornell.edu/catalog/15312683; Gan_cornellgrad_0058F_12799; http://dissertations.umi.com/cornellgrad:12799; bibid: 15312683; https://hdl.handle.net/1813/110825
DOI: 10.7298/hebp-st29
Availability: https://hdl.handle.net/1813/110825
http://dissertations.umi.com/cornellgrad:12799
https://doi.org/10.7298/hebp-st29
Rights: Attribution 4.0 International ; https://creativecommons.org/licenses/by/4.0/
Accession Number: edsbas.E48B21FD
Database: BASE