Separating Storage and Compute with the Databricks Lakehouse Platform


Bibliographic Details
Published in: 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1-2
Main authors: Kumar, Deeptaanshu; Li, Suxi
Format: Conference paper
Language: English
Published: IEEE, 13 October 2022
Description
Abstract: As part of The Arena Group's Data & AI Team, we are architecting a new unified data platform that can handle both Data Engineering and Data Science use cases for all the company's needs. To accomplish this, we are working to create a scalable and cost-effective data platform that will allow us to store large volumes of historical data and process, transform, and query it with variable workloads. This means that our current Redshift data warehouse cannot serve as the backbone of our data platform, since it couples storage and compute, which forces us to pay for additional compute nodes just to store the growing amounts of historical data. As a result, we set out to explore data platforms that decouple storage and compute, such as Snowflake and Databricks. We chose Databricks because it adequately serves our Data Engineering needs by keeping storage on AWS S3 and gives us flexibility with compute through ad-hoc Spark clusters. It also offers us more capabilities for Data Science needs. In this paper, we will go over our proposed architecture and explain how we will take advantage of these Data Engineering and Data Science capabilities to address our initial use cases.
DOI: 10.1109/DSAA54385.2022.10032386
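
The ad-hoc, decoupled compute model the abstract describes can be sketched as a Databricks cluster definition that spins up only for the duration of a workload while the data itself stays on S3. This is an illustrative configuration, not taken from the paper; the cluster name, node type, Spark version, and worker count are assumptions.

```json
{
  "cluster_name": "adhoc-etl",
  "spark_version": "11.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "num_workers": 4,
  "autotermination_minutes": 30,
  "aws_attributes": {
    "availability": "SPOT_WITH_FALLBACK",
    "first_on_demand": 1
  }
}
```

Because the tables live in S3 rather than on the cluster's nodes, a cluster like this can be terminated as soon as its job finishes without losing any data — which is exactly the storage-for-compute coupling the authors identify as the limitation of their Redshift warehouse.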