Separating Storage and Compute with the Databricks Lakehouse Platform

Bibliographic Details
Published in: 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1-2
Main Authors: Kumar, Deeptaanshu; Li, Suxi
Format: Conference paper
Language: English
Publication details: IEEE, 13 Oct 2022
Description
Summary: As a part of The Arena Group's Data & AI Team, we are architecting a new unified data platform that can handle both Data Engineering and Data Science use cases for all the company's needs. To accomplish this, we are working to create a scalable and cost-effective data platform that will allow us to store large volumes of historical data and process, transform, and query it with variable workloads. This means that our current Redshift data warehouse cannot serve as the backbone of our data platform, since it couples storage and compute, which forces us to pay for additional compute nodes just to store the growing amounts of historical data. As a result, we set out to explore data platforms that decouple storage and compute, such as Snowflake and Databricks. We chose Databricks because it adequately serves our Data Engineering needs by keeping storage on AWS S3 and gives us flexibility with compute by using ad-hoc Spark clusters. It also offers us more capabilities for Data Science needs. In this paper, we will go over our proposed architecture and explain how we will take advantage of these Data Engineering and Data Science capabilities to address our initial use cases.
DOI: 10.1109/DSAA54385.2022.10032386