zCeph: Design and implementation of a ZNS-friendly distributed file system

Bibliographic Details
Published in: Future Generation Computer Systems, Vol. 169, Article 107763
Main Authors: Ha, Jin Yong; Son, Yongseok
Format: Journal Article
Language:English
Published: Elsevier B.V., 01.08.2025
ISSN:0167-739X
Description
Summary: This article presents zCeph, a ZNS-friendly distributed file system designed to efficiently utilize zoned namespace (ZNS) SSDs. Specifically, we first propose MZAllocator, which enables multiple zones to be utilized simultaneously to maximize the performance of ZNS SSDs. Second, we adopt the zone append command to eliminate the need for synchronization of write ordering within distributed storage systems, which improves scalability. Third, we present zBlueFS, a ZNS-aware user-level file system based on BlueFS, which updates metadata on the ZNS SSD without requiring a conventional SSD. Finally, we propose a delta write technique, DeltaWriter, which writes only the modified part of the metadata (i.e., the onode) to reduce read-modify-write overhead whenever the metadata are updated. We implement zCeph with these four techniques on top of Ceph, an open-source distributed file system. Further, we evaluate zCeph on a pair of 48-core machines with ZNS SSDs using micro- and macro-benchmarks, and the results reveal that zCeph improves performance by up to 4.2× and 8.8×, respectively, compared with Ceph.
•zCeph introduces techniques to maximize its performance on ZNS SSDs.
•zCeph utilizes the parallelism of cores and the ZNS SSD to improve performance.
•zCeph presents efficient zone-to-object mapping management.
•zCeph improves performance by up to 8.8× compared with Ceph.
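The abstract describes MZAllocator and the zone append command only at a high level. As a rough illustration, the following is a minimal C++ sketch, under stated assumptions, of how striping appends across several open zones can exploit ZNS parallelism while letting the device assign each write's location. FakeZnsDevice and MultiZoneAllocator are hypothetical names invented here, not the paper's code; the toy device only models the semantics of NVMe Zone Append, in which the SSD (not the host) advances the write pointer and returns the LBA it chose.

    #include <atomic>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy model of NVMe Zone Append semantics: the device advances the
    // zone's write pointer and returns the LBA it picked, so concurrent
    // writers never need a lock to agree on write ordering.
    class FakeZnsDevice {
     public:
      FakeZnsDevice(size_t nzones, uint64_t zone_blocks) : wp_(nzones) {
        for (size_t i = 0; i < nzones; ++i)
          wp_[i] = i * zone_blocks;  // each zone starts at its own SLBA
      }
      uint64_t zone_append(size_t zone, uint64_t nblocks) {
        // Atomically claim nblocks at the current write pointer.
        return wp_[zone].fetch_add(nblocks, std::memory_order_relaxed);
      }
     private:
      std::vector<std::atomic<uint64_t>> wp_;  // per-zone write pointers
    };

    // Sketch of a multi-zone allocator: spread appends round-robin over
    // several open zones so the ZNS SSD's zone-level parallelism is used.
    class MultiZoneAllocator {
     public:
      MultiZoneAllocator(FakeZnsDevice& dev, size_t nzones)
          : dev_(dev), nzones_(nzones) {}
      uint64_t append(uint64_t nblocks) {
        size_t z = next_.fetch_add(1, std::memory_order_relaxed) % nzones_;
        return dev_.zone_append(z, nblocks);  // LBA chosen by the device
      }
     private:
      FakeZnsDevice& dev_;
      size_t nzones_;
      std::atomic<size_t> next_{0};
    };

    int main() {
      FakeZnsDevice dev(/*nzones=*/4, /*zone_blocks=*/0x10000);
      MultiZoneAllocator alloc(dev, 4);
      for (int i = 0; i < 8; ++i)  // 8 appends spread over 4 zones
        std::printf("append %d -> LBA 0x%llx\n", i,
                    (unsigned long long)alloc.append(8));
    }

Because each append returns its final address only on completion, concurrent writers can record locations in completion order rather than serializing up front, which matches the scalability rationale the abstract gives for adopting the append command.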
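DeltaWriter is likewise described only as writing the modified part of the onode. A minimal sketch of that idea, assuming a fixed-size serialized onode image and a hypothetical diff_range() helper (neither taken from the paper), is:

    #include <cstdint>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Smallest byte range [off, off+len) that differs between the old and
    // new serialized metadata images (assumed to be the same size).
    static std::pair<size_t, size_t> diff_range(const std::vector<uint8_t>& a,
                                                const std::vector<uint8_t>& b) {
      size_t lo = 0, hi = a.size();
      while (lo < hi && a[lo] == b[lo]) ++lo;
      while (hi > lo && a[hi - 1] == b[hi - 1]) --hi;
      return {lo, hi - lo};
    }

    int main() {
      // 4 KiB serialized onode image; an update touches one 8-byte field.
      std::vector<uint8_t> old_img(4096, 0), new_img(4096, 0);
      for (size_t i = 128; i < 136; ++i) new_img[i] = 0xAB;

      auto [off, len] = diff_range(old_img, new_img);
      // A delta write submits only `len` bytes at offset `off` instead of
      // rewriting (read-modify-write) the whole 4 KiB record.
      std::printf("delta write: %zu bytes at offset %zu (vs %zu)\n",
                  len, off, old_img.size());
    }

Note that zones cannot be overwritten in place on a ZNS device, so a real delta write would presumably append the delta and index it for later reads; the sketch shows only the range computation that shrinks the write.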
DOI:10.1016/j.future.2025.107763