Evaluating the benefits of an extended memory hierarchy for parallel streamline algorithms


Bibliographic Details
Published in: 2011 IEEE Symposium on Large Data Analysis and Visualization, pp. 57-64
Main Authors: Camp, D., Childs, H., Chourasia, A., Garth, C., Joy, K. I.
Format: Conference Proceeding
Language:English
Published: IEEE 01.10.2011
Subjects:
ISBN:9781467301565, 1467301566
Description
Summary: The increasing cost of achieving sufficient I/O bandwidth for high-end supercomputers is leading to architectural evolutions in the I/O subsystem space. Currently popular designs create a staging area on each compute node for data output via solid state drives (SSDs), local hard drives, or both. In this paper, we investigate whether these extensions to the memory hierarchy, primarily intended for computer simulations that produce data, can also benefit visualization and analysis programs that consume data. Some algorithms, such as those that read the data only once and store it in primary memory, cannot draw obvious benefit from the presence of a deeper memory hierarchy. However, algorithms that read data repeatedly from disk are excellent candidates, since the repeated reads can be accelerated by caching the first read of a block on the new resources (i.e., SSDs or hard drives). We study such an algorithm, streamline computation, and quantify the benefits it can derive.
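The caching pattern the summary describes can be illustrated with a minimal read-through block cache: the first access to a data block fetches it from the slow parallel filesystem and writes a copy to fast node-local storage, and subsequent accesses hit the local copy. This is a hypothetical sketch, not the authors' implementation; the class, file layout, and fetch callback are all illustrative assumptions.

```python
import os
import tempfile

class BlockCache:
    """Read-through cache: the first read of a block is copied to fast
    local storage (e.g., an SSD staging area); later reads hit the cache.
    Illustrative only -- not the implementation from the paper."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        self.hits = 0
        self.misses = 0

    def _cache_path(self, block_id):
        return os.path.join(self.cache_dir, f"block_{block_id}.bin")

    def read_block(self, block_id, fetch_from_disk):
        path = self._cache_path(block_id)
        if os.path.exists(path):
            self.hits += 1                 # fast local read
            with open(path, "rb") as f:
                return f.read()
        self.misses += 1                   # slow filesystem read
        data = fetch_from_disk(block_id)
        with open(path, "wb") as f:        # stage a copy locally
            f.write(data)
        return data

# Usage: a streamline integrator revisits blocks as particles cross them,
# so repeated reads of the same block are common.
cache = BlockCache(tempfile.mkdtemp())
fetch = lambda bid: bytes([bid % 256]) * 16   # stand-in for slow I/O
for bid in [3, 7, 3, 3, 7]:                   # blocks 3 and 7 revisited
    cache.read_block(bid, fetch)
print(cache.hits, cache.misses)               # → 3 2
```

Only the first visit to each block pays the slow-read cost; the three repeat visits are served from local storage, which is the effect the paper quantifies for streamline computation.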
DOI:10.1109/LDAV.2011.6092318