Search results - computation on a cluster with distributed memory
1
Parallel Large Scale High Accuracy Navier-Stokes Computations on Distributed Memory Clusters
ISSN: 0920-8542, 1573-0484 | Published in The Journal of supercomputing (01.01.2004)
Full text
Journal Article
2
Generating Coupled Cluster Code for Modern Distributed-Memory Tensor Software
ISSN: 1549-9626 | United States | Published in Journal of chemical theory and computation (12.08.2025)
“… Using GPU-based HPC platforms efficiently for coupled cluster computations is a challenge due to heterogeneous hardware structures …”
More details
Journal Article
3
A massively parallel tensor contraction framework for coupled-cluster computations
ISSN: 0743-7315, 1096-0848 | Elsevier Inc | Published in Journal of parallel and distributed computing (01.12.2014)
“… Precise calculation of molecular electronic wavefunctions by methods such as coupled-cluster requires the computation of tensor contractions, the cost of which has polynomial computational scaling …”
Full text
Journal Article
4
Using distributed memory parallel computers and GPU clusters for multidimensional Monte Carlo integration
ISSN: 1532-0626, 1532-0634 | Blackwell Publishing Ltd | Published in Concurrency and computation (25.03.2015)
“… The aim of this paper is to show that the multidimensional Monte Carlo integration can be efficiently implemented on various distributed memory parallel computers and clusters of multicore …”
Full text
Journal Article
5
Fast distributed large-pixel-count hologram computation using a GPU cluster
ISSN: 1539-4522, 2155-3165 | United States | Published in Applied optics. Optical technology and biomedical optics (10.09.2013)
“… with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU level adaptive load balancing, and node level load distribution …”
More details
Journal Article
6
Parallel Computation of Component Trees on Distributed Memory Machines
ISSN: 1045-9219, 1558-2183 | New York: IEEE | Published in IEEE transactions on parallel and distributed systems (01.11.2018)
“… This work proposes a new efficient hybrid algorithm for the parallel computation of two particular component trees - the max- and min-tree - in shared and distributed memory environments …”
Full text
Journal Article
7
In-Memory Distributed Matrix Computation Processing and Optimization
ISSN: 2375-026X | IEEE | Published in 2017 IEEE 33rd International Conference on Data Engineering (ICDE) (01.04.2017)
“… This paper presents new efficient and scalable matrix processing and optimization techniques for in-memory distributed clusters …”
Full text
Conference Paper
8
Automatic Generation of Distributed-Memory Mappings for Tensor Computations
ISSN: 2167-4337 | ACM | Published in International Conference for High Performance Computing, Networking, Storage and Analysis (Online) (11.11.2023)
“… We introduce an innovative approach to automatically produce distributed-memory parallel code for an important sub-class of affine tensor computations common to Coupled Cluster (CC …”
Full text
Conference Paper
9
3D DFT by block tensor-matrix multiplication via a modified Cannon's algorithm: Implementation and scaling on distributed-memory clusters with fat tree networks
ISSN: 0743-7315 | Elsevier Inc | Published in Journal of parallel and distributed computing (01.11.2024)
“… We demonstrate S3DFT's efficient use of hardware resources, and its scaling using up to 16,464 cores of the JUWELS Cluster …”
Full text
Journal Article
10
Nime: a native in-memory compute framework for cluster computing
ISSN: 1386-7857, 1573-7543 | New York: Springer US | Published in Cluster computing (01.09.2025)
“… In this paper, we present NIME - a native in-memory compute framework for cluster computing - that aims to perform parallel task processing using native executors …”
Full text
Journal Article
11
MespaConfig: Memory-Sparing Configuration Auto-Tuning for Co-Located In-Memory Cluster Computing Jobs
ISSN: 1939-1374, 2372-0204 | Piscataway: IEEE | Published in IEEE transactions on services computing (01.09.2022)
“… Distributed in-memory computing frameworks usually have lots of parameters (e.g., the buffer size of shuffle …”
Full text
Journal Article
12
Efficient GPU Computation of Large Protein Solvent-Excluded Surface
ISSN: 1077-2626, 1941-0506 | United States: IEEE | Published in IEEE transactions on visualization and computer graphics (01.04.2025)
“… While several methods targeted its computation, the ability to process large molecular structures to address the introduction of big complex analysis while leveraging the massively parallel …”
Full text
Journal Article
13
A distributed in-memory key-value store system on heterogeneous CPU–GPU cluster
ISSN: 1066-8888, 0949-877X | Berlin/Heidelberg: Springer Berlin Heidelberg | Published in The VLDB journal (01.10.2017)
“… In-memory key-value stores play a critical role in many data-intensive applications to provide high-throughput and low latency data accesses …”
Full text
Journal Article
14
Memristive Computational Memory Using Memristor Overwrite Logic (MOL)
ISSN: 1063-8210, 1557-9999 | New York: IEEE | Published in IEEE Transactions on Very Large Scale Integration (VLSI) Systems (01.11.2020)
“… associated with an original MOL-based computational memory. MOL relies on a fully digital representation of memristor and can operate with different memristive device technologies …”
Full text
Journal Article
15
Fast Computation of Electromagnetic Scattering From a Metal-Dielectric Composite and Randomly Distributed BoRs Cluster
ISSN: 0018-926X, 1558-2221 | New York: IEEE | Published in IEEE transactions on antennas and propagation (01.12.2019)
“… An efficient equivalence principle algorithm with spherical equivalent source (SEPA) is proposed to analyze the electromagnetic scattering from a metal-dielectric composite and randomly distributed bodies of revolution …”
Full text
Journal Article
16
Parallel Spectral Clustering in Distributed Systems
ISSN: 0162-8828, 1939-3539, 2160-9292 | Los Alamitos, CA: IEEE | Published in IEEE transactions on pattern analysis and machine intelligence (01.03.2011)
“… We parallelize both memory use and computation on distributed computers. Through an empirical study on a document data set of 193,844 instances and a photo data set of 2,121,863, we show that our parallel algorithm can effectively handle large problems …”
Full text
Journal Article
17
Distributed Discrete Morse Sandwich: Efficient Computation of Persistence Diagrams for Massive Scalar Data
ISSN: 1045-9219 | Institute of Electrical and Electronics Engineers | Published in IEEE transactions on parallel and distributed systems (2025)
“… In this work, we extend DMS to distributed-memory parallelism for the efficient and scalable computation of persistence diagrams for massive datasets across multiple compute nodes …”
Full text
Journal Article
18
Scalable distributed data cube computation for large-scale multidimensional data analysis on a Spark cluster
ISSN: 1386-7857, 1573-7543 | New York: Springer US | Published in Cluster computing (01.01.2019)
“… However, MapReduce incurs the overhead of disk I/Os and network traffic. To overcome these MapReduce limitations, Spark was recently proposed as a memory-based parallel/distributed processing framework …”
Full text
Journal Article
19
DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update
ISSN: 1000-9000, 1860-4749 | Singapore: Springer Nature Singapore | Published in Journal of computer science and technology (01.05.2025)
“… Moreover, existing distributed training architectures often fail to consider the constraints of resources and communication efficiency in edge environments …”
Full text
Journal Article
20
GraNNDis: Fast Distributed Graph Neural Network Training Framework for Multi-Server Clusters
ACM | Published in 2024 33rd International Conference on Parallel Architectures and Compilation Techniques (PACT) (13.10.2024)
“… Redundant memory usage and computation hinder the scalability of the distributed frameworks …”
Full text
Conference Paper