Search Results - computation on a cluster with distributed memory

  2.

    Generating Coupled Cluster Code for Modern Distributed-Memory Tensor Software by Brandejs, Jan, Pototschnig, Johann, Saue, Trond

    ISSN: 1549-9626
    Published: United States 12.08.2025
    Published in Journal of chemical theory and computation (12.08.2025)
    “…Using GPU-based HPC platforms efficiently for coupled cluster computations is a challenge due to heterogeneous hardware structures…”
    Journal Article
  3.

    A massively parallel tensor contraction framework for coupled-cluster computations by Solomonik, Edgar, Matthews, Devin, Hammond, Jeff R., Stanton, John F., Demmel, James

    ISSN: 0743-7315, 1096-0848
    Published: Elsevier Inc 01.12.2014
    “…Precise calculation of molecular electronic wavefunctions by methods such as coupled-cluster requires the computation of tensor contractions, the cost of which has polynomial computational scaling…”
    Journal Article
  4.

    Using distributed memory parallel computers and GPU clusters for multidimensional Monte Carlo integration by Szałkowski, Dominik, Stpiczyński, Przemysław

    ISSN: 1532-0626, 1532-0634
    Published: Blackwell Publishing Ltd 25.03.2015
    Published in Concurrency and computation (25.03.2015)
    “…The aim of this paper is to show that the multidimensional Monte Carlo integration can be efficiently implemented on various distributed memory parallel computers and clusters of multicore…”
    Journal Article
  5.

    Fast distributed large-pixel-count hologram computation using a GPU cluster by Pan, Yuechao, Xu, Xuewu, Liang, Xinan

    ISSN: 1539-4522, 2155-3165
    Published: United States 10.09.2013
    “… with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU level adaptive load balancing, and node level load distribution…”
    Journal Article
  6.

    Parallel Computation of Component Trees on Distributed Memory Machines by Götz, Markus, Cavallaro, Gabriele, Géraud, Thierry, Book, Matthias, Riedel, Morris

    ISSN: 1045-9219, 1558-2183
    Published: New York IEEE 01.11.2018
    “… This work proposes a new efficient hybrid algorithm for the parallel computation of two particular component trees, the max- and min-tree, in shared and distributed memory environments…”
    Journal Article
  7.

    In-Memory Distributed Matrix Computation Processing and Optimization by Yu, Yongyang, Tang, Mingjie, Aref, Walid G., Malluhi, Qutaibah M., Abbas, Mostafa M., Ouzzani, Mourad

    ISSN: 2375-026X
    Published: IEEE 01.04.2017
    “… This paper presents new efficient and scalable matrix processing and optimization techniques for in-memory distributed clusters…”
    Conference Proceeding
  8.

    Automatic Generation of Distributed-Memory Mappings for Tensor Computations by Kong, Martin, Abu-Yosef, Raneem, Rountev, Atanas, Sadayappan, P.

    ISSN: 2167-4337
    Published: ACM 11.11.2023
    “… We introduce an innovative approach to automatically produce distributed-memory parallel code for an important sub-class of affine tensor computations common to Coupled Cluster (CC…”
    Conference Proceeding
  9.

    3D DFT by block tensor-matrix multiplication via a modified Cannon's algorithm: Implementation and scaling on distributed-memory clusters with fat tree networks by Malapally, Nitin, Bolnykh, Viacheslav, Suarez, Estela, Carloni, Paolo, Lippert, Thomas, Mandelli, Davide

    ISSN: 0743-7315
    Published: Elsevier Inc 01.11.2024
    “… We demonstrate S3DFT's efficient use of hardware resources, and its scaling using up to 16,464 cores of the JUWELS Cluster…”
    Journal Article
  10.

    Nime: a native in-memory compute framework for cluster computing by Chen, Chao, Wang, Zhenghua, Jiang, Chen, Wang, Zheng

    ISSN: 1386-7857, 1573-7543
    Published: New York Springer US 01.09.2025
    Published in Cluster computing (01.09.2025)
    “… In this paper, we present NIME–a native in-memory compute framework for cluster computing–that aims to perform parallel task processing using native executors…”
    Journal Article
  11.

    MespaConfig: Memory-Sparing Configuration Auto-Tuning for Co-Located In-Memory Cluster Computing Jobs by Zong, Zan, Wen, Lijie, Hu, Xuming, Han, Rui, Qian, Chen, Lin, Li

    ISSN: 1939-1374, 2372-0204
    Published: Piscataway IEEE 01.09.2022
    Published in IEEE transactions on services computing (01.09.2022)
    “…Distributed in-memory computing frameworks usually have lots of parameters (e.g., the buffer size of shuffle…”
    Journal Article
  12.

    Efficient GPU Computation of Large Protein Solvent-Excluded Surface by Plateau-Holleville, Cyprien, Maria, Maxime, Mérillou, Stéphane, Montes, Matthieu

    ISSN: 1077-2626, 1941-0506
    Published: United States IEEE 01.04.2025
    “… While several methods targeted its computation, the ability to process large molecular structures to address the introduction of big complex analysis while leveraging the massively parallel…”
    Journal Article
  13.

    A distributed in-memory key-value store system on heterogeneous CPU–GPU cluster by Zhang, Kai, Wang, Kaibo, Yuan, Yuan, Guo, Lei, Li, Rubao, Zhang, Xiaodong, He, Bingsheng, Hu, Jiayu, Hua, Bei

    ISSN: 1066-8888, 0949-877X
    Published: Berlin/Heidelberg Springer Berlin Heidelberg 01.10.2017
    Published in The VLDB journal (01.10.2017)
    “…In-memory key-value stores play a critical role in many data-intensive applications to provide high-throughput and low latency data accesses…”
    Journal Article
  14.

    Memristive Computational Memory Using Memristor Overwrite Logic (MOL) by Alhaj Ali, Khaled, Rizk, Mostafa, Baghdadi, Amer, Diguet, Jean-Philippe, Jomaah, Jalal, Onizawa, Naoya, Hanyu, Takahiro

    ISSN: 1063-8210, 1557-9999
    Published: New York IEEE 01.11.2020
    “…), associated with an original MOL-based computational memory. MOL relies on a fully digital representation of memristor and can operate with different memristive device technologies…”
    Journal Article
  15.

    Fast Computation of Electromagnetic Scattering From a Metal-Dielectric Composite and Randomly Distributed BoRs Cluster by Gu, Jihong, He, Zi, Yin, Hongcheng, Chen, Rushan

    ISSN: 0018-926X, 1558-2221
    Published: New York IEEE 01.12.2019
    “…An efficient equivalence principle algorithm with spherical equivalent source (SEPA) is proposed to analyze the electromagnetic scattering from a metal-dielectric composite and randomly distributed bodies of revolution…”
    Journal Article
  16.

    Parallel Spectral Clustering in Distributed Systems by Chen, Wen-Yen, Song, Yangqiu, Bai, Hongjie, Lin, Chih-Jen, Chang, Edward Y.

    ISSN: 0162-8828, 1939-3539, 2160-9292
    Published: Los Alamitos, CA IEEE 01.03.2011
    “… We parallelize both memory use and computation on distributed computers. Through an empirical study on a document data set of 193,844 instances and a photo data set of 2,121,863, we show that our parallel algorithm can effectively handle large problems…”
    Journal Article
  17.

    Distributed Discrete Morse Sandwich: Efficient Computation of Persistence Diagrams for Massive Scalar Data by Le Guillou, Eve, Fortin, Pierre, Tierny, Julien

    ISSN: 1045-9219
    Published: Institute of Electrical and Electronics Engineers 2025
    “… In this work, we extend DMS to distributed-memory parallelism for the efficient and scalable computation of persistence diagrams for massive datasets across multiple compute nodes…”
    Journal Article
  18.

    Scalable distributed data cube computation for large-scale multidimensional data analysis on a Spark cluster by Lee, Suan, Kang, Seok, Kim, Jinho, Yu, Eun Jung

    ISSN: 1386-7857, 1573-7543
    Published: New York Springer US 01.01.2019
    Published in Cluster computing (01.01.2019)
    “… However, MapReduce incurs the overhead of disk I/Os and network traffic. To overcome these MapReduce limitations, Spark was recently proposed as a memory-based parallel/distributed processing framework…”
    Journal Article
  19.

    DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update by Peng, Xiao-Hui, Sun, Yi-Xuan, Zhang, Zheng-Hui, Wang, Yi-Fan

    ISSN: 1000-9000, 1860-4749
    Published: Singapore Springer Nature Singapore 01.05.2025
    Published in Journal of computer science and technology (01.05.2025)
    “… Moreover, existing distributed training architectures often fail to consider the constraints of resources and communication efficiency in edge environments…”
    Journal Article
  20.

    GraNNDis: Fast Distributed Graph Neural Network Training Framework for Multi-Server Clusters by Song, Jaeyong, Jang, Hongsun, Lim, Hunseong, Jung, Jaewon, Kim, Youngsok, Lee, Jinho

    Published: ACM 13.10.2024
    “…) Redundant memory usage and computation hinder the scalability of the distributed frameworks…”
    Conference Proceeding