Search results - computation on a cluster with distributed memory

  2.

    Generating Coupled Cluster Code for Modern Distributed-Memory Tensor Software by Brandejs, Jan; Pototschnig, Johann; Saue, Trond

    ISSN: 1549-9626
    Published: United States 12.08.2025
    Published in Journal of chemical theory and computation (12.08.2025)
    “… Using GPU-based HPC platforms efficiently for coupled cluster computations is a challenge due to heterogeneous hardware structures …”
    More details
    Journal Article
  3.

    A massively parallel tensor contraction framework for coupled-cluster computations by Solomonik, Edgar; Matthews, Devin; Hammond, Jeff R.; Stanton, John F.; Demmel, James

    ISSN: 0743-7315, 1096-0848
    Published: Elsevier Inc 01.12.2014
    Published in Journal of parallel and distributed computing (01.12.2014)
    “… Precise calculation of molecular electronic wavefunctions by methods such as coupled-cluster requires the computation of tensor contractions, the cost of which has polynomial computational scaling …”
    Full text
    Journal Article
  4.

    Using distributed memory parallel computers and GPU clusters for multidimensional Monte Carlo integration by Szałkowski, Dominik; Stpiczyński, Przemysław

    ISSN: 1532-0626, 1532-0634
    Published: Blackwell Publishing Ltd 25.03.2015
    Published in Concurrency and computation (25.03.2015)
    “… The aim of this paper is to show that the multidimensional Monte Carlo integration can be efficiently implemented on various distributed memory parallel computers and clusters of multicore …”
    Full text
    Journal Article
  5.

    Fast distributed large-pixel-count hologram computation using a GPU cluster by Pan, Yuechao; Xu, Xuewu; Liang, Xinan

    ISSN: 1539-4522, 2155-3165
    Published: United States 10.09.2013
    “… with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU level adaptive load balancing, and node level load distribution …”
    More details
    Journal Article
  6.

    Parallel Computation of Component Trees on Distributed Memory Machines by Götz, Markus; Cavallaro, Gabriele; Géraud, Thierry; Book, Matthias; Riedel, Morris

    ISSN: 1045-9219, 1558-2183
    Published: New York IEEE 01.11.2018
    “… This work proposes a new efficient hybrid algorithm for the parallel computation of two particular component trees, the max- and min-tree, in shared and distributed memory environments …”
    Full text
    Journal Article
  7.

    In-Memory Distributed Matrix Computation Processing and Optimization by Yongyang Yu; Mingjie Tang; Aref, Walid G.; Malluhi, Qutaibah M.; Abbas, Mostafa M.; Ouzzani, Mourad

    ISSN: 2375-026X
    Published: IEEE 01.04.2017
    “… This paper presents new efficient and scalable matrix processing and optimization techniques for in-memory distributed clusters …”
    Full text
    Conference Paper
  8.

    Automatic Generation of Distributed-Memory Mappings for Tensor Computations by Kong, Martin; Abu-Yosef, Raneem; Rountev, Atanas; Sadayappan, P.

    ISSN: 2167-4337
    Published: ACM 11.11.2023
    “… We introduce an innovative approach to automatically produce distributed-memory parallel code for an important sub-class of affine tensor computations common to Coupled Cluster (CC …”
    Full text
    Conference Paper
  9.

    3D DFT by block tensor-matrix multiplication via a modified Cannon's algorithm: Implementation and scaling on distributed-memory clusters with fat tree networks by Malapally, Nitin; Bolnykh, Viacheslav; Suarez, Estela; Carloni, Paolo; Lippert, Thomas; Mandelli, Davide

    ISSN: 0743-7315
    Published: Elsevier Inc 01.11.2024
    Published in Journal of parallel and distributed computing (01.11.2024)
    “… We demonstrate S3DFT's efficient use of hardware resources, and its scaling using up to 16,464 cores of the JUWELS Cluster …”
    Full text
    Journal Article
  10.

    Nime: a native in-memory compute framework for cluster computing by Chen, Chao; Wang, Zhenghua; Jiang, Chen; Wang, Zheng

    ISSN: 1386-7857, 1573-7543
    Published: New York Springer US 01.09.2025
    Published in Cluster computing (01.09.2025)
    “… In this paper, we present NIME – a native in-memory compute framework for cluster computing – that aims to perform parallel task processing using native executors …”
    Full text
    Journal Article
  11.

    MespaConfig: Memory-Sparing Configuration Auto-Tuning for Co-Located In-Memory Cluster Computing Jobs by Zong, Zan; Wen, Lijie; Hu, Xuming; Han, Rui; Qian, Chen; Lin, Li

    ISSN: 1939-1374, 2372-0204
    Published: Piscataway IEEE 01.09.2022
    Published in IEEE transactions on services computing (01.09.2022)
    “… Distributed in-memory computing frameworks usually have lots of parameters (e.g., the buffer size of shuffle …”
    Full text
    Journal Article
  12.

    Efficient GPU Computation of Large Protein Solvent-Excluded Surface by Plateau-Holleville, Cyprien; Maria, Maxime; Mérillou, Stéphane; Montes, Matthieu

    ISSN: 1077-2626, 1941-0506
    Published: United States IEEE 01.04.2025
    “… While several methods targeted its computation, the ability to process large molecular structures to address the introduction of big complex analysis while leveraging the massively parallel …”
    Full text
    Journal Article
  13.

    A distributed in-memory key-value store system on heterogeneous CPU–GPU cluster by Zhang, Kai; Wang, Kaibo; Yuan, Yuan; Guo, Lei; Li, Rubao; Zhang, Xiaodong; He, Bingsheng; Hu, Jiayu; Hua, Bei

    ISSN: 1066-8888, 0949-877X
    Published: Berlin/Heidelberg Springer Berlin Heidelberg 01.10.2017
    Published in The VLDB journal (01.10.2017)
    “… In-memory key-value stores play a critical role in many data-intensive applications to provide high-throughput and low latency data accesses …”
    Full text
    Journal Article
  14.

    Memristive Computational Memory Using Memristor Overwrite Logic (MOL) by Alhaj Ali, Khaled; Rizk, Mostafa; Baghdadi, Amer; Diguet, Jean-Philippe; Jomaah, Jalal; Onizawa, Naoya; Hanyu, Takahiro

    ISSN: 1063-8210, 1557-9999
    Published: New York IEEE 01.11.2020
    “… ), associated with an original MOL-based computational memory. MOL relies on a fully digital representation of memristor and can operate with different memristive device technologies …”
    Full text
    Journal Article
  15.

    Fast Computation of Electromagnetic Scattering From a Metal-Dielectric Composite and Randomly Distributed BoRs Cluster by Gu, Jihong; He, Zi; Yin, Hongcheng; Chen, Rushan

    ISSN: 0018-926X, 1558-2221
    Published: New York IEEE 01.12.2019
    Published in IEEE transactions on antennas and propagation (01.12.2019)
    “… An efficient equivalence principle algorithm with spherical equivalent source (SEPA) is proposed to analyze the electromagnetic scattering from a metal-dielectric composite and randomly distributed bodies of revolution …”
    Full text
    Journal Article
  16.

    Parallel Spectral Clustering in Distributed Systems by Chen, Wen-Yen; Song, Yangqiu; Bai, Hongjie; Lin, Chih-Jen; Chang, Edward Y.

    ISSN: 0162-8828, 1939-3539, 2160-9292
    Published: Los Alamitos, CA IEEE 01.03.2011
    “… We parallelize both memory use and computation on distributed computers. Through an empirical study on a document data set of 193,844 instances and a photo data set of 2,121,863, we show that our parallel algorithm can effectively handle large problems …”
    Full text
    Journal Article
  17.

    Distributed Discrete Morse Sandwich: Efficient Computation of Persistence Diagrams for Massive Scalar Data by Le Guillou, Eve; Fortin, P.; Tierny, J.

    ISSN: 1045-9219
    Published: Institute of Electrical and Electronics Engineers 2025
    “… In this work, we extend DMS to distributed-memory parallelism for the efficient and scalable computation of persistence diagrams for massive datasets across multiple compute nodes …”
    Full text
    Journal Article
  18.

    Scalable distributed data cube computation for large-scale multidimensional data analysis on a Spark cluster by Lee, Suan; Kang, Seok; Kim, Jinho; Yu, Eun Jung

    ISSN: 1386-7857, 1573-7543
    Published: New York Springer US 01.01.2019
    Published in Cluster computing (01.01.2019)
    “… However, MapReduce incurs the overhead of disk I/Os and network traffic. To overcome these MapReduce limitations, Spark was recently proposed as a memory-based parallel/distributed processing framework …”
    Full text
    Journal Article
  19.

    DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update by Peng, Xiao-Hui; Sun, Yi-Xuan; Zhang, Zheng-Hui; Wang, Yi-Fan

    ISSN: 1000-9000, 1860-4749
    Published: Singapore Springer Nature Singapore 01.05.2025
    Published in Journal of computer science and technology (01.05.2025)
    “… Moreover, existing distributed training architectures often fail to consider the constraints of resources and communication efficiency in edge environments …”
    Full text
    Journal Article
  20.

    GraNNDis: Fast Distributed Graph Neural Network Training Framework for Multi-Server Clusters by Song, Jaeyong; Jang, Hongsun; Lim, Hunseong; Jung, Jaewon; Kim, Youngsok; Lee, Jinho

    Published: ACM 13.10.2024
    “… ) Redundant memory usage and computation hinder the scalability of the distributed frameworks …”
    Full text
    Conference Paper