Search results - computation on a cluster with distributed memory

  2.

    Generating Coupled Cluster Code for Modern Distributed-Memory Tensor Software. Authors: Brandejs, Jan; Pototschnig, Johann; Saue, Trond

    ISSN: 1549-9626
    Published: United States, 12.08.2025
    “…Using GPU-based HPC platforms efficiently for coupled cluster computations is a challenge due to heterogeneous hardware structures…”
    Check access details
    Journal Article
  3.

    A massively parallel tensor contraction framework for coupled-cluster computations. Authors: Solomonik, Edgar; Matthews, Devin; Hammond, Jeff R.; Stanton, John F.; Demmel, James

    ISSN: 0743-7315, 1096-0848
    Published: Elsevier Inc, 01.12.2014
    “…Precise calculation of molecular electronic wavefunctions by methods such as coupled-cluster requires the computation of tensor contractions, the cost of which has polynomial computational scaling…”
    Get full text
    Journal Article
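    The tensor contractions this entry refers to can be written as einsum-style expressions. A minimal serial NumPy sketch of one such contraction (toy sizes, random data, and index names chosen purely for illustration; frameworks like the one above distribute the same expression over many nodes):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    T = rng.standard_normal((4, 4, 6, 6))  # toy amplitude tensor t[a, b, i, j]
    V = rng.standard_normal((4, 4, 4, 4))  # toy integral tensor v[a, b, c, d]

    # Contract over the shared indices c, d:
    # R[a, b, i, j] = sum_{c,d} V[a, b, c, d] * T[c, d, i, j]
    R = np.einsum("abcd,cdij->abij", V, T)
    ```

    The cost of this single contraction already scales as the product of all five index ranges involved, which is why distributing the work (and the tensors themselves) across nodes matters at chemically relevant sizes.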
  4.

    Using distributed memory parallel computers and GPU clusters for multidimensional Monte Carlo integration. Authors: Szałkowski, Dominik; Stpiczyński, Przemysław

    ISSN: 1532-0626, 1532-0634
    Published: Blackwell Publishing Ltd, 25.03.2015
    Published in Concurrency and computation (25.03.2015)
    “…Summary: The aim of this paper is to show that the multidimensional Monte Carlo integration can be efficiently implemented on various distributed memory parallel computers and clusters of multicore…”
    Get full text
    Journal Article
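    Monte Carlo integration parallelizes naturally: each worker draws an independent stream of sample points and only the scalar partial means need to be combined. A minimal serial sketch of that decomposition (illustrative only, not the authors' implementation; the integrand and sizes are made up):

    ```python
    import random

    def worker_estimate(n_samples, seed):
        # One worker's partial sample mean of f(x, y) = x * y over [0, 1]^2.
        # Distinct seeds stand in for independent per-node random streams.
        rng = random.Random(seed)
        total = sum(rng.random() * rng.random() for _ in range(n_samples))
        return total / n_samples

    def monte_carlo_integrate(n_workers=4, samples_per_worker=100_000):
        # In a distributed-memory run each call below would execute on a
        # different node, and only the scalar partial means would be
        # communicated (e.g. via an MPI reduction) and averaged.
        parts = [worker_estimate(samples_per_worker, seed)
                 for seed in range(n_workers)]
        return sum(parts) / len(parts)  # exact value of the integral is 0.25
    ```

    Because the workers never communicate until the final reduction, the method scales almost perfectly with node count, which is what makes it a common showcase for distributed-memory and GPU clusters.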
  5.

    Fast distributed large-pixel-count hologram computation using a GPU cluster. Authors: Pan, Yuechao; Xu, Xuewu; Liang, Xinan

    ISSN: 1539-4522, 2155-3165
    Published: United States, 10.09.2013
    “…with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU level adaptive load balancing, and node level load distribution…”
    Check access details
    Journal Article
  6.

    Parallel Computation of Component Trees on Distributed Memory Machines. Authors: Götz, Markus; Cavallaro, Gabriele; Géraud, Thierry; Book, Matthias; Riedel, Morris

    ISSN: 1045-9219, 1558-2183
    Published: New York, IEEE, 01.11.2018
    “…This work proposes a new efficient hybrid algorithm for the parallel computation of two particular component trees, the max- and min-tree, in shared and distributed memory environments…”
    Get full text
    Journal Article
  7.

    In-Memory Distributed Matrix Computation Processing and Optimization. Authors: Yu, Yongyang; Tang, Mingjie; Aref, Walid G.; Malluhi, Qutaibah M.; Abbas, Mostafa M.; Ouzzani, Mourad

    ISSN: 2375-026X
    Published: IEEE, 01.04.2017
    “…This paper presents new efficient and scalable matrix processing and optimization techniques for in-memory distributed clusters…”
    Get full text
    Conference Paper
  8.

    Automatic Generation of Distributed-Memory Mappings for Tensor Computations. Authors: Kong, Martin; Abu-Yosef, Raneem; Rountev, Atanas; Sadayappan, P.

    ISSN: 2167-4337
    Published: ACM, 11.11.2023
    “…We introduce an innovative approach to automatically produce distributed-memory parallel code for an important sub-class of affine tensor computations common to Coupled Cluster (CC…”
    Get full text
    Conference Paper
  9.

    3D DFT by block tensor-matrix multiplication via a modified Cannon's algorithm: Implementation and scaling on distributed-memory clusters with fat tree networks. Authors: Malapally, Nitin; Bolnykh, Viacheslav; Suarez, Estela; Carloni, Paolo; Lippert, Thomas; Mandelli, Davide

    ISSN: 0743-7315
    Published: Elsevier Inc, 01.11.2024
    “…We demonstrate S3DFT's efficient use of hardware resources, and its scaling using up to 16,464 cores of the JUWELS Cluster…”
    Get full text
    Journal Article
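    Classical Cannon's algorithm, which this entry modifies, circulates blocks of the two operands around a p × p process grid so that each process only ever holds one block of each matrix. A serial NumPy sketch over a simulated grid (illustrative only; `np.roll` stands in for the point-to-point shifts a real MPI implementation would perform, and each `Ab[i, j]` would live on its own rank):

    ```python
    import numpy as np

    def cannon_matmul(A, B, p):
        # Multiply n x n matrices on a simulated p x p grid (p must divide n).
        n = A.shape[0]
        b = n // p
        # Split each matrix into a p x p grid of b x b blocks: Xb[i, j].
        Ab = A.reshape(p, b, p, b).transpose(0, 2, 1, 3).copy()
        Bb = B.reshape(p, b, p, b).transpose(0, 2, 1, 3).copy()
        # Initial skew: shift row i of A left by i, column j of B up by j.
        for i in range(p):
            Ab[i] = np.roll(Ab[i], -i, axis=0)
        for j in range(p):
            Bb[:, j] = np.roll(Bb[:, j], -j, axis=0)
        C = np.zeros((p, p, b, b))
        for _ in range(p):
            # Each "process" multiplies its current pair of local blocks.
            for i in range(p):
                for j in range(p):
                    C[i, j] += Ab[i, j] @ Bb[i, j]
            # Shift A blocks one step left along rows, B blocks one step up.
            for i in range(p):
                Ab[i] = np.roll(Ab[i], -1, axis=0)
            for j in range(p):
                Bb[:, j] = np.roll(Bb[:, j], -1, axis=0)
        # Reassemble the block grid into an n x n result.
        return C.transpose(0, 2, 1, 3).reshape(n, n)
    ```

    The appeal on distributed memory is that each of the p shift rounds moves only one block per process between grid neighbours, which maps well onto the fat-tree networks the entry's scaling study targets.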
  10.

    Nime: a native in-memory compute framework for cluster computing. Authors: Chen, Chao; Wang, Zhenghua; Jiang, Chen; Wang, Zheng

    ISSN: 1386-7857, 1573-7543
    Published: New York, Springer US, 01.09.2025
    Published in Cluster computing (01.09.2025)
    “…In this paper, we present NIME, a native in-memory compute framework for cluster computing, that aims to perform parallel task processing using native executors…”
    Get full text
    Journal Article
  11.

    MespaConfig: Memory-Sparing Configuration Auto-Tuning for Co-Located In-Memory Cluster Computing Jobs. Authors: Zong, Zan; Wen, Lijie; Hu, Xuming; Han, Rui; Qian, Chen; Lin, Li

    ISSN: 1939-1374, 2372-0204
    Published: Piscataway, IEEE, 01.09.2022
    “…Distributed in-memory computing frameworks usually have lots of parameters (e.g., the buffer size of shuffle…”
    Get full text
    Journal Article
  12.

    Efficient GPU Computation of Large Protein Solvent-Excluded Surface. Authors: Plateau-Holleville, Cyprien; Maria, Maxime; Mérillou, Stéphane; Montes, Matthieu

    ISSN: 1077-2626, 1941-0506
    Published: United States, IEEE, 01.04.2025
    “…While several methods targeted its computation, the ability to process large molecular structures to address the introduction of big complex analysis while leveraging the massively parallel…”
    Get full text
    Journal Article
  13.

    A distributed in-memory key-value store system on heterogeneous CPU–GPU cluster. Authors: Zhang, Kai; Wang, Kaibo; Yuan, Yuan; Guo, Lei; Li, Rubao; Zhang, Xiaodong; He, Bingsheng; Hu, Jiayu; Hua, Bei

    ISSN: 1066-8888, 0949-877X
    Published: Berlin/Heidelberg, Springer Berlin Heidelberg, 01.10.2017
    Published in The VLDB journal (01.10.2017)
    “…In-memory key-value stores play a critical role in many data-intensive applications to provide high-throughput and low-latency data accesses…”
    Get full text
    Journal Article
  14.

    Memristive Computational Memory Using Memristor Overwrite Logic (MOL). Authors: Alhaj Ali, Khaled; Rizk, Mostafa; Baghdadi, Amer; Diguet, Jean-Philippe; Jomaah, Jalal; Onizawa, Naoya; Hanyu, Takahiro

    ISSN: 1063-8210, 1557-9999
    Published: New York, IEEE, 01.11.2020
    “…), associated with an original MOL-based computational memory. MOL relies on a fully digital representation of memristors and can operate with different memristive device technologies…”
    Get full text
    Journal Article
  15.

    Fast Computation of Electromagnetic Scattering From a Metal-Dielectric Composite and Randomly Distributed BoRs Cluster. Authors: Gu, Jihong; He, Zi; Yin, Hongcheng; Chen, Rushan

    ISSN: 0018-926X, 1558-2221
    Published: New York, IEEE, 01.12.2019
    “…An efficient equivalence principle algorithm with spherical equivalent source (SEPA) is proposed to analyze the electromagnetic scattering from a metal-dielectric composite and randomly distributed bodies of revolution…”
    Get full text
    Journal Article
  16.

    Parallel Spectral Clustering in Distributed Systems. Authors: Chen, Wen-Yen; Song, Yangqiu; Bai, Hongjie; Lin, Chih-Jen; Chang, Edward Y.

    ISSN: 0162-8828, 1939-3539, 2160-9292
    Published: Los Alamitos, CA, IEEE, 01.03.2011
    “…We parallelize both memory use and computation on distributed computers. Through an empirical study on a document data set of 193,844 instances and a photo data set of 2,121,863, we show that our parallel algorithm can effectively handle large problems…”
    Get full text
    Journal Article
  17.

    Distributed Discrete Morse Sandwich: Efficient Computation of Persistence Diagrams for Massive Scalar Data. Authors: Le Guillou, Eve; Fortin, P.; Tierny, J.

    ISSN: 1045-9219
    Published: Institute of Electrical and Electronics Engineers, 2025
    “…In this work, we extend DMS to distributed-memory parallelism for the efficient and scalable computation of persistence diagrams for massive datasets across multiple compute nodes…”
    Get full text
    Journal Article
  18.

    Scalable distributed data cube computation for large-scale multidimensional data analysis on a Spark cluster. Authors: Lee, Suan; Kang, Seok; Kim, Jinho; Yu, Eun Jung

    ISSN: 1386-7857, 1573-7543
    Published: New York, Springer US, 01.01.2019
    Published in Cluster computing (01.01.2019)
    “…However, MapReduce incurs the overhead of disk I/Os and network traffic. To overcome these MapReduce limitations, Spark was recently proposed as a memory-based parallel/distributed processing framework…”
    Get full text
    Journal Article
  19.

    DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update. Authors: Peng, Xiao-Hui; Sun, Yi-Xuan; Zhang, Zheng-Hui; Wang, Yi-Fan

    ISSN: 1000-9000, 1860-4749
    Published: Singapore, Springer Nature Singapore, 01.05.2025
    “…Moreover, existing distributed training architectures often fail to consider the constraints of resources and communication efficiency in edge environments…”
    Get full text
    Journal Article