Search Results - Computing methodologies → Parallel computing methodologies
1. SFLU: Synchronization-Free Sparse LU Factorization for Fast Circuit Simulation on GPUs
Conference Proceeding (IEEE). Published in 2021 58th ACM/IEEE Design Automation Conference (DAC), 05.12.2021.
“…Sparse LU factorization is one of the key building blocks of sparse direct solvers and often dominates the computing time of circuit simulation programs…”
2. Skywalker: Efficient Alias-Method-Based Graph Sampling and Random Walk on GPUs
Conference Proceeding (IEEE). Published in 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT), 01.09.2021.
“…Graph sampling and random walk operations, capturing the structural properties of graphs, are playing an important role today as we cannot directly adopt computing-intensive algorithms on large-scale graphs…”
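The alias method named in Skywalker's title is the classical constant-time-per-sample technique for drawing from a weighted discrete distribution after a linear-time table build. The sketch below is a minimal single-threaded C++ illustration of that general technique (Vose's construction), not Skywalker's GPU implementation; the type and member names are hypothetical.

```cpp
// Minimal sketch of the classical alias method (Vose's construction).
// Illustrative only; not Skywalker's GPU kernel.
#include <cstddef>
#include <random>
#include <vector>

struct AliasTable {
    std::vector<double> prob;   // acceptance probability per bucket
    std::vector<size_t> alias;  // fallback index per bucket

    explicit AliasTable(const std::vector<double>& weights) {
        const size_t n = weights.size();
        prob.resize(n);
        alias.resize(n);
        double total = 0.0;
        for (double w : weights) total += w;

        // Scale weights so the average bucket holds probability 1.
        std::vector<double> scaled(n);
        std::vector<size_t> small, large;
        for (size_t i = 0; i < n; ++i) {
            scaled[i] = weights[i] * n / total;
            (scaled[i] < 1.0 ? small : large).push_back(i);
        }
        // Pair each under-full bucket with an over-full one.
        while (!small.empty() && !large.empty()) {
            size_t s = small.back(); small.pop_back();
            size_t l = large.back(); large.pop_back();
            prob[s] = scaled[s];
            alias[s] = l;
            scaled[l] -= 1.0 - scaled[s];
            (scaled[l] < 1.0 ? small : large).push_back(l);
        }
        for (size_t i : small) prob[i] = 1.0;
        for (size_t i : large) prob[i] = 1.0;
    }

    // O(1) sampling: pick a bucket uniformly, then accept it or take its alias.
    size_t sample(std::mt19937& rng) const {
        std::uniform_int_distribution<size_t> bucket(0, prob.size() - 1);
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        size_t i = bucket(rng);
        return coin(rng) < prob[i] ? i : alias[i];
    }
};
```

In a random-walk setting, one such table per vertex (built over its outgoing edge weights) lets each step of a walk be drawn in constant time, which is what makes the method attractive to build and query in parallel.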
3. Parallelizing Maximal Clique Enumeration on GPUs
Conference Proceeding (IEEE). Published in 2023 32nd International Conference on Parallel Architectures and Compilation Techniques (PACT), 21.10.2023.
“…We propose to parallelize MCE on GPUs by performing depth-first traversal of independent subtrees in parallel…”
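Maximal clique enumeration is commonly formulated as a Bron–Kerbosch recursion, whose search tree splits into independent subtrees, the units the snippet above says are traversed in parallel on the GPU. The sketch below is a plain sequential C++ Bron–Kerbosch with pivoting over an adjacency-set representation assumed for illustration; it is not the paper's GPU code.

```cpp
// Sequential Bron–Kerbosch with pivoting over a small undirected graph.
// Each recursive call explores an independent subtree of the search space,
// which is the unit a GPU parallelization can distribute across threads.
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <set>
#include <vector>

using VSet = std::set<int>;

VSet intersect(const VSet& a, const VSet& b) {
    VSet out;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::inserter(out, out.begin()));
    return out;
}

// R: current clique, P: candidate vertices, X: already-processed vertices.
void bron_kerbosch(VSet R, VSet P, VSet X,
                   const std::vector<VSet>& adj,
                   std::vector<VSet>& cliques) {
    if (P.empty() && X.empty()) { cliques.push_back(R); return; }
    // Pivot on the vertex of P ∪ X with the most neighbors in P.
    int pivot = -1; size_t best = 0;
    for (const VSet* s : {&P, &X})
        for (int v : *s) {
            size_t deg = intersect(P, adj[v]).size();
            if (pivot < 0 || deg > best) { pivot = v; best = deg; }
        }
    VSet candidates = P;
    for (int v : adj[pivot]) candidates.erase(v);
    for (int v : candidates) {
        VSet R2 = R; R2.insert(v);
        bron_kerbosch(R2, intersect(P, adj[v]), intersect(X, adj[v]), adj, cliques);
        P.erase(v);
        X.insert(v);
    }
}

int main() {
    // Tiny example: a triangle 0-1-2 plus an edge 2-3.
    std::vector<VSet> adj = {{1, 2}, {0, 2}, {0, 1, 3}, {2}};
    std::vector<VSet> cliques;
    bron_kerbosch({}, {0, 1, 2, 3}, {}, adj, cliques);
    for (const VSet& c : cliques) {        // prints the maximal cliques
        for (int v : c) std::printf("%d ", v);
        std::printf("\n");
    }
}
```

For this toy graph the recursion reports the two maximal cliques {0, 1, 2} and {2, 3}.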
4. Leveraging Difference Recurrence Relations for High-Performance GPU Genome Alignment
Conference Proceeding (ACM). Published in 2024 33rd International Conference on Parallel Architectures and Compilation Techniques (PACT), 13.10.2024.
“…Genome pairwise sequence alignment is one of the most computationally intensive workloads in many genomic pipelines, often accounting for over 90% of the…”
5. MAD-Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems
Conference Proceeding (IEEE). Published in 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), 29.06.2024.
“…Training and deploying large-scale machine learning models is time-consuming, requires significant distributed computing infrastructures, and incurs high operational costs…”
6. TSUNAMI: A GPU Implementation of the WFA Algorithm
Conference Proceeding (IEEE). Published in 2023 32nd International Conference on Parallel Architectures and Compilation Techniques (PACT), 21.10.2023.
“…TSUNAMI exploits GPU high-parallel computing to accelerate…”
7. pSyncPIM: Partially Synchronous Execution of Sparse Matrix Operations for All-Bank PIM Architectures
Conference Proceeding (IEEE). Published in 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), 29.06.2024.
“…Recent commercial incarnations of processing-in-memory (PIM) maintain the standard DRAM interface and employ the all-bank mode execution to maximize bank-level…”
8. OpenDRC: An Efficient Open-Source Design Rule Checking Engine with Hierarchical GPU Acceleration
Conference Proceeding (IEEE). Published in 2023 60th ACM/IEEE Design Automation Conference (DAC), 09.07.2023.
“…OpenDRC maintains hierarchical layouts with layer-wise bounding volume hierarchies and performs adaptive row-based partition to identify independent regions for check pruning and/or parallel processing…”
9. Accelerating Fourier and Number Theoretic Transforms using Tensor Cores and Warp Shuffles
Conference Proceeding (IEEE). Published in 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT), 01.09.2021.
“…However, despite their usefulness and utility, their adoption continues to be a challenge as computing the DFT of a signal can be a time-consuming and expensive operation…”
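The snippet refers to the discrete Fourier transform and its modular-arithmetic counterpart, the number theoretic transform (NTT). As a point of reference for what such kernels compute, here is a deliberately naive O(n²) NTT over a small prime field in C++; the paper's tensor-core and warp-shuffle implementation is a fast (FFT-style), heavily parallelized version of the same transform. The modulus and primitive root below are standard textbook choices assumed for illustration.

```cpp
// Naive O(n^2) number theoretic transform over Z_p, p = 998244353 = 119*2^23 + 1.
// Illustration of the transform itself, not the paper's tensor-core kernel.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr uint64_t P = 998244353;  // NTT-friendly prime
constexpr uint64_t G = 3;          // primitive root modulo P

uint64_t pow_mod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    for (b %= m; e; e >>= 1, b = b * b % m)
        if (e & 1) r = r * b % m;
    return r;
}

// X[k] = sum_j x[j] * w^(j*k) mod P, where w is a primitive n-th root of unity.
std::vector<uint64_t> ntt_naive(const std::vector<uint64_t>& x) {
    size_t n = x.size();                     // n must divide P - 1
    uint64_t w = pow_mod(G, (P - 1) / n, P);
    std::vector<uint64_t> X(n, 0);
    for (size_t k = 0; k < n; ++k)
        for (size_t j = 0; j < n; ++j)
            X[k] = (X[k] + x[j] * pow_mod(w, j * k, P)) % P;
    return X;
}

int main() {
    std::vector<uint64_t> x = {1, 2, 3, 4};  // n = 4 divides P - 1
    for (uint64_t v : ntt_naive(x)) std::printf("%llu ", (unsigned long long)v);
    std::printf("\n");
}
```

Replacing the modular root of unity w with the complex exponential e^(-2πi/n) gives the ordinary DFT; the O(n log n) butterfly decomposition of this double loop is what GPU implementations map onto tensor cores and warp shuffles.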
10. Seer: Predictive Runtime Kernel Selection for Irregular Problems
Conference Proceeding (IEEE). ISSN: 2643-2838. Published in Proceedings / International Symposium on Code Generation and Optimization, 02.03.2024.
“…Modern GPUs are designed for regular problems and suffer from load imbalance when processing irregular data. Prior to our work, a domain expert selects the…”
11. NDFT: Accelerating Density Functional Theory Calculations via Hardware/Software Co-Design on Near-Data Computing System
Conference Proceeding (IEEE). Published in 2025 62nd ACM/IEEE Design Automation Conference (DAC), 22.06.2025.
“…Linear-response time-dependent Density Functional Theory (LR-TDDFT) is a widely used method for accurately predicting the excited-state properties of physical…”
12. DARIS: An Oversubscribed Spatio-Temporal Scheduler for Real-Time DNN Inference on GPUs
Conference Proceeding (IEEE). Published in 2025 62nd ACM/IEEE Design Automation Conference (DAC), 22.06.2025.
“…In particular, DARIS improves GPU utilization and uniquely analyzes GPU concurrency by oversubscribing computing resources…”
13. Ultra Efficient Acceleration for De Novo Genome Assembly via Near-Memory Computing
Conference Proceeding (IEEE). Published in 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT), 01.09.2021.
“…De novo assembly of genomes for which there is no reference, is essential for novel species discovery and metagenomics. In this work, we accelerate two key…”
14. SpV8: Pursuing Optimal Vectorization and Regular Computation Pattern in SpMV
Conference Proceeding (IEEE). Published in 2021 58th ACM/IEEE Design Automation Conference (DAC), 05.12.2021.
“…Sparse Matrix-Vector Multiplication (SpMV) plays an important role in many scientific and industry applications, and remains a well-known challenge due to the…”
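SpMV is usually expressed over a compressed sparse row (CSR) matrix, and work like SpV8 concerns how to vectorize that loop with a regular computation pattern. For context, here is the plain scalar CSR SpMV baseline that such schemes reorganize, sketched in C++ under an assumed CSR layout; it is not SpV8's kernel.

```cpp
// Baseline scalar SpMV, y = A * x, with A stored in CSR format.
// Sketch of the standard kernel that vectorization schemes restructure.
#include <cstddef>
#include <cstdio>
#include <vector>

struct CsrMatrix {
    size_t rows;
    std::vector<size_t> row_ptr;   // size rows + 1; row i spans [row_ptr[i], row_ptr[i+1])
    std::vector<size_t> col_idx;   // column index of each stored nonzero
    std::vector<double> values;    // value of each stored nonzero
};

std::vector<double> spmv(const CsrMatrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.rows, 0.0);
    for (size_t i = 0; i < A.rows; ++i)
        for (size_t k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            y[i] += A.values[k] * x[A.col_idx[k]];
    return y;
}

int main() {
    // 2x2 example: [[10, 0], [3, 4]] times [1, 2].
    CsrMatrix A{2, {0, 1, 3}, {0, 0, 1}, {10.0, 3.0, 4.0}};
    for (double v : spmv(A, {1.0, 2.0})) std::printf("%g ", v);  // prints: 10 11
    std::printf("\n");
}
```

The irregularity that makes this loop hard to vectorize is visible in the inner bound row_ptr[i + 1] and the indirect access x[col_idx[k]], both of which vary per row.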
15. HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference
Conference Proceeding (IEEE). Published in 2025 62nd ACM/IEEE Design Automation Conference (DAC), 22.06.2025.
“…The Mixture of Experts (MoE) architecture has demonstrated significant advantages as it enables to increase the model capacity without a proportional increase…”
16. Gluon-Async: A Bulk-Asynchronous System for Distributed and Heterogeneous Graph Analytics
Conference Proceeding (IEEE). ISSN: 2641-7936. Published in Proceedings / International Conference on Parallel Architectures and Compilation Techniques, 01.09.2019.
“…Distributed graph analytics systems for CPUs, like D-Galois and Gemini, and for GPUs, like D-IrGL and Lux, use a bulk-synchronous parallel (BSP…”
17. Versatile Cross-platform Compilation Toolchain for Schrödinger-style Quantum Circuit Simulation
Conference Proceeding (IEEE). Published in 2025 62nd ACM/IEEE Design Automation Conference (DAC), 22.06.2025.
“…While existing quantum hardware resources have limited availability and reliability, there is a growing demand for exploring and verifying quantum algorithms…”
18. PID-Comm: A Fast and Flexible Collective Communication Framework for Commodity Processing-in-DIMM Devices
Conference Proceeding (IEEE). Published in 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), 29.06.2024.
“…Many highly parallel applications have been shown to benefit from these PIM-enabled DIMMs, but further speedup is often limited by the huge overhead of inter-PE collective communication…”
19. GPU Acceleration of RSA is Vulnerable to Side-channel Timing Attacks
Conference Proceeding (ACM). ISSN: 1558-2434. Published in 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 01.11.2018.
“…With the advent of general-purpose GPUs, the performance of RSA has been improved significantly by exploiting parallel computing on a GPU [9], [18], [23], [26…”
20. DenSparSA: A Balanced Systolic Array Approach for Dense and Sparse Matrix Multiplication
Conference Proceeding (IEEE). Published in 2025 62nd ACM/IEEE Design Automation Conference (DAC), 22.06.2025.
“…Numerous studies have proposed hardware architectures to accelerate sparse matrix multiplication, but these approaches often incur substantial area and power…”