Novel Parallel Algorithms for Fast Multi-GPU-Based Generation of Massive Scale-Free Networks

Saved in:
Bibliographic Details
Title: Novel Parallel Algorithms for Fast Multi-GPU-Based Generation of Massive Scale-Free Networks
Authors: Alam, M., Perumalla, K. S., Sanders, P.
Source: Data Science and Engineering, 4(1), 61–75 ; ISSN: 2364-1185, 2364-1541
Publisher Information: SpringerOpen
Publication Year: 2019
Collection: KITopen (Karlsruhe Institute of Technology)
Subject Terms: GPU, Preferential attachment, Random networks, Scale-free networks, ddc:004, DATA processing & computer science, info:eu-repo/classification/ddc/004
Description: A novel parallel algorithm is presented for generating random scale-free networks using the preferential attachment model. The algorithm, named cuPPA, is custom-designed for the “single instruction multiple data” (SIMD) style of parallel processing supported by modern processors such as graphics processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and also the fastest implementation available today, for generating scale-free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. Another version of the algorithm, called cuPPA-Hash, tailored for multiple GPUs is also presented. On a single GPU, the original cuPPA algorithm delivers the best performance but is challenging to port to a multi-GPU implementation. For the multi-GPU implementation, cuPPA-Hash has been used as the parallel algorithm, achieving perfect linear speedup up to 4 GPUs. In one of the best cases, when executed on an NVIDIA GeForce 1080 GPU, the original cuPPA generates a scale-free network of two billion edges in less than 3 s. On multi-GPU platforms, cuPPA-Hash generates a scale-free network of 16 billion edges in less than 7 s on a machine consisting of 4 NVIDIA Tesla P100 GPUs.
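For context, the preferential attachment model referenced in the abstract can be sketched sequentially as follows. This is not the cuPPA or cuPPA-Hash GPU algorithm from the paper, only a minimal Barabási–Albert-style illustration; the function name and parameters are illustrative, not from the source.

```python
import random

def preferential_attachment(n, d, seed=None):
    """Generate edges of a scale-free network via preferential attachment.

    Minimal sequential sketch (not the paper's GPU algorithm): each new
    node attaches d edges to existing nodes, chosen with probability
    proportional to their current degree.
    """
    rng = random.Random(seed)
    # Seed the network with a small clique of d + 1 nodes so every
    # node ends up with degree >= d.
    edges = [(i, j) for i in range(d + 1) for j in range(i + 1, d + 1)]
    # Each endpoint occurrence in this list is one "degree unit";
    # uniform sampling from it yields degree-proportional selection.
    endpoints = [v for e in edges for v in e]
    for new in range(d + 1, n):
        targets = set()
        while len(targets) < d:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

edges = preferential_attachment(1000, 4, seed=42)
```

The endpoint-list trick makes degree-biased sampling O(1) per draw at the cost of O(m) memory; the paper's contribution is parallelizing this inherently sequential process for SIMD-style GPU execution.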
Document Type: article in journal/newspaper
File Description: application/pdf
Language: English
ISSN: 2364-1185, 2364-1541
Relation: info:eu-repo/semantics/altIdentifier/issn/2364-1185; info:eu-repo/semantics/altIdentifier/issn/2364-1541; https://publikationen.bibliothek.kit.edu/1000095128; https://publikationen.bibliothek.kit.edu/1000095128/29891341; https://doi.org/10.5445/IR/1000095128
DOI: 10.5445/IR/1000095128
Availability: https://publikationen.bibliothek.kit.edu/1000095128 ; https://publikationen.bibliothek.kit.edu/1000095128/29891341 ; https://doi.org/10.5445/IR/1000095128
Rights: https://creativecommons.org/licenses/by/4.0/deed.de ; info:eu-repo/semantics/openAccess
Accession Number: edsbas.81906AE5
Database: BASE