Hybrid of genetic algorithm and local search to solve MAX-SAT problem using nVidia CUDA framework

Bibliographic Details
Published in: Genetic Programming and Evolvable Machines, Vol. 10, No. 4, pp. 391-415
Main Authors: Munawar, Asim; Wahib, Mohamed; Munetomo, Masaharu; Akama, Kiyoshi
Format: Journal Article
Language: English
Published: Boston: Springer US, 01.12.2009
ISSN: 1389-2576, 1573-7632
Description
Summary: General-purpose computing over Graphics Processing Units (GPGPU) represents a major paradigm shift in parallel computing that promises dramatic increases in performance. However, GPGPUs also bring an unprecedented level of complexity to algorithmic design and software development. In this paper we describe the challenges and design choices involved in parallelizing a hybrid of a Genetic Algorithm (GA) and Local Search (LS) to solve the MAXimum SATisfiability (MAX-SAT) problem on a state-of-the-art nVidia Tesla GPU using the nVidia Compute Unified Device Architecture (CUDA). MAX-SAT is a problem of practical importance and is often solved with metaheuristic search methods such as GAs and hybrids of GA with LS. Almost all of the parallel GAs (pGAs) designed in the last two decades target either clusters or MPPs, and very little research has addressed implementing such algorithms on commodity graphics hardware. GAs in their simple form are not well suited to the Single Instruction Multiple Thread (SIMT) architecture of a GPU, and the same holds for conventional LS algorithms. In this paper we explore genetic operators that allow an efficient implementation of GAs on nVidia GPUs, and we design and introduce new techniques and operators for efficiently implementing GAs and LS on such architectures. Using an nVidia Tesla C1060 we perform several numerical tests and performance measurements and show a speedup of up to 25× in the best case. We also discuss the effects of different optimization techniques on the overall execution time.
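
The abstract describes the approach only at a high level. As a purely illustrative aid, the following minimal CUDA sketch shows one common way the fitness step of such a GA (counting satisfied clauses per candidate assignment) can be mapped to the SIMT model, with one thread per individual. The clause layout, kernel name, and launch configuration are assumptions made here for illustration and are not the authors' implementation.

// Illustrative sketch only: evaluate MAX-SAT fitness for a GA population
// on the GPU, one thread per individual (hypothetical layout, not the
// paper's actual kernel design).
#include <cstdio>
#include <cuda_runtime.h>

// Clauses are stored flattened: literals[] holds signed 1-based literals
// (+v means variable v, -v means NOT v); clauseStart[c] indexes the first
// literal of clause c, with clauseStart[numClauses] as a sentinel.
__global__ void evalMaxSat(const unsigned char *pop,   // pop[i*numVars + v] in {0,1}
                           const int *literals,
                           const int *clauseStart,
                           int numVars, int numClauses, int popSize,
                           int *fitness)                // satisfied-clause count per individual
{
    int ind = blockIdx.x * blockDim.x + threadIdx.x;
    if (ind >= popSize) return;

    const unsigned char *genome = pop + ind * numVars;
    int satisfied = 0;
    for (int c = 0; c < numClauses; ++c) {
        bool sat = false;
        for (int k = clauseStart[c]; k < clauseStart[c + 1] && !sat; ++k) {
            int lit = literals[k];
            int var = (lit > 0 ? lit : -lit) - 1;       // 0-based variable index
            bool val = genome[var] != 0;
            sat = (lit > 0) ? val : !val;
        }
        satisfied += sat ? 1 : 0;
    }
    fitness[ind] = satisfied;
}

int main() {
    // Tiny example instance: 3 variables, 2 clauses (x1 OR NOT x2), (x2 OR x3),
    // and a population of 2 candidate assignments.
    const int numVars = 3, numClauses = 2, popSize = 2;
    int h_literals[]    = { 1, -2,   2, 3 };
    int h_clauseStart[] = { 0, 2, 4 };
    unsigned char h_pop[] = { 0, 1, 0,    // individual 0
                              1, 0, 1 };  // individual 1
    int h_fitness[popSize];

    unsigned char *d_pop; int *d_lit, *d_start, *d_fit;
    cudaMalloc(&d_pop, sizeof(h_pop));
    cudaMalloc(&d_lit, sizeof(h_literals));
    cudaMalloc(&d_start, sizeof(h_clauseStart));
    cudaMalloc(&d_fit, popSize * sizeof(int));
    cudaMemcpy(d_pop, h_pop, sizeof(h_pop), cudaMemcpyHostToDevice);
    cudaMemcpy(d_lit, h_literals, sizeof(h_literals), cudaMemcpyHostToDevice);
    cudaMemcpy(d_start, h_clauseStart, sizeof(h_clauseStart), cudaMemcpyHostToDevice);

    evalMaxSat<<<1, 32>>>(d_pop, d_lit, d_start, numVars, numClauses, popSize, d_fit);
    cudaMemcpy(h_fitness, d_fit, popSize * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < popSize; ++i)
        printf("individual %d satisfies %d clauses\n", i, h_fitness[i]);

    cudaFree(d_pop); cudaFree(d_lit); cudaFree(d_start); cudaFree(d_fit);
    return 0;
}

One thread per individual keeps control flow uniform across a population of equal-length genomes; finer-grained mappings (for example, one block per individual with threads iterating over clauses and a reduction over partial counts) trade launch granularity for more parallelism per individual.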
DOI: 10.1007/s10710-009-9091-4