Performance comparisons of greedy algorithms in compressed sensing

Bibliographic Details
Published in: Numerical Linear Algebra with Applications, Vol. 22, No. 2, pp. 254-282
Main Authors: Blanchard, Jeffrey D.; Tanner, Jared
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.03.2015
Wiley Subscription Services, Inc.
ISSN: 1070-5325, 1099-1506
Description
Summary: Compressed sensing has motivated the development of numerous sparse approximation algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, greedy sparse approximation algorithms have been observed both to recover the sparsest solution for problem sizes comparable to other algorithms and to be computationally efficient; however, little theory is known about their average-case behavior. We conduct a large-scale empirical investigation into the behavior of three state-of-the-art greedy algorithms: Normalized Iterative Hard Thresholding (NIHT), Hard Thresholding Pursuit (HTP), and CSMPSP. The investigation considers a variety of random classes of linear systems. The region of problem sizes in which each algorithm reliably recovers the sparsest solution is accurately determined, and throughout this region, additional performance characteristics are presented. Contrasting the recovery regions and the average computational time for each algorithm, we present algorithm selection maps, which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Although no algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of large recovery region, absolute recovery time, and robustness of these properties to additive noise across a variety of problem classes. A principal difference between NIHT and the more sophisticated HTP and CSMPSP is the balance of asymptotic convergence rate against computational cost prior to potential support set updates. The data suggest that NIHT is typically faster than HTP and CSMPSP because of greater flexibility in updating the support, which limits unnecessary computation on incorrect support sets. The algorithm selection maps presented here are the first of their kind for compressed sensing. Copyright © 2014 John Wiley & Sons, Ltd.
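For context on the support-update mechanism the summary refers to, the following is a minimal sketch of Normalized Iterative Hard Thresholding (NIHT) in Python/NumPy. It is an illustrative simplification under stated assumptions, not the authors' benchmarked implementation: each iteration takes a gradient step whose step size is normalized on the current support, then hard-thresholds to the k largest-magnitude entries, which is the point at which the support may change. The step-size safeguard NIHT applies when the support changes is omitted, and the Gaussian problem instance in the usage example is an arbitrary choice for illustration.

```python
import numpy as np

def niht(A, y, k, max_iter=300, tol=1e-6):
    """Simplified NIHT sketch: normalized gradient step followed by hard thresholding.

    Omits the step-size safeguard used when the support changes between iterations.
    """
    m, n = A.shape
    x = np.zeros(n)
    # Initial support: indices of the k largest-magnitude entries of A^T y.
    support = np.argsort(np.abs(A.T @ y))[-k:]
    for _ in range(max_iter):
        r = y - A @ x                        # residual
        g = A.T @ r                          # gradient of 0.5 * ||y - A x||^2
        g_s = np.zeros(n)
        g_s[support] = g[support]            # gradient restricted to the current support
        Ag_s = A @ g_s
        denom = Ag_s @ Ag_s
        mu = (g_s @ g_s) / denom if denom > 0 else 1.0   # support-normalized step size
        x_prop = x + mu * g
        # Hard threshold: keep the k largest-magnitude entries (support may change here).
        support = np.argsort(np.abs(x_prop))[-k:]
        x_new = np.zeros(n)
        x_new[support] = x_prop[support]
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
            x = x_new
            break
        x = x_new
    return x

# Usage on a synthetic k-sparse problem (sizes are arbitrary, for illustration only).
rng = np.random.default_rng(0)
m, n, k = 120, 400, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = niht(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```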
Bibliography: ArticleID: NLA1948
Supporting info item
ark:/67375/WNG-VWCCJZSM-C
istex:B548056F707B78BC27C6BBAC3381B6FA838855F3
DOI: 10.1002/nla.1948