Convergence and Stability of a Class of Iteratively Re-weighted Least Squares Algorithms for Sparse Signal Recovery in the Presence of Noise
| Published in: | IEEE Transactions on Signal Processing, Volume 62, Issue 1, pp. 183-195 |
|---|---|
| Main authors: | Babadi, Behtash; Ba, Demba; Purdon, Patrick L; Brown, Emery N |
| Format: | Journal Article |
| Language: | English |
| Published: | United States, 30 Oct 2013 |
| ISSN: | 1053-587X |
| Abstract | In this paper, we study the theoretical properties of a class of iteratively re-weighted least squares (IRLS) algorithms for sparse signal recovery in the presence of noise. We demonstrate a one-to-one correspondence between this class of algorithms and a class of Expectation-Maximization (EM) algorithms for constrained maximum likelihood estimation under a Gaussian scale mixture (GSM) distribution. The IRLS algorithms we consider are parametrized by 0 < ν ≤ 1 and ε > 0. The EM formalism, as well as the connection to GSMs, allows us to establish that the IRLS(ν, ε) algorithms minimize ε-smooth versions of the ℓν 'norms'. We leverage EM theory to show that, for each 0 < ν ≤ 1, the limit points of the sequence of IRLS(ν, ε) iterates are stationary points of the ε-smooth ℓν 'norm' minimization problem on the constraint set. Finally, we employ techniques from compressive sampling (CS) theory to show that the class of IRLS(ν, ε) algorithms is stable for each 0 < ν ≤ 1, if the limit point of the iterates coincides with the global minimizer. For the case ν = 1, we show that the algorithm converges exponentially fast to a neighborhood of the stationary point, and outline its generalization to super-exponential convergence for ν < 1. We demonstrate our claims via simulation experiments. The simplicity of IRLS, along with the theoretical guarantees provided in this contribution, makes a compelling case for its adoption as a standard tool for sparse signal recovery. |
|---|---|
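The abstract describes IRLS(ν, ε) algorithms that minimize ε-smooth versions of the ℓν 'norms' by repeatedly solving weighted least-squares problems. As a rough illustration only, here is a minimal NumPy sketch of one common textbook IRLS variant for the noiseless constraint Ax = y; the weight rule and update below are standard choices and are not claimed to be the exact scheme analyzed in the paper:

```python
import numpy as np

def irls(A, y, nu=1.0, eps=1e-4, n_iter=200):
    """Illustrative IRLS(nu, eps) sketch for the constraint Ax = y.

    Weights follow the eps-smoothed l_nu objective; details such as how
    eps is handled across iterations may differ from the paper's scheme.
    """
    # Minimum l2-norm starting point (lstsq returns it for underdetermined A).
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        # Smoothed weights w_i = (x_i^2 + eps^2)^(nu/2 - 1); for nu = 1 this
        # behaves like 1/|x_i|, so small entries are penalized heavily.
        w = (x ** 2 + eps ** 2) ** (nu / 2.0 - 1.0)
        Dinv = np.diag(1.0 / w)
        # Closed-form weighted minimum-norm solution of min sum_i w_i x_i^2
        # subject to Ax = y:  x = D^{-1} A^T (A D^{-1} A^T)^{-1} y.
        x = Dinv @ A.T @ np.linalg.solve(A @ Dinv @ A.T, y)
    return x
```

By construction every iterate satisfies the constraint Ax = y exactly; the re-weighting progressively concentrates the solution's energy on a few coordinates, which is the mechanism behind the sparsity-promoting behavior the abstract refers to.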
| Author | Ba, Demba; Purdon, Patrick L; Babadi, Behtash; Brown, Emery N |
| AuthorAffiliation | 1 Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139; 2 Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA 02114; 3 Harvard-MIT Division of Health Sciences and Technology |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/26549965 (view this record in MEDLINE/PubMed) |
| ContentType | Journal Article |
| DOI | 10.1109/TSP.2013.2287685 |
| DatabaseName | PubMed MEDLINE - Academic PubMed Central (Full Participant titles) |
| DatabaseTitle | PubMed MEDLINE - Academic |
| Discipline | Engineering |
| EndPage | 195 |
| ExternalDocumentID | PMC4636042 26549965 |
| Genre | Journal Article |
| GrantInformation_xml | – fundername: NIH HHS grantid: DP1 OD003646 |
| ISSN | 1053-587X |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Language | English |
| LinkModel | OpenURL |
| PMID | 26549965 |
| PQID | 1835632367 |
| PQPubID | 23479 |
| PageCount | 13 |
| ParticipantIDs | pubmedcentral_primary_oai_pubmedcentral_nih_gov_4636042 proquest_miscellaneous_1835632367 pubmed_primary_26549965 |
| PublicationCentury | 2000 |
| PublicationDate | 2013-Oct-30 |
| PublicationDateYYYYMMDD | 2013-10-30 |
| PublicationDecade | 2010 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States |
| PublicationTitle | IEEE transactions on signal processing |
| PublicationTitleAlternate | IEEE Trans Signal Process |
| PublicationYear | 2013 |
| Snippet | In this paper, we study the theoretical properties of a class of iteratively re-weighted least squares (IRLS) algorithms for sparse signal recovery in the... |
| SourceID | pubmedcentral proquest pubmed |
| SourceType | Open Access Repository Aggregation Database Index Database |
| StartPage | 183 |
| Title | Convergence and Stability of a Class of Iteratively Re-weighted Least Squares Algorithms for Sparse Signal Recovery in the Presence of Noise |
| URI | https://www.ncbi.nlm.nih.gov/pubmed/26549965 https://www.proquest.com/docview/1835632367 https://pubmed.ncbi.nlm.nih.gov/PMC4636042 |
| Volume | 62 |
| linkProvider | IEEE |