Joint constraint algorithm based on deep neural network with dual outputs for single-channel speech separation
Saved in:
| Published in: | Signal, Image and Video Processing, Volume 14, Issue 7, pp. 1387-1395 |
|---|---|
| Main Authors: | Sun, Linhui; Zhu, Ge; Li, Pingan |
| Format: | Journal Article |
| Language: | English |
| Publication details: | London: Springer London; Springer Nature B.V.; 2020-10-01 |
| Subjects: | |
| ISSN: | 1863-1703, 1863-1711 |
| Online Access: | Get full text |
| Abstract | Single-channel speech separation (SCSS) plays an important role in speech processing. It is an underdetermined problem, since several signals must be recovered from a single channel, which makes it especially difficult to solve. To achieve SCSS more effectively, we propose a new cost function, and a joint constraint algorithm based on this function is used to separate mixed speech signals, with the aim of separating two sources accurately at the same time. The joint constraint algorithm not only penalizes the residual sum of squares but also exploits the joint relationship between the outputs to train the dual-output DNN. With these joint constraints, the training accuracy of the separation model can be further increased. We evaluate the performance of the proposed algorithm on the GRID corpus. The experimental results show that the new algorithm obtains better speech intelligibility than the basic cost function, and it also achieves better performance in terms of source-to-distortion ratio, signal-to-interference ratio, source-to-artifact ratio, and perceptual evaluation of speech quality. |
|---|---|
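The abstract describes a dual-output DNN trained with a cost that penalizes each output's residual sum of squares plus a joint term coupling the two outputs. The record does not give the paper's exact formula, so the following is only a minimal sketch: the function name `joint_constraint_loss`, the weight `lam`, and the particular joint term (tying the sum of the two estimates to the sum of the references) are all assumptions, not the authors' formulation.

```python
import numpy as np

def joint_constraint_loss(y1, y2, s1, s2, lam=0.1):
    """Hedged sketch of a dual-output joint-constraint cost.

    y1, y2: network estimates for the two sources
    s1, s2: reference (clean) source spectra
    lam:    weight of the joint term (hypothetical value)
    """
    # Basic term: residual sum of squares for each output separately.
    rss = np.sum((y1 - s1) ** 2) + np.sum((y2 - s2) ** 2)
    # Joint term: one plausible "joint relationship between the outputs"
    # is to ask the two estimates to jointly reconstruct the mixture,
    # i.e. their sum should match the sum of the references.
    joint = np.sum(((y1 + y2) - (s1 + s2)) ** 2)
    return rss + lam * joint
```

With perfect estimates the loss is zero, and perturbing either output raises both the per-source and the joint penalty, which is the coupling effect the abstract attributes to the joint constraints.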
| Author | Sun, Linhui; Zhu, Ge; Li, Pingan |
| Author details | 1. Sun, Linhui (sunlinhuislh@126.com; ORCID 0000-0001-9442-9964), College of Telecommunications & Information Engineering, Nanjing University of Posts and Telecommunications; 2. Zhu, Ge, same affiliation; 3. Li, Pingan, same affiliation |
| Cited by (DOIs) | 10.1007/s11265-023-01891-7; 10.3389/fenvs.2024.1429410; 10.1007/s11760-025-04229-x; 10.1016/j.apacoust.2024.110076; 10.1007/s11042-022-12816-0; 10.1109/ACCESS.2024.3479292 |
| ContentType | Journal Article |
| Copyright | Springer-Verlag London Ltd., part of Springer Nature 2020 |
| DOI | 10.1007/s11760-020-01676-6 |
| Discipline | Engineering Computer Science |
| EISSN | 1863-1711 |
| EndPage | 1395 |
| ExternalDocumentID | 10_1007_s11760_020_01676_6 |
| Grant Information | National Natural Science Foundation of China (Nos. 61901227 and 61671252); Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 19KJB510049) |
| ISICitedReferencesCount | 8 |
| ISSN | 1863-1703 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 7 |
| Keywords | Dual outputs; Cost function; Single-channel speech separation; Joint constraint; Deep neural network (DNN) |
| Language | English |
| ORCID | 0000-0001-9442-9964 |
| PageCount | 9 |
| PublicationDate | 2020-10-01 |
| PublicationPlace | London |
| PublicationTitle | Signal, image and video processing |
| PublicationTitleAbbrev | SIViP |
| PublicationYear | 2020 |
| Publisher | Springer London Springer Nature B.V |
| StartPage | 1387 |
| SubjectTerms | Algorithms; Artificial neural networks; Computer Imaging; Computer Science; Cost function; Image Processing and Computer Vision; Intelligibility; Model accuracy; Multimedia Information Systems; Original Paper; Pattern Recognition and Graphics; Separation; Signal processing; Signal, Image and Speech Processing; Speech; Speech processing; Vision |
| Title | Joint constraint algorithm based on deep neural network with dual outputs for single-channel speech separation |
| URI | https://link.springer.com/article/10.1007/s11760-020-01676-6 https://www.proquest.com/docview/2442611910 |
| Volume | 14 |