Multichannel acoustic source and image dataset for the cocktail party effect in hearing aid and implant users

Bibliographic Details
Published in: Scientific Data, Vol. 7, no. 1, article 440 (13 pages)
Main Authors: Fischer, Tim, Caversaccio, Marco, Wimmer, Wilhelm
Format: Journal Article
Language:English
Published: London: Nature Publishing Group UK, 17.12.2020
Subjects:
ISSN: 2052-4463
Abstract The Cocktail Party Effect refers to the ability of the human sense of hearing to extract a specific target sound source from a mixture of background noises in complex acoustic scenarios. The ease with which normal-hearing people perform this challenging task is in stark contrast to the difficulties that hearing-impaired subjects face in these situations. To help patients with hearing aids and implants, scientists are trying to imitate this ability of human hearing, with modest success so far. To support the scientific community in its efforts, we provide the Bern Cocktail Party (BCP) dataset, consisting of 55,938 Cocktail Party scenarios recorded from 20 people and a head and torso simulator wearing cochlear implant audio processors. The data were collected in an acoustic chamber with 16 synchronized microphones placed at purposeful positions on the participants’ heads. In addition to the multi-channel audio source and image recordings, the spatial coordinates of the microphone positions were digitized for each participant. Python scripts are provided to facilitate data processing.
Measurement(s): acoustic Cocktail Party scenarios • acoustic composition of speech • Noise
Technology Type(s): head-mounted microphones
Factor Type(s): head shape of the participants • acoustic Cocktail Party scenarios played
Sample Characteristic - Organism: Homo sapiens
Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.13181921
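The abstract describes 16-channel recordings from synchronized head-mounted microphones, distributed with Python processing scripts. As a minimal, hypothetical sketch (not the dataset's own tooling; the file name `bcp_demo.wav` and all parameter values are invented for illustration), a 16-channel PCM WAV file of the kind such a dataset would contain can be written and inspected with the standard-library `wave` module:

```python
import struct
import wave

# Invented parameters for illustration only; the actual BCP files
# may use different rates, widths, and naming conventions.
N_CHANNELS = 16     # one channel per head-mounted microphone
RATE = 44100        # sample rate in Hz
FRAMES = 8          # a few sample frames, just to have data

# Write a tiny 16-channel, 16-bit PCM WAV file.
with wave.open("bcp_demo.wav", "wb") as w:
    w.setnchannels(N_CHANNELS)
    w.setsampwidth(2)  # 2 bytes = 16-bit samples
    w.setframerate(RATE)
    # Samples are interleaved frame-major: ch0, ch1, ..., ch15, ch0, ...
    payload = struct.pack("<" + "h" * N_CHANNELS * FRAMES,
                          *range(N_CHANNELS * FRAMES))
    w.writeframes(payload)

# Read it back and report the channel layout.
with wave.open("bcp_demo.wav", "rb") as r:
    print(r.getnchannels(), r.getframerate(), r.getnframes())
    # prints: 16 44100 8
```

For real multichannel analysis one would typically deinterleave the frames into a (frames, channels) array, e.g. with NumPy, before aligning channels against the digitized microphone coordinates.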
ArticleNumber 440
Author Caversaccio, Marco
Wimmer, Wilhelm
Fischer, Tim
Author_xml – sequence: 1
  givenname: Tim
  orcidid: 0000-0003-4584-6096
  surname: Fischer
  fullname: Fischer, Tim
  email: tim.fischer@artorg.unibe.ch
  organization: Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern
– sequence: 2
  givenname: Marco
  orcidid: 0000-0002-7090-8087
  surname: Caversaccio
  fullname: Caversaccio, Marco
  organization: Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern
– sequence: 3
  givenname: Wilhelm
  orcidid: 0000-0001-5392-2074
  surname: Wimmer
  fullname: Wimmer, Wilhelm
  email: wilhelm.wimmer@insel.ch
  organization: Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern
BackLink: https://www.ncbi.nlm.nih.gov/pubmed/33335098 (view this record in MEDLINE/PubMed)
BookMark eNp9Ustu1DAUjVARLaU_wAJZYsMm4FfiZIOEKqCVitjA2rpxrmc8ZOzBdir173E6Q1-LeuPXOUfn3nteV0c-eKyqt4x-ZFR0n5JkTa9qymlNqVKq7l5UJ5w2vJayFUcPzsfVWUobSikTkjaKvqqORVkN7buTavtjnrIza_AeJwImzKlcSQpzNEjAj8RtYYVkhAwJM7EhkrxGYoL5k8FNZAcx3xC0Fk0mzpM1QnR-RcCNB_puAp_JnDCmN9VLC1PCs8N-Wv3-9vXX-UV99fP75fmXq9o0kuaagRSNGiWAAhR2VIz3lrfS0hZ7yYdBwdgJJjjaxjJlxq5RlholgBnGOhCn1eVedwyw0btYaog3OoDTtw8hrnSx7cyEujWcDR2VTHIrraQ9k0Mr1IAAxYNiRevzXms3D1scDfocYXok-vjHu7VehWutlFStoEXgw0Eghr8zpqy3LhmcSluwtFtzqZhsW6ZEgb5_At2UQfjSqgUlWN8xvjh699DRnZX_Uy0AvgeYGFKKaO8gjOolPXqfHl3So2_ToxdS94RkXIbswlKVm56nij017ZbZY7y3_QzrHwqk2Os
ContentType Journal Article
Copyright The Author(s) 2020
The Author(s) 2020. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Copyright_xml – notice: The Author(s) 2020
– notice: The Author(s) 2020. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1038/s41597-020-00777-8
DatabaseName Springer Nature OA Free Journals
CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
MEDLINE
MEDLINE
PubMed
ProQuest Central (Corporate)
Health & Medical Collection
ProQuest Central (purchase pre-March 2016)
Medical Database (Alumni Edition)
ProQuest SciTech Collection
ProQuest Natural Science Journals
ProQuest Hospital Collection
Hospital Premium Collection (Alumni Edition)
ProQuest Central (Alumni) (purchase pre-March 2016)
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Central Essentials
Biological Science Collection
ProQuest Central
Natural Science Collection
ProQuest One Community College
ProQuest Central
Proquest Health Research Premium Collection
Health Research Premium Collection (Alumni)
ProQuest Central Student
SciTech Premium Collection
ProQuest Health & Medical Complete (Alumni)
Biological Sciences
ProQuest Health & Medical Collection
Medical Database
ProQuest Central Biological Science Database (via ProQuest)
ProQuest Central Premium
ProQuest One Academic
Publicly Available Content Database
ProQuest Health & Medical Research Collection
ProQuest One Academic Middle East (New)
ProQuest One Health & Nursing
ProQuest One Academic Eastern Edition (DO NOT USE)
One Applied & Life Sciences
ProQuest One Academic (retired)
ProQuest One Academic UKI Edition
ProQuest Central China
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Directory of Open Access Journals
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 3
  dbid: PIMPY
  name: Publicly Available Content Database
  url: http://search.proquest.com/publiccontent
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Sciences (General)
EISSN 2052-4463
EndPage 13
ExternalDocumentID oai_doaj_org_article_6c21b804142f4f40914b637beaa35771
PMC7747630
33335098
10_1038_s41597_020_00777_8
Genre Dataset
Journal Article
IEDL.DBID DOA
ISICitedReferencesCount 7
ISSN 2052-4463
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
License The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
LinkModel DirectLink
ORCID 0000-0003-4584-6096
0000-0002-7090-8087
0000-0001-5392-2074
OpenAccessLink https://doaj.org/article/6c21b804142f4f40914b637beaa35771
PMID 33335098
PQID 2473198121
PQPubID 2041912
PageCount 13
ParticipantIDs doaj_primary_oai_doaj_org_article_6c21b804142f4f40914b637beaa35771
pubmedcentral_primary_oai_pubmedcentral_nih_gov_7747630
proquest_miscellaneous_2471466173
proquest_journals_2473198121
pubmed_primary_33335098
crossref_primary_10_1038_s41597_020_00777_8
crossref_citationtrail_10_1038_s41597_020_00777_8
springer_journals_10_1038_s41597_020_00777_8
PublicationCentury 2000
PublicationDate 2020-12-17
PublicationDateYYYYMMDD 2020-12-17
PublicationDate_xml – month: 12
  year: 2020
  text: 2020-12-17
  day: 17
PublicationDecade 2020
PublicationPlace London
PublicationPlace_xml – name: London
– name: England
PublicationTitle Scientific data
PublicationTitleAbbrev Sci Data
PublicationTitleAlternate Sci Data
PublicationYear 2020
Publisher Nature Publishing Group UK
Nature Publishing Group
Nature Portfolio
Publisher_xml – name: Nature Publishing Group UK
– name: Nature Publishing Group
– name: Nature Portfolio
References Richey, C. et al. Voices obscured in complex environmental settings (voices) corpus. arXiv preprint arXiv:1804.05053 (2018).
Fischer, T., Caversaccio, M. & Wimmer, W. A front-back confusion metric in horizontal sound localization: The fbc score. In ACM Symposium on Applied Perception 2020, SAP ’20, https://doi.org/10.1145/3385955.3407928 (Association for Computing Machinery, New York, NY, USA, 2020).
AvanPGiraudetFBükiBImportance of binaural hearingAudiol. Neurotol.2015203610.1159/000380741
BozkırMGKaraka¸sPYavuzMDereFMorphometry of the external ear in our adult populationAesthetic plastic surgery200630818510.1007/s00266-005-6095-1
WimmerWWederSCaversaccioMKompisMSpeech intelligibility in noise with a pinna effect imitating cochlear implant processorOtol. & neurotology201637192310.1097/MAO.0000000000000866
GannotSVincentEMarkovich-GolanSOzerovAA consolidated perspective on multimicrophone speech enhancement and source separationIEEE/ACM Transactions on Audio, Speech, Lang. Process.20172569273010.1109/TASLP.2016.2647702
BushbyKColeTMatthewsJGoodshipJCentiles for adult head circumferenceArch. disease childhood199267128612871:STN:280:DyaK3s%2FnsVSmuw%3D%3D10.1136/adc.67.10.1286
BertinNVoicehome-2, an extended corpus for multichannel speech processing in real homesSpeech Commun.2019106687810.1016/j.specom.2018.11.002
Cuevas-RodríguezM3D Tune-In Toolkit: An open-source library for real-time binaural spatialisationPLOS ONE201914e02118991:CAS:528:DC%2BC1MXnvFemur0%3D10.1371/journal.pone.0211899308561986411112
DenkFErnstSMEwertSDKollmeierBAdapting hearing devices to the individual ear acoustics: Database and target response correction functions for various device stylesTrends hearing2018222331216518779313
International Telecommunication Union. Recommendation itu-r bs.2051-2. In Advanced sound system for programme production (ITU, 2018).
Cosentino, J., Pariente, M., Cornell, S., Deleforge, A. & Vincent, E. Librimix: An open-source dataset for generalizable speech separation. arXiv preprint arXiv:2005.11262 (2020).
DrudeLHaeb-UmbachRIntegration of neural networks and probabilistic spatial models for acoustic blind source separationIEEE J. Sel. Top. Signal Process.2019138158262019ISTSP..13..815D10.1109/JSTSP.2019.2912565
BiancoMJMachine learning in acoustics: Theory and applications. TheJ. Acoust. Soc. Am.2019146359036282019ASAJ..146.3590B10.1121/1.5133944
Van SegbroeckMDipco–dinner party corpusarXiv preprint arXiv2019190913447
Middlebrooks, J. C., Simon, J. Z., Popper, A. N. & Fay, R. R. The auditory system at the cocktail party, vol. 60 (Springer, 2017).
Fischer, T. et al. Pinna-imitating microphone directionality improves sound localization and discrimination in bilateral cochlear implant users. Ear Hear. (in print)https://doi.org/10.1097/AUD.0000000000000912 (2020).
Drude, L., Hasenklever, D. & Haeb-Umbach, R. Unsupervised training of a deep clustering model for multichannel blind source separation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 695–699 (IEEE, 2019).
Kim, C. & Stern, R. M. Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis. In Ninth Annual Conference of the International Speech Communication Association (2008).
CucisP-AHearing in noise: The importance of coding strategies—normal-hearing subjects and cochlear implant usersAppl. Sci.2019973410.3390/app9040734
RayleighLXii. on our perception of sound directionThe London, Edinburgh, Dublin Philos. Mag. J. Sci.19071321423210.1080/14786440709463595
Snyder, D., Chen, G. & Povey, D. Musan: A music, speech, and noise corpus. arXiv preprint arXiv:1510.08484 (2015).
SainathTNMultichannel signal processing with deep neural networks for automatic speech recognitionIEEE/ACM Transactions on Audio, Speech, Lang. Process.20172596597910.1109/TASLP.2017.2672401
Zen, H. et al. Libritts: A corpus derived from librispeech for text-to-speech. arXiv preprint arXiv:1904.02882 (2019).
Vacher, M. et al. The sweet-home speech and multimodal corpus for home automation interaction (2014).
Panayotov, V., Chen, G., Povey, D. & Khudanpur, S. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5206–5210 (IEEE, 2015).
MostefaDThe chil audiovisual corpus for lecture and meeting analysis inside smart roomsLang. resources evaluation20074138940710.1007/s10579-007-9054-4
WimmerWKompisMStiegerCCaversaccioMWederSDirectional microphone contralateral routing of signals in cochlear implant users: A within-subjects comparisonEar hearing20173836837310.1097/AUD.0000000000000412
BarkerJWatanabeSVincentETrmalJThe fifth’chime’speech separation and recognition challenge: dataset, task and baselinesarXiv preprint arXiv2018180310609
FischerTCaversaccioMWimmerWMultichannel acoustic source and image dataset for the cocktail party effect in hearing aid and implant users202010.6084/m9.figshare.c.5087012.v1figshare
FischerTKompisMMantokoudisGCaversaccioMWimmerWDynamic sound field audiometry: Static and dynamic spatial hearing tests in the full horizontal planeAppl. Acoust.202016610736310.1016/j.apacoust.2020.107363
Stupakov, A., Hanusa, E., Bilmes, J. & Fox, D. Cosine-a corpus of multi-party conversational speech in noisy environments. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 4153–4156 (IEEE, 2009).
Leijon, A. D. 5.1: Subset of signal enhancement techniques operational on pc system. Hear. Deliv. D5 (2005).
CherryECSome Experiments on the Recognition of Speech, with One and with Two EarsThe J. Acoust. Soc. Am.1953259759791953ASAJ...25..975C10.1121/1.1907229
Plack, C. J. (ed.) Oxford Handbook of Auditory Science: Hearing (Oxford University Press, 2010).
AnGThe effects of adding noise during backpropagation training on a generalization performanceNeural computation1996864367410.1162/neco.1996.8.3.643
Mathur, A., Kawsar, F., Berthouze, N. & Lane, N. D. Libri-adapt: a new speech dataset for unsupervised domain adaptation. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 7439–7443 (IEEE, 2020).
QianY-mWengCChangX-kWangSYuDPast review, current progress, and challenges ahead on the cocktail party problemFront. Inf. Technol. & Electron. Eng.201819406310.1631/FITEE.1700814
Pertilä, P., Brutti, A., Svaizer, P. & Omologo, M. Multichannel Source Activity Detection, Localization, and Tracking. In Audio Source Separation and Speech Enhancement, 47–64, https://doi.org/10.1002/9781119279860.ch4 (John Wiley & Sons Ltd, Chichester, UK, 2018).
Shinn-CunninghamBGBottom-up and top-down influences on spatial unmaskingActa Acustica United with Acustica200591967979
Kumar, A. & Florencio, D. Speech enhancement in multiple-noise conditions using deep neural networks. arXiv preprint arXiv:1605.02427 (2016).
Moray, N. Attention: Selective processes in vision and hearing (Routledge, 2017).
Watanabe, S., Mandel, M., Barker, J. & Vincent, E. Chime-6 challenge: Tackling multispeaker speech recognition for unsegmented recordings. arXiv preprint arXiv:2004.09249 (2020).
Pariente, M. et al. Asteroid: the pytorch-based audio source separation toolkit for researchers. arXiv preprint arXiv:2005.04132 (2020).
Reddy, C. K. et al. The interspeech 2020 deep noise suppression challenge: Datasets, subjective speech quality and testing framework. arXiv preprint arXiv:2001.08662 (2020).
Ravanelli, M. et al. The dirha-english corpus and related tasks for distant-speech recognition in domestic environments. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 275–282 (IEEE, 2015).
WimmerWCaversaccioMKompisMSpeech intelligibility in noise with a single-unit cochlear implant audio processorOtol. & neurotology2015361197120210.1097/MAO.0000000000000775
Jeub, M., Herglotz, C., Nelke, C., Beaugeant, C. & Vary, P. Noise reduction for dual-microphone mobile phones exploiting power level differences. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1693–1696 (IEEE, 2012).
References_xml – reference: Zen, H. et al. LibriTTS: A corpus derived from LibriSpeech for text-to-speech. arXiv preprint arXiv:1904.02882 (2019).
– reference: Higuchi, T., Kinoshita, K., Delcroix, M. & Nakatani, T. Adversarial training for data-driven speech enhancement without parallel corpus. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 40–47 (IEEE, 2017).
– reference: International Telecommunication Union. Recommendation ITU-R BS.1770-4. In Algorithms to measure audio programme loudness and true-peak audio level (ITU, 2015).
– reference: Wimmer, W., Kompis, M., Stieger, C., Caversaccio, M. & Weder, S. Directional microphone contralateral routing of signals in cochlear implant users: A within-subjects comparison. Ear Hear. 38, 368–373, https://doi.org/10.1097/AUD.0000000000000412 (2017).
– reference: Barker, J., Watanabe, S., Vincent, E. & Trmal, J. The fifth 'CHiME' speech separation and recognition challenge: Dataset, task and baselines. arXiv preprint arXiv:1803.10609 (2018).
– reference: Wimmer, W., Weder, S., Caversaccio, M. & Kompis, M. Speech intelligibility in noise with a pinna effect imitating cochlear implant processor. Otol. Neurotol. 37, 19–23, https://doi.org/10.1097/MAO.0000000000000866 (2016).
– reference: Avan, P., Giraudet, F. & Büki, B. Importance of binaural hearing. Audiol. Neurotol. 20, 3–6, https://doi.org/10.1159/000380741 (2015).
– reference: Wichern, G. et al. WHAM!: Extending speech separation to noisy environments. arXiv preprint arXiv:1907.01160 (2019).
– reference: Krishnamurthy, N. & Hansen, J. H. Babble noise: modeling, analysis, and applications. IEEE Trans. Audio Speech Lang. Process. 17, 1394–1407, https://doi.org/10.1109/TASL.2009.2015084 (2009).
– reference: Drude, L., Hasenklever, D. & Haeb-Umbach, R. Unsupervised training of a deep clustering model for multichannel blind source separation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 695–699 (IEEE, 2019).
– reference: Bertin, N. et al. VoiceHome-2, an extended corpus for multichannel speech processing in real homes. Speech Commun. 106, 68–78, https://doi.org/10.1016/j.specom.2018.11.002 (2019).
– reference: Reddy, C. K. et al. The INTERSPEECH 2020 deep noise suppression challenge: Datasets, subjective speech quality and testing framework. arXiv preprint arXiv:2001.08662 (2020).
– reference: Gannot, S., Vincent, E., Markovich-Golan, S. & Ozerov, A. A consolidated perspective on multimicrophone speech enhancement and source separation. IEEE/ACM Trans. Audio Speech Lang. Process. 25, 692–730, https://doi.org/10.1109/TASLP.2016.2647702 (2017).
– reference: Corey, R. M., Tsuda, N. & Singer, A. C. Acoustic impulse responses for wearable audio devices. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 216–220 (IEEE, 2019).
– reference: McDermott, J. H. The cocktail party problem. Curr. Biol. 19, R1024–R1027, https://doi.org/10.1016/j.cub.2009.09.005 (2009).
– reference: Vacher, M. et al. The Sweet-Home speech and multimodal corpus for home automation interaction (2014).
– reference: Cuevas-Rodríguez, M. et al. 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation. PLOS ONE 14, e0211899, https://doi.org/10.1371/journal.pone.0211899 (2019).
– reference: Jeub, M., Herglotz, C., Nelke, C., Beaugeant, C. & Vary, P. Noise reduction for dual-microphone mobile phones exploiting power level differences. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1693–1696 (IEEE, 2012).
– reference: Fischer, T. et al. Pinna-imitating microphone directionality improves sound localization and discrimination in bilateral cochlear implant users. Ear Hear. (in print) https://doi.org/10.1097/AUD.0000000000000912 (2020).
– reference: Blauert, J. Spatial hearing: the psychophysics of human sound localization (MIT Press, 1997).
– reference: Pertilä, P., Brutti, A., Svaizer, P. & Omologo, M. Multichannel Source Activity Detection, Localization, and Tracking. In Audio Source Separation and Speech Enhancement, 47–64, https://doi.org/10.1002/9781119279860.ch4 (John Wiley & Sons Ltd, Chichester, UK, 2018).
– reference: International Telecommunication Union. Recommendation ITU-R BS.2051-2. In Advanced sound system for programme production (ITU, 2018).
– reference: Ravanelli, M. et al. The DIRHA-English corpus and related tasks for distant-speech recognition in domestic environments. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 275–282 (IEEE, 2015).
– reference: Girin, L., Gannot, S. & Li, X. Chapter 3 - Audio source separation into the wild. In Alameda-Pineda, X., Ricci, E. & Sebe, N. (eds.) Multimodal Behavior Analysis in the Wild, Computer Vision and Pattern Recognition, 53–78, https://doi.org/10.1016/B978-0-12-814601-9.00022-5 (Academic Press, 2019).
– reference: Bianco, M. J. et al. Machine learning in acoustics: Theory and applications. J. Acoust. Soc. Am. 146, 3590–3628, https://doi.org/10.1121/1.5133944 (2019).
– reference: Bushby, K., Cole, T., Matthews, J. & Goodship, J. Centiles for adult head circumference. Arch. Dis. Child. 67, 1286–1287, https://doi.org/10.1136/adc.67.10.1286 (1992).
– reference: Snyder, D., Chen, G. & Povey, D. MUSAN: A music, speech, and noise corpus. arXiv preprint arXiv:1510.08484 (2015).
– reference: Cucis, P.-A. et al. Hearing in noise: The importance of coding strategies - normal-hearing subjects and cochlear implant users. Appl. Sci. 9, 734, https://doi.org/10.3390/app9040734 (2019).
– reference: Drude, L. & Haeb-Umbach, R. Integration of neural networks and probabilistic spatial models for acoustic blind source separation. IEEE J. Sel. Top. Signal Process. 13, 815–826, https://doi.org/10.1109/JSTSP.2019.2912565 (2019).
– reference: World Health Organization. Deafness and hearing loss, Fact Sheet. https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss (2020).
– reference: Panayotov, V., Chen, G., Povey, D. & Khudanpur, S. Librispeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5206–5210 (IEEE, 2015).
– reference: Fischer, T., Caversaccio, M. & Wimmer, W. A front-back confusion metric in horizontal sound localization: The FBC score. In ACM Symposium on Applied Perception 2020, SAP '20, https://doi.org/10.1145/3385955.3407928 (Association for Computing Machinery, New York, NY, USA, 2020).
– reference: Middlebrooks, J. C., Simon, J. Z., Popper, A. N. & Fay, R. R. The auditory system at the cocktail party, vol. 60 (Springer, 2017).
– reference: Cherry, E. C. Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25, 975–979, https://doi.org/10.1121/1.1907229 (1953).
– reference: Shinn-Cunningham, B. G. Bottom-up and top-down influences on spatial unmasking. Acta Acust. united Acust. 91, 967–979 (2005).
– reference: Kumar, A. & Florencio, D. Speech enhancement in multiple-noise conditions using deep neural networks. arXiv preprint arXiv:1605.02427 (2016).
– reference: Fischer, T., Caversaccio, M. & Wimmer, W. Multichannel acoustic source and image dataset for the cocktail party effect in hearing aid and implant users. figshare https://doi.org/10.6084/m9.figshare.c.5087012.v1 (2020).
– reference: Mathur, A., Kawsar, F., Berthouze, N. & Lane, N. D. Libri-adapt: a new speech dataset for unsupervised domain adaptation. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 7439–7443 (IEEE, 2020).
– reference: Calamia, P., Davis, S., Smalt, C. & Weston, C. A conformal, helmet-mounted microphone array for auditory situational awareness and hearing protection. In 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 96–100 (IEEE, 2017).
– reference: Plack, C. J. (ed.) Oxford Handbook of Auditory Science: Hearing (Oxford University Press, 2010).
– reference: Gawliczek, T. et al. Unilateral and bilateral audiological benefit with an adhesively attached, noninvasive bone conduction hearing system. Otol. Neurotol. 39, 1025–1030, https://doi.org/10.1097/MAO.0000000000001924 (2018).
– reference: Mostefa, D. et al. The CHIL audiovisual corpus for lecture and meeting analysis inside smart rooms. Lang. Resour. Eval. 41, 389–407, https://doi.org/10.1007/s10579-007-9054-4 (2007).
– reference: Fischer, T., Kompis, M., Mantokoudis, G., Caversaccio, M. & Wimmer, W. Dynamic sound field audiometry: Static and dynamic spatial hearing tests in the full horizontal plane. Appl. Acoust. 166, 107363, https://doi.org/10.1016/j.apacoust.2020.107363 (2020).
– reference: Leijon, A. D. 5.1: Subset of signal enhancement techniques operational on PC system. Hear. Deliv. D5 (2005).
– reference: Rayleigh, L. XII. On our perception of sound direction. The London, Edinburgh, and Dublin Philos. Mag. J. Sci. 13, 214–232, https://doi.org/10.1080/14786440709463595 (1907).
– reference: Moray, N. Attention: Selective processes in vision and hearing (Routledge, 2017).
– reference: Van Segbroeck, M. et al. DiPCo - Dinner Party Corpus. arXiv preprint arXiv:1909.13447 (2019).
– reference: Löllmann, H. W. et al. The LOCATA challenge data corpus for acoustic source localization and tracking. In 2018 IEEE 10th Sensor Array and Multichannel Signal Processing Workshop (SAM), 410–414 (2018).
– reference: Richey, C. et al. Voices obscured in complex environmental settings (VOiCES) corpus. arXiv preprint arXiv:1804.05053 (2018).
– reference: Pariente, M. et al. Asteroid: the PyTorch-based audio source separation toolkit for researchers. arXiv preprint arXiv:2005.04132 (2020).
– reference: Cosentino, J., Pariente, M., Cornell, S., Deleforge, A. & Vincent, E. LibriMix: An open-source dataset for generalizable speech separation. arXiv preprint arXiv:2005.11262 (2020).
– reference: Denk, F., Ernst, S. M., Ewert, S. D. & Kollmeier, B. Adapting hearing devices to the individual ear acoustics: Database and target response correction functions for various device styles. Trends Hear. 22, https://doi.org/10.1177/2331216518779313 (2018).
– reference: Kim, C. & Stern, R. M. Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis. In Ninth Annual Conference of the International Speech Communication Association (2008).
– reference: Watanabe, S., Mandel, M., Barker, J. & Vincent, E. CHiME-6 challenge: Tackling multispeaker speech recognition for unsegmented recordings. arXiv preprint arXiv:2004.09249 (2020).
– reference: An, G. The effects of adding noise during backpropagation training on a generalization performance. Neural Comput. 8, 643–674, https://doi.org/10.1162/neco.1996.8.3.643 (1996).
– reference: Stupakov, A., Hanusa, E., Bilmes, J. & Fox, D. COSINE - a corpus of multi-party conversational speech in noisy environments. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 4153–4156 (IEEE, 2009).
– reference: Bozkır, M. G., Karakaş, P., Yavuz, M. & Dere, F. Morphometry of the external ear in our adult population. Aesthetic Plast. Surg. 30, 81–85, https://doi.org/10.1007/s00266-005-6095-1 (2006).
– reference: Levin, D. Y., Habets, E. A. & Gannot, S. Near-field signal acquisition for smartglasses using two acoustic vector-sensors. Speech Commun. 83, 42–53, https://doi.org/10.1016/j.specom.2016.07.002 (2016).
– reference: Qian, Y.-M., Weng, C., Chang, X.-K., Wang, S. & Yu, D. Past review, current progress, and challenges ahead on the cocktail party problem. Front. Inf. Technol. Electron. Eng. 19, 40–63, https://doi.org/10.1631/FITEE.1700814 (2018).
– reference: Sainath, T. N. et al. Multichannel signal processing with deep neural networks for automatic speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 25, 965–979, https://doi.org/10.1109/TASLP.2017.2672401 (2017).
– reference: Wimmer, W., Caversaccio, M. & Kompis, M. Speech intelligibility in noise with a single-unit cochlear implant audio processor. Otol. Neurotol. 36, 1197–1202, https://doi.org/10.1097/MAO.0000000000000775 (2015).
SubjectTerms 631/114/1305
631/378/2619
639/166/985
639/705/117
Acoustics
Cochlea
Cochlear Implants
Data Descriptor
Hearing
Hearing Aids
Humanities and Social Sciences
Humans
multidisciplinary
Noise
Science
Science (multidisciplinary)
Transplants & implants
URI https://link.springer.com/article/10.1038/s41597-020-00777-8
https://www.ncbi.nlm.nih.gov/pubmed/33335098
https://www.proquest.com/docview/2473198121
https://www.proquest.com/docview/2471466173
https://pubmed.ncbi.nlm.nih.gov/PMC7747630
https://doaj.org/article/6c21b804142f4f40914b637beaa35771
Volume 7