Psychoacoustic cues to emotion in speech prosody and music
Saved in:
| Published in: | Cognition and Emotion, Vol. 27, No. 4, pp. 658-684 |
|---|---|
| Main authors: | Coutinho, Eduardo; Dibben, Nicola |
| Format: | Journal Article |
| Language: | English |
| Published: | Hove: Taylor & Francis Group, 01.06.2013 (Psychology Press) |
| Subjects: | |
| ISSN: | 0269-9931, 1464-0600 |
| Online access: | Full text |
| Abstract | There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain. |
|---|---|
| Author | Coutinho, Eduardo; Dibben, Nicola |
| Author details | Eduardo Coutinho (eduardo.coutinho@unige.ch), School of Music, University of Liverpool; Nicola Dibben, Music Department, University of Sheffield |
| CODEN | COEMEC |
| ContentType | Journal Article |
| Copyright | Copyright Taylor & Francis Group, LLC 2013; 2015 INIST-CNRS |
| DOI | 10.1080/02699931.2012.732559 |
| Discipline | Music; Psychology |
| EISSN | 1464-0600 |
| EndPage | 684 |
| Genre | Research Support, Non-U.S. Gov't; Journal Article |
| ISICitedReferencesCount | 80 |
| ISSN | 0269-9931, 1464-0600 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 4 |
| Keywords | Human; Psychoacoustics; Affect; Affectivity; Cognition; Emotion; Emotionality; Neural networks; Experimental study; Connectionism; Prosody; Valence; Language; Arousal and valence; Music; Arousal; Speech; Speech prosody; Simulation model |
| Language | English |
| License | CC BY 4.0 |
| PMID | 23057507 |
| PageCount | 27 |
| PublicationDate | 2013-06-01 |
| PublicationPlace | Hove, England |
| PublicationTitle | Cognition and emotion |
| PublicationTitleAlternate | Cogn Emot |
| PublicationYear | 2013 |
| Publisher | Taylor & Francis Group; Psychology Press |
| StartPage | 658 |
| SubjectTerms | Acoustic Stimulation; Acoustics; Adolescent; Adult; Affectivity. Emotion; Arousal and valence; Auditory Perception; Biological and medical sciences; Computer Modeling and Simulation; Cues/Cueing; Emotion; Emotional expressivity; Emotions; Female; Fundamental and applied biological sciences. Psychology; Humans; Male; Middle Aged; Models, Psychological; Music; Music - psychology; Neural networks; Neural Networks (Computer); Perception; Personality. Affectivity; Prosody; Psychoacoustics; Psychology. Psychoanalysis. Psychiatry; Psychology. Psychophysiology; Research Subjects; Speech; Speech prosody |
| Title | Psychoacoustic cues to emotion in speech prosody and music |
| URI | https://www.tandfonline.com/doi/abs/10.1080/02699931.2012.732559 https://www.ncbi.nlm.nih.gov/pubmed/23057507 https://www.proquest.com/docview/1353043735 https://www.proquest.com/docview/1372054308 https://www.proquest.com/docview/1372060010 https://www.proquest.com/docview/1417549495 https://www.proquest.com/docview/1417554288 |
| Volume | 27 |
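The computational stage described in the abstract maps seven psychoacoustic features (loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, roughness) onto listeners' second-by-second emotion ratings. This record does not include the authors' own feature extractors, so the following is only a minimal sketch of how per-frame analogues of three of those features could be computed, assuming the Python library librosa; sharpness and roughness require dedicated psychoacoustic models (e.g., Zwicker-style) that the sketch omits.

```python
# Hedged sketch, NOT the paper's implementation: crude per-frame analogues
# of three of the seven psychoacoustic features named in the abstract.
import numpy as np
import librosa

def basic_features(path, hop=512):
    """Return frame times plus RMS (loudness proxy), spectral centroid,
    and half-wave-rectified spectral flux for an audio file."""
    y, sr = librosa.load(path, mono=True)  # resamples to 22050 Hz by default

    # Loudness proxy: RMS energy per frame. Perceptual loudness (sones)
    # would need a full psychoacoustic model; RMS is only a rough stand-in.
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]

    # Spectral centroid: amplitude-weighted mean frequency of each frame,
    # sum_k f_k |X_k| / sum_k |X_k| -- a common correlate of "brightness".
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]

    # Spectral flux: frame-to-frame change in the magnitude spectrum
    # (half-wave rectified, so only energy increases contribute).
    S = np.abs(librosa.stft(y, hop_length=hop))
    flux = np.sqrt((np.diff(S, axis=1).clip(min=0.0) ** 2).sum(axis=0))
    flux = np.concatenate(([0.0], flux))  # pad so all series align

    times = librosa.frames_to_time(np.arange(len(rms)), sr=sr, hop_length=hop)
    return times, rms, centroid, flux
```

Per-frame series like these would then be smoothed and time-aligned with the continuous emotion ratings before fitting a predictive model (a connectionist one, per the record's keywords).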