A multi-genre model for music emotion recognition using linear regressors
Making the link between human emotion and music is challenging. Our aim was to produce an efficient system that emotionally rates songs from multiple genres. To achieve this, we employed a series of online self-report studies, utilising Russell's circumplex model. The first study (n = 44) identified audio features that map to arousal and valence for 20 songs. From this, we constructed a set of linear regressors. The second study (n = 158) measured the efficacy of our system, utilising 40 new songs to create a ground truth. Results show our approach may be effective at emotionally rating music, particularly in the prediction of valence.
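The approach described in the abstract — extracting song-level audio features and fitting separate linear regressors for arousal and valence against mean self-report ratings — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific features (tempo, RMS energy, spectral centroid, mode), the synthetic data, and the use of scikit-learn are assumptions made only for demonstration.

```python
# Minimal sketch (not the published system): two independent linear regressors
# that map song-level audio features to arousal and valence on Russell's
# circumplex model. Feature names and data below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical per-song feature matrix for 40 songs: tempo (BPM), RMS energy,
# spectral centroid (Hz), and mode (0 = minor, 1 = major).
X = np.column_stack([
    rng.uniform(60, 180, 40),      # tempo
    rng.uniform(0.01, 0.3, 40),    # RMS energy
    rng.uniform(500, 4000, 40),    # spectral centroid
    rng.integers(0, 2, 40),        # mode
])

# Ground-truth ratings on a -1..1 scale, e.g. mean self-report ratings per song.
arousal = rng.uniform(-1, 1, 40)
valence = rng.uniform(-1, 1, 40)

# One linear regressor per affective dimension.
arousal_model = LinearRegression().fit(X, arousal)
valence_model = LinearRegression().fit(X, valence)

# Emotionally rate a new song from its extracted features.
new_song = np.array([[120.0, 0.12, 1800.0, 1]])
print("predicted arousal:", arousal_model.predict(new_song)[0])
print("predicted valence:", valence_model.predict(new_song)[0])
```

Fitting arousal and valence with separate regressors mirrors treating the two circumplex dimensions as independent targets, which is the framing the abstract implies.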
Saved in:
| Published in: | Journal of New Music Research, Vol. 50, Issue 4, pp. 355-372 |
|---|---|
| Main authors: | Griffiths, Darryl; Cunningham, Stuart; Weinel, Jonathan; Picking, Richard |
| Format: | Journal Article |
| Language: | English |
| Published: | Abingdon: Routledge, Taylor & Francis Ltd, 8 August 2021 |
| Subject terms: | Arousal; emotion; Emotion recognition; Emotions; MER; Music perception; regression; valence |
| ISSN: | 0929-8215, 1744-5027 |
| Online access: | Full text |
| Abstract | Making the link between human emotion and music is challenging. Our aim was to produce an efficient system that emotionally rates songs from multiple genres. To achieve this, we employed a series of online self-report studies, utilising Russell's circumplex model. The first study (n = 44) identified audio features that map to arousal and valence for 20 songs. From this, we constructed a set of linear regressors. The second study (n = 158) measured the efficacy of our system, utilising 40 new songs to create a ground truth. Results show our approach may be effective at emotionally rating music, particularly in the prediction of valence. |
|---|---|
| Author | Picking, Richard; Weinel, Jonathan; Griffiths, Darryl; Cunningham, Stuart |
| Author_xml | – sequence: 1 givenname: Darryl surname: Griffiths fullname: Griffiths, Darryl organization: Wrexham Glyndwr University – sequence: 2 givenname: Stuart orcidid: 0000-0002-5348-7700 surname: Cunningham fullname: Cunningham, Stuart email: s.cunningham@mmu.ac.uk organization: Manchester Metropolitan University – sequence: 3 givenname: Jonathan orcidid: 0000-0001-5347-3897 surname: Weinel fullname: Weinel, Jonathan organization: University of Greenwich – sequence: 4 givenname: Richard orcidid: 0000-0002-1471-0914 surname: Picking fullname: Picking, Richard organization: Wrexham Glyndwr University |
| ContentType | Journal Article |
| Copyright | 2021 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group 2021 2021 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This work is licensed under the Creative Commons Attribution – Non-Commercial – No Derivatives License http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1080/09298215.2021.1977336 |
| Discipline | Music |
| EISSN | 1744-5027 |
| EndPage | 372 |
| Genre | Research Article |
| ISSN | 0929-8215 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 4 |
| Language | English |
| License | open-access: http://creativecommons.org/licenses/by-nc-nd/4.0/: This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way. |
| ORCID | 0000-0001-5347-3897 0000-0002-1471-0914 0000-0002-5348-7700 |
| OpenAccessLink | https://www.tandfonline.com/doi/abs/10.1080/09298215.2021.1977336 |
| PageCount | 18 |
| PublicationDate | 2021-08-08 |
| PublicationPlace | Abingdon |
| PublicationTitle | Journal of new music research |
| PublicationYear | 2021 |
| Publisher | Routledge Taylor & Francis Ltd |
| StartPage | 355 |
| SubjectTerms | Arousal emotion Emotion recognition Emotions MER Music perception regression valence |
| Title | A multi-genre model for music emotion recognition using linear regressors |
| URI | https://www.tandfonline.com/doi/abs/10.1080/09298215.2021.1977336 https://www.proquest.com/docview/2605428934 |
| Volume | 50 |