Subjective and Objective Audio-Visual Quality Assessment for User Generated Content
In recent years, User Generated Content (UGC) has grown dramatically in video sharing applications. It is necessary for service-providers to use video quality assessment (VQA) to monitor and control users' Quality of Experience when watching UGC videos. However, most existing UGC VQA studies on...
Saved in:
| Published in: | IEEE Transactions on Image Processing, Vol. 32, p. 1 |
|---|---|
| Main authors: | Cao, Yuqin; Min, Xiongkuo; Sun, Wei; Zhai, Guangtao |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.01.2023 (The Institute of Electrical and Electronics Engineers, Inc.) |
| ISSN: | 1057-7149, 1941-0042 |
| Online access: | Get full text |
| Abstract | In recent years, User Generated Content (UGC) has grown dramatically on video sharing platforms. Service providers need video quality assessment (VQA) to monitor and control users' Quality of Experience when watching UGC videos. However, most existing UGC VQA studies focus only on the visual distortions of videos, ignoring that perceptual quality also depends on the accompanying audio signals. In this paper, we conduct a comprehensive study of UGC audio-visual quality assessment (AVQA) from both subjective and objective perspectives. Specifically, we construct the first UGC AVQA database, named the SJTU-UAV database, which includes 520 in-the-wild UGC audio and video (A/V) sequences collected from the YFCC100m database. A subjective AVQA experiment is conducted on the database to obtain the mean opinion scores (MOSs) of the A/V sequences. To demonstrate the content diversity of the SJTU-UAV database, we give a detailed analysis of it alongside two synthetically distorted AVQA databases and one authentically distorted VQA database, from both the audio and video aspects. Then, to facilitate the development of the AVQA field, we construct a benchmark of AVQA models on the proposed SJTU-UAV database and the two other AVQA databases; the benchmark models consist of AVQA models designed for synthetically distorted A/V sequences and AVQA models built by combining popular VQA methods with audio features via a support vector regressor (SVR). Finally, since the benchmark AVQA models perform poorly in assessing in-the-wild UGC videos, we further propose an effective AVQA model that jointly learns quality-aware audio and visual feature representations in the temporal domain, an aspect seldom investigated by existing AVQA models. Our proposed model outperforms the aforementioned benchmark AVQA models on the SJTU-UAV database and the two synthetically distorted AVQA databases. 
The SJTU-UAV database and the code of the proposed model will be released to facilitate further research. |
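The abstract above describes benchmark models that fuse popular VQA methods with audio features via a support vector regressor, evaluated against mean opinion scores. A minimal sketch of that fusion idea, assuming scikit-learn and SciPy and using randomly generated stand-in features (the paper's actual visual and audio features are not reproduced here); the synthetic target plays the role of MOS, which in practice is the mean of subject ratings from a subjective experiment:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_videos = 120

# Stand-in features; in the benchmark these would come from a VQA method
# (visual branch) and from audio descriptors of the accompanying track.
video_feats = rng.normal(size=(n_videos, 32))
audio_feats = rng.normal(size=(n_videos, 16))
X = np.hstack([video_feats, audio_feats])  # early fusion by concatenation

# Synthetic quality target for the demo; a real MOS is the average of
# ratings collected from human subjects for each A/V sequence.
mos = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_videos)

train, test = np.arange(80), np.arange(80, n_videos)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[train], mos[train])
pred = model.predict(X[test])

# Rank correlation between predictions and MOS, a standard AVQA metric.
srcc = spearmanr(pred, mos[test])[0]
```

Concatenating the two modalities before a single regressor is only one fusion strategy; the paper's proposed model instead learns joint quality-aware audio and visual representations in the temporal domain.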
|---|---|
| Author | Cao, Yuqin; Min, Xiongkuo; Sun, Wei; Zhai, Guangtao |
| Author_xml | – sequence: 1 givenname: Yuqin orcidid: 0000-0002-5087-6559 surname: Cao fullname: Cao, Yuqin organization: Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, China – sequence: 2 givenname: Xiongkuo orcidid: 0000-0001-5693-0416 surname: Min fullname: Min, Xiongkuo organization: Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, China – sequence: 3 givenname: Wei orcidid: 0000-0001-8162-1949 surname: Sun fullname: Sun, Wei organization: Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, China – sequence: 4 givenname: Guangtao orcidid: 0000-0001-8165-9322 surname: Zhai fullname: Zhai, Guangtao organization: Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, China |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/37428674 (View this record in MEDLINE/PubMed) |
| CODEN | IIPRE4 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| DBID | 97E RIA RIE AAYXX CITATION NPM 7SC 7SP 8FD JQ2 L7M L~C L~D 7X8 |
| DOI | 10.1109/TIP.2023.3290528 |
| DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) CrossRef PubMed Computer and Information Systems Abstracts Electronics & Communications Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional MEDLINE - Academic |
| DatabaseTitle | CrossRef PubMed Technology Research Database Computer and Information Systems Abstracts – Academic Electronics & Communications Abstracts ProQuest Computer Science Collection Computer and Information Systems Abstracts Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Professional MEDLINE - Academic |
| DatabaseTitleList | PubMed MEDLINE - Academic Technology Research Database |
| Database_xml | – sequence: 1 dbid: NPM name: PubMed url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 2 dbid: RIE name: IEEE Electronic Library (IEL) url: https://ieeexplore.ieee.org/ sourceTypes: Publisher – sequence: 3 dbid: 7X8 name: MEDLINE - Academic url: https://search.proquest.com/medline sourceTypes: Aggregation Database |
| DeliveryMethod | fulltext_linktorsrc |
| Discipline | Applied Sciences Engineering |
| EISSN | 1941-0042 |
| EndPage | 1 |
| ExternalDocumentID | 37428674 10_1109_TIP_2023_3290528 10177693 |
| Genre | orig-research Journal Article |
| IEDL.DBID | RIE |
| ISICitedReferencesCount | 23 |
| ISSN | 1057-7149 1941-0042 |
| IngestDate | Wed Oct 01 14:59:22 EDT 2025 Mon Jun 30 10:14:07 EDT 2025 Wed Feb 19 02:06:58 EST 2025 Sat Nov 29 03:34:42 EST 2025 Tue Nov 18 22:18:53 EST 2025 Wed Aug 27 02:21:39 EDT 2025 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| LinkModel | DirectLink |
| ORCID | 0000-0001-8162-1949 0000-0002-5087-6559 0000-0001-5693-0416 0000-0001-8165-9322 |
| PMID | 37428674 |
| PQID | 2837140021 |
| PQPubID | 85429 |
| PageCount | 1 |
| ParticipantIDs | proquest_miscellaneous_2838245147 pubmed_primary_37428674 crossref_citationtrail_10_1109_TIP_2023_3290528 ieee_primary_10177693 proquest_journals_2837140021 crossref_primary_10_1109_TIP_2023_3290528 |
| PublicationCentury | 2000 |
| PublicationDate | 2023-01-01 |
| PublicationDateYYYYMMDD | 2023-01-01 |
| PublicationDate_xml | – month: 01 year: 2023 text: 2023-01-01 day: 01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States – name: New York |
| PublicationTitle | IEEE transactions on image processing |
| PublicationTitleAbbrev | TIP |
| PublicationTitleAlternate | IEEE Trans Image Process |
| PublicationYear | 2023 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| SSID | ssj0014516 |
| Score | 2.5147579 |
| SourceID | proquest pubmed crossref ieee |
| SourceType | Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 1 |
| SubjectTerms | Audio signals audio-visual quality assessment Benchmark testing Benchmarks Feature extraction multimodal fusion Quality assessment Streaming media User experience User generated content Video Visual databases Visualization |
| Title | Subjective and Objective Audio-Visual Quality Assessment for User Generated Content |
| URI | https://ieeexplore.ieee.org/document/10177693 https://www.ncbi.nlm.nih.gov/pubmed/37428674 https://www.proquest.com/docview/2837140021 https://www.proquest.com/docview/2838245147 |
| Volume | 32 |
| hasFullText | 1 |
| inHoldings | 1 |
| isFullTextHit | |
| isPrint | |
| journalDatabaseRights | – providerCode: PRVIEE databaseName: IEEE Electronic Library (IEL) customDbUrl: eissn: 1941-0042 dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0014516 issn: 1057-7149 databaseCode: RIE dateStart: 19920101 isFulltext: true titleUrlDefault: https://ieeexplore.ieee.org/ providerName: IEEE |
| linkProvider | IEEE |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Subjective+and+Objective+Audio-Visual+Quality+Assessment+for+User+Generated+Content&rft.jtitle=IEEE+transactions+on+image+processing&rft.au=Cao%2C+Yuqin&rft.au=Min%2C+Xiongkuo&rft.au=Sun%2C+Wei&rft.au=Zhai%2C+Guangtao&rft.date=2023-01-01&rft.eissn=1941-0042&rft.volume=32&rft.spage=3847&rft_id=info:doi/10.1109%2FTIP.2023.3290528&rft_id=info%3Apmid%2F37428674&rft.externalDocID=37428674 |