Large-Scale Study of Perceptual Video Quality
The great variation in videographic skill, camera designs, compression and processing protocols, communication and bandwidth environments, and displays leads to an enormous variety of video impairments. Current no-reference (NR) video quality models are unable to handle this diversity of distortions.
| Published in: | IEEE Transactions on Image Processing, Vol. 28, No. 2, pp. 612-627 |
|---|---|
| Main Authors: | Sinno, Zeina; Bovik, Alan Conrad |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.02.2019 |
| Subjects: | Cameras; Computer simulation; Crowdsourcing; Distortion; Downloading; Image coding; Multimedia; Quality; Quality assessment; Streaming media; Uniqueness; User-generated content; Video compression; Video data; Video quality assessment; Videography |
| ISSN: | 1057-7149 (print); 1941-0042 (electronic) |
| Online Access: | https://doi.org/10.1109/TIP.2018.2869673; https://ieeexplore.ieee.org/document/8463581 |
| Abstract: | The great variation in videographic skill, camera designs, compression and processing protocols, communication and bandwidth environments, and displays leads to an enormous variety of video impairments. Current no-reference (NR) video quality models are unable to handle this diversity of distortions. This is true in part because available video quality assessment databases contain very limited content at fixed resolutions, were captured using a small number of camera devices by a few videographers, and have been subjected to a modest number of distortions. As such, these databases fail to adequately represent real-world videos, which contain very different kinds of content obtained under highly diverse imaging conditions and are subject to authentic, complex, and often commingled distortions that are difficult or impossible to simulate. As a result, NR video quality predictors tested on real-world video data often perform poorly. Toward advancing NR video quality prediction, we have constructed a large-scale video quality assessment database containing 585 videos of unique content, captured by a large number of users, with wide ranges of levels of complex, authentic distortions. We collected a large number of subjective video quality scores via crowdsourcing. A total of 4776 unique participants took part in the study, yielding over 205,000 opinion scores and an average of 240 recorded human opinions per video. We demonstrate the value of the new resource, which we call the LIVE Video Quality Challenge Database (LIVE-VQC), by conducting a comparison of leading NR video quality predictors on it. This is the largest video quality assessment study ever conducted along several key dimensions: number of unique contents, capture devices, distortion types and combinations of distortions, study participants, and recorded subjective scores. The database is available for download at http://live.ece.utexas.edu/research/LIVEVQC/index.html |
| Affiliation: | Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA |
| Contact: | zeina@utexas.edu; bovik@ece.utexas.edu |
| Copyright: | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
| DOI: | 10.1109/TIP.2018.2869673 |
| ORCID: | 0000-0003-4895-7744 |
| PMID: | 30222561 |
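The abstract describes two mechanical steps worth making concrete: forming mean opinion scores (MOS) from raw crowdsourced ratings, and scoring an NR quality predictor against those MOS. The sketch below illustrates both under stated assumptions. All data are synthetic and every variable name is hypothetical; this is not the authors' actual pipeline, and real evaluations would also include subject screening and score rejection steps that the abstract does not detail.

```python
# Minimal sketch of MOS aggregation and NR-predictor evaluation,
# using only the figures quoted in the abstract. Synthetic data;
# names like raw_scores and predictions are illustrative.
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(0)

n_videos = 585           # unique contents in LIVE-VQC
ratings_per_video = 240  # average recorded opinions per video

# Raw opinion scores on a continuous [0, 100] rating scale,
# one row of subject ratings per video.
raw_scores = rng.uniform(0, 100, size=(n_videos, ratings_per_video))

# MOS: the per-video mean of the retained subject ratings.
mos = raw_scores.mean(axis=1)

# Stand-in predictions from some NR quality model (placeholder for
# models such as NIQE, BRISQUE, or V-BLIINDS).
predictions = mos + rng.normal(0, 10, size=n_videos)

# Standard agreement measures: Spearman rank-order correlation (SROCC)
# for monotonicity, Pearson linear correlation (PLCC) for linearity.
srocc, _ = spearmanr(predictions, mos)
plcc, _ = pearsonr(predictions, mos)
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```

With the released LIVE-VQC data, `raw_scores` would be replaced by the actual per-video subject ratings and `predictions` by the output of a real NR model; the two correlation coefficients are the usual figures of merit in the kind of predictor comparison the paper reports.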