Large-Scale Study of Perceptual Video Quality



Bibliographic Details
Published in: IEEE transactions on image processing, Vol. 28, No. 2, pp. 612-627
Main Authors: Sinno, Zeina; Bovik, Alan Conrad
Format: Journal Article
Language: English
Published: United States: IEEE, 01.02.2019
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1057-7149; EISSN: 1941-0042
Online Access: Full text
Abstract The great variations of videographic skills, camera designs, compression and processing protocols, communication and bandwidth environments, and displays lead to an enormous variety of video impairments. Current no-reference (NR) video quality models are unable to handle this diversity of distortions. This is true in part because available video quality assessment databases contain very limited content, fixed resolutions, were captured using a small number of camera devices by a few videographers, and have been subjected to only a modest number of distortions. As such, these databases fail to adequately represent real-world videos, which contain very different kinds of content obtained under highly diverse imaging conditions and are subject to authentic, complex, and often commingled distortions that are difficult or impossible to simulate. As a result, NR video quality predictors tested on real-world video data often perform poorly. Toward advancing NR video quality prediction, we have constructed a large-scale video quality assessment database containing 585 videos of unique content, captured by a large number of users, with wide ranges of levels of complex, authentic distortions. We collected a large number of subjective video quality scores via crowdsourcing. A total of 4776 unique participants took part in the study, yielding over 205,000 opinion scores and resulting in an average of 240 recorded human opinions per video. We demonstrate the value of the new resource, which we call the LIVE Video Quality Challenge Database (LIVE-VQC), by conducting a comparison of leading NR video quality predictors on it. This is the largest video quality assessment study ever conducted along several key dimensions: number of unique contents, capture devices, distortion types and combinations of distortions, study participants, and recorded subjective scores. The database is available for download at http://live.ece.utexas.edu/research/LIVEVQC/index.html.
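The score aggregation the abstract describes (over 205,000 raw opinions averaged into one subjective score per video, about 240 opinions each) is, at its core, a per-video mean with a confidence interval. The minimal Python sketch below illustrates that computation on a hypothetical ratings table; it is not the authors' released processing code, and the variable names and 0-100 rating scale are assumptions for illustration only.

    # Minimal sketch: aggregate raw crowdsourced opinion scores into a per-video
    # Mean Opinion Score (MOS) with a 95% confidence interval, mirroring the
    # averaging the abstract describes (~240 opinions per video). The `ratings`
    # table and 0-100 scale are hypothetical stand-ins, not the LIVE-VQC
    # release format.
    from statistics import mean, stdev

    ratings = {
        "A001.mp4": [72, 65, 80, 71, 68],  # video_id -> individual opinion scores
        "A002.mp4": [34, 41, 29, 38, 45],
    }

    for vid, scores in ratings.items():
        mos = mean(scores)
        ci95 = 1.96 * stdev(scores) / len(scores) ** 0.5
        print(f"{vid}: MOS = {mos:.1f} +/- {ci95:.1f} (n={len(scores)})")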
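The comparison of leading NR video quality predictors that the abstract mentions is conventionally reported, in the video quality assessment literature, as Spearman rank correlation (SROCC) between predicted scores and MOS, plus Pearson correlation (PLCC) after a monotonic logistic mapping of predictions onto the MOS scale. The sketch below shows that standard protocol on assumed toy data; the paper's exact fitting procedure may differ.

    # Minimal sketch of the standard NR-VQA evaluation protocol: SROCC on raw
    # predictions, PLCC after a 4-parameter logistic fit that maps predictor
    # output onto the MOS scale. Illustrative only; not the paper's code.
    import numpy as np
    from scipy.stats import spearmanr, pearsonr
    from scipy.optimize import curve_fit

    def logistic4(x, b1, b2, b3, b4):
        # Monotonic 4-parameter logistic used to linearize predictor scores.
        return (b1 - b2) / (1.0 + np.exp(-(x - b3) / abs(b4))) + b2

    def evaluate(pred, mos):
        srocc = spearmanr(pred, mos).correlation
        p0 = [max(mos), min(mos), np.mean(pred), np.std(pred) + 1e-6]
        params, _ = curve_fit(logistic4, pred, mos, p0=p0, maxfev=10000)
        plcc = pearsonr(logistic4(pred, *params), mos)[0]
        return srocc, plcc

    # Hypothetical predictor outputs and MOS values for a handful of videos:
    pred = np.array([0.31, 0.55, 0.42, 0.77, 0.63, 0.20])
    mos = np.array([38.0, 61.0, 47.0, 82.0, 70.0, 25.0])
    print("SROCC = %.3f, PLCC = %.3f" % evaluate(pred, mos))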
Author Sinno, Zeina
Bovik, Alan Conrad
Author_xml – sequence: 1
  givenname: Zeina
  surname: Sinno
  fullname: Sinno, Zeina
  email: zeina@utexas.edu
  organization: Dept. of Electr. & Comput. Eng., Univ. of Texas at Austin, Austin, TX, USA
– sequence: 2
  givenname: Alan Conrad
  surname: Bovik
  fullname: Bovik, Alan Conrad
  email: bovik@ece.utexas.edu
  organization: Dept. of Electr. & Comput. Eng., Univ. of Texas at Austin, Austin, TX, USA
BackLink https://www.ncbi.nlm.nih.gov/pubmed/30222561 (view this record in MEDLINE/PubMed)
CODEN IIPRE4
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019
DOI 10.1109/TIP.2018.2869673
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Applied Sciences
Engineering
EISSN 1941-0042
EndPage 627
ExternalDocumentID 30222561
10_1109_TIP_2018_2869673
8463581
Genre orig-research
Journal Article
ISICitedReferencesCount 208
ISSN 1057-7149
1941-0042
IsPeerReviewed true
IsScholarly true
Issue 2
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0003-4895-7744
PMID 30222561
PQID 2117184859
PQPubID 85429
PageCount 16
PublicationCentury 2000
PublicationDate 2019-02-01
PublicationDateYYYYMMDD 2019-02-01
PublicationDecade 2010
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: New York
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationTitleAlternate IEEE Trans Image Process
PublicationYear 2019
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 612
SubjectTerms Cameras
Computer simulation
crowd sourcing
Distortion
Downloading
Image coding
multimedia
Quality
Quality assessment
Streaming media
Uniqueness
User generated content
Video compression
Video data
Video quality assessment
Videography
Title Large-Scale Study of Perceptual Video Quality
URI https://ieeexplore.ieee.org/document/8463581
https://www.ncbi.nlm.nih.gov/pubmed/30222561
https://www.proquest.com/docview/2117184859
https://www.proquest.com/docview/2109330307
Volume 28
WOSCitedRecordID wos000446255300007
hasFullText 1
inHoldings 1
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 1941-0042
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014516
  issn: 1057-7149
  databaseCode: RIE
  dateStart: 19920101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE