The MSR-Video to Text Dataset with Clean Annotations


Detailed Bibliography
Published in: arXiv.org
Main authors: Chen, Haoran; Li, Jianmin; Frintrop, Simone; Hu, Xiaolin
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 25 Feb 2024
Subjects: Annotations; Cleaning; Datasets
ISSN: 2331-8422
Online access: Get full text
Abstract Video captioning automatically generates short descriptions of video content, usually in the form of a single sentence. Many methods have been proposed for solving this task. A large dataset called MSR Video to Text (MSR-VTT) is often used as the benchmark dataset for testing the performance of these methods. However, we found that the human annotations, i.e., the descriptions of the video contents in the dataset, are quite noisy: for example, there are many duplicate captions, and many captions contain grammatical problems. These problems may make it difficult for video captioning models to learn the underlying patterns. We cleaned the MSR-VTT annotations by removing these problems and then tested several typical video captioning models on the cleaned dataset. Experimental results showed that data cleaning boosted the performance of the models as measured by popular quantitative metrics. We also recruited subjects to evaluate the captions of a model trained on the original and on the cleaned dataset. This human behavior experiment demonstrated that, when trained on the cleaned dataset, the model generated captions that were more coherent and more relevant to the contents of the video clips.
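The cleaning procedure is only summarized above. As a rough illustration of the duplicate-removal step (a minimal sketch, not the authors' actual pipeline), the following Python snippet drops exact-duplicate captions within each video clip. The file names and the JSON layout with a "sentences" list of {"video_id", "caption"} entries are assumptions based on the commonly distributed MSR-VTT annotation format, and the grammar-correction step is not covered.

import json
from collections import defaultdict

# Assumed file names; adjust to the local copy of the MSR-VTT annotations.
INPUT_PATH = "train_val_videodatainfo.json"
OUTPUT_PATH = "train_val_videodatainfo.dedup.json"

def dedup_captions(data: dict) -> dict:
    """Drop exact-duplicate captions within each video clip."""
    seen = defaultdict(set)   # video_id -> normalized captions already kept
    kept = []
    for sent in data["sentences"]:
        caption = sent["caption"].strip().lower()
        if caption not in seen[sent["video_id"]]:
            seen[sent["video_id"]].add(caption)
            kept.append(sent)
    cleaned = dict(data)
    cleaned["sentences"] = kept
    return cleaned

if __name__ == "__main__":
    with open(INPUT_PATH, encoding="utf-8") as f:
        annotations = json.load(f)
    with open(OUTPUT_PATH, "w", encoding="utf-8") as f:
        json.dump(dedup_captions(annotations), f, ensure_ascii=False, indent=2)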
Author Chen, Haoran
Frintrop, Simone
Li, Jianmin
Hu, Xiaolin
ContentType Paper
Copyright 2024. This work is published under http://creativecommons.org/licenses/by-nc-sa/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.48550/arxiv.2102.06448
Discipline Physics
EISSN 2331-8422
Genre Working Paper/Pre-Print
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed false
IsScholarly false
Language English
OpenAccessLink https://www.proquest.com/docview/2489447444
PublicationDate 2024-02-25
PublicationPlace Ithaca
PublicationTitle arXiv.org
PublicationYear 2024
Publisher Cornell University Library, arXiv.org
SecondaryResourceType preprint
SubjectTerms Annotations
Cleaning
Datasets
Title The MSR-Video to Text Dataset with Clean Annotations
URI https://www.proquest.com/docview/2489447444