Comparing BERT Against Traditional Machine Learning Models in Text Classification

Published in: Journal of Computational and Cognitive Engineering, Volume 2, Issue 4, pp. 352-356
Main authors: Garrido-Merchan, Eduardo C., Gozalo-Brizuela, Roberto, Gonzalez-Carvajal, Santiago
Medium: Journal Article
Language: English
Published: 15 November 2023
ISSN: 2810-9570 (print); eISSN: 2810-9503
Abstract: The BERT model has arisen as a popular state-of-the-art model in recent years. It is able to cope with NLP tasks such as supervised text classification without human supervision. Its flexibility to cope with any corpus while delivering great results has made this approach very popular in academia and industry, although other approaches have been used successfully before. We first present BERT and a review of classical NLP approaches. Then, we empirically test the behaviour of BERT against traditional TF-IDF features fed to machine learning models across a suite of different scenarios. The purpose of this work is to add empirical evidence supporting the use of BERT as a default technique in NLP tasks. Experiments show the superiority of BERT and its independence from features of the NLP problem, such as the language of the text.

Received: 10 March 2023 | Revised: 4 April 2023 | Accepted: 20 April 2023

Conflicts of Interest: The authors declare that they have no conflicts of interest to this work.
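The abstract contrasts BERT with the classical baseline of TF-IDF features fed to machine learning classifiers. As a reference point, here is a minimal from-scratch sketch of the textbook TF-IDF weighting such baselines rely on; the function and corpus below are illustrative, not taken from the paper (libraries such as scikit-learn use smoothed variants of the IDF term).

```python
import math
from collections import Counter

def tfidf(corpus):
    """Compute TF-IDF weight dicts for a list of tokenized documents.

    TF is the raw term count divided by document length; IDF is
    log(N / df), where N is the corpus size and df is the number of
    documents containing the term (textbook form, unsmoothed).
    """
    n = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))  # count each term once per document
    vectors = []
    for doc in corpus:
        counts = Counter(doc)
        length = len(doc)
        vectors.append({
            term: (count / length) * math.log(n / df[term])
            for term, count in counts.items()
        })
    return vectors

# Toy corpus: a term present in every document gets IDF log(N/N) = 0,
# so it carries no weight; rarer terms are weighted up.
docs = [
    ["bert", "beats", "tfidf"],
    ["tfidf", "is", "a", "baseline"],
]
vecs = tfidf(docs)
```

In the classical pipeline the paper evaluates, vectors like these (over the full vocabulary) would then be fed to a conventional classifier such as logistic regression or an SVM, whereas BERT learns its own contextual representations end to end.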
DOI: 10.47852/bonviewJCCE3202838
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)
Full text (open access): https://ojs.bonviewpress.com/index.php/JCCE/article/download/838/394