Deep Forest Regression Based on Dynamic State Transition Optimization Algorithm

As a deep algorithm with a non-neural-network structure, deep forest regression (DFR) can be used to build soft measurement models of difficult-to-measure key parameters. However, as with other deep learning methods, hyperparameter optimization is an unavoidable problem in DFR. To solve this problem, an improved dynamic state transition algorithm (DSTA) is used to optimize the hyperparameters of the model. To achieve a more accurate optimization process, the error change rate is used to fine-tune the state factor during the iterations, and the search is further improved with gradient-based refinement. Finally, simulation experiments are performed on a benchmark data set, and the satisfactory results show the effectiveness of the proposed approach.
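The abstract describes the optimization loop only at a high level: a state-transition-style search proposes new hyperparameter settings for the forest model, and the factor controlling the size of those moves is adjusted according to the error change rate between iterations. The minimal Python sketch below illustrates that idea under explicit assumptions; it is not the authors' implementation. A scikit-learn RandomForestRegressor stands in for deep forest regression, the diabetes data set stands in for the unnamed benchmark, the transformation operators are heavily simplified, and the factor-update rule driven by the error change rate is a hypothetical choice made only for illustration.

# Illustrative sketch only (assumptions noted above): a RandomForestRegressor
# stands in for deep forest regression, and a simplified state-transition
# search with an error-change-rate-driven step factor stands in for the
# paper's dynamic STA.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_diabetes(return_X_y=True)

# State vector = [n_estimators, max_depth], searched as real numbers.
lower = np.array([10.0, 2.0])
upper = np.array([200.0, 20.0])


def cv_rmse(state):
    """Cross-validated RMSE of the surrogate forest for one state."""
    model = RandomForestRegressor(n_estimators=int(round(state[0])),
                                  max_depth=int(round(state[1])),
                                  random_state=0)
    scores = cross_val_score(model, X, y, cv=3,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()


best = np.clip(lower + rng.random(2) * (upper - lower), lower, upper)
best_err = cv_rmse(best)
alpha = 1.0  # step factor, adapted from the error change rate

for _ in range(10):
    prev_err = best_err
    # Rotation-like moves: random perturbations around the incumbent,
    # scaled by alpha and by the search range.
    candidates = [np.clip(best + alpha * 0.1 * (upper - lower)
                          * rng.standard_normal(2), lower, upper)
                  for _ in range(5)]
    # Contraction-like move: pull the state halfway toward the range centre.
    candidates.append(np.clip(0.5 * (best + 0.5 * (upper + lower)),
                              lower, upper))
    errors = [cv_rmse(c) for c in candidates]
    i = int(np.argmin(errors))
    if errors[i] < best_err:
        best, best_err = candidates[i], errors[i]
    # Assumed dynamic update: when the relative error change is small,
    # shrink the step factor to refine locally; otherwise relax it again.
    change_rate = (prev_err - best_err) / max(prev_err, 1e-12)
    alpha = 0.7 * alpha if change_rate < 1e-3 else min(1.05 * alpha, 1.0)

print("selected n_estimators:", int(round(best[0])),
      "max_depth:", int(round(best[1])),
      "cross-validated RMSE: %.3f" % best_err)

Swapping cv_rmse for the cross-validated error of an actual cascade-forest model, and the two-dimensional state for the full DFR hyperparameter vector, would bring the sketch closer to the setting the paper studies; the paper's own gradient-based refinement step is not reproduced here.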


Detailed bibliography
Published in: Chinese Automation Congress (Online), pp. 3786-3791
Main authors: Xia, Heng; Tang, Jian; Qiao, Junfei
Format: Conference paper
Language: English
Published: IEEE, 6 November 2020
ISSN: 2688-0938
Author Qiao, Junfei
Xia, Heng
Tang, Jian
Author_xml – sequence: 1
  givenname: Heng
  surname: Xia
  fullname: Xia, Heng
  email: Xia_heng1220@163.com
  organization: Beijing University of Technology,Faculty of Information Technology,Beijing,China
– sequence: 2
  givenname: Jian
  surname: Tang
  fullname: Tang, Jian
  email: freeflytang@bjut.edu.cn
  organization: Beijing University of Technology,Faculty of Information Technology,Beijing,China
– sequence: 3
  givenname: Junfei
  surname: Qiao
  fullname: Qiao, Junfei
  email: junfeiq@bjut.edu.cn
  organization: Beijing University of Technology,Faculty of Information Technology,Beijing,China
ContentType Conference Proceeding
DOI 10.1109/CAC51589.2020.9327803
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan All Online (POP All Online) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE/IET Electronic Library
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
Discipline Forestry
EISBN 1728176875
9781728176871
EISSN 2688-0938
EndPage 3791
ExternalDocumentID 9327803
Genre orig-research
GrantInformation_xml – fundername: National Natural Science Foundation of China
  funderid: 10.13039/501100001809
ISICitedReferencesCount 1
IsPeerReviewed false
IsScholarly false
Language English
PageCount 6
PublicationCentury 2000
PublicationDate 2020-Nov.-6
PublicationDateYYYYMMDD 2020-11-06
PublicationDate_xml – month: 11
  year: 2020
  text: 2020-Nov.-6
  day: 06
PublicationDecade 2020
PublicationTitle Chinese Automation Congress (Online)
PublicationTitleAbbrev CAC
PublicationYear 2020
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 3786
SubjectTerms Data models
deep forest regression
dynamic state transition algorithm
Forestry
Heuristic algorithms
hyper-parameters optimization
Optimization
Stochastic processes
Support vector machines
Training
Title Deep Forest Regression Based on Dynamic State Transition Optimization Algorithm
URI https://ieeexplore.ieee.org/document/9327803