The Use of Recurrent Neural Networks in the Optimization of Computer Science Algorithms


Saved in:
Detailed bibliography
Published in: 2023 International Conference on Emerging Research in Computational Science (ICERCS), pp. 1-6
Main authors: Franklin, Ramya G; Ronald Doni, A.; Poornima, D.; Sabasti Prabu, S. Igni
Format: Conference paper
Language: English
Published: IEEE, 7 Dec 2023
Subjects:
Online access: Get full text
Abstract The integration of Recurrent Neural Networks (RNNs) with classical computer science methods has opened a new era of computational efficiency and problem-solving capability, and combining the two is crucial in modern computing. RNNs bring a dynamic learning paradigm, making algorithms more efficient and flexible by allowing them to process sequential input and capture complicated dependencies. Their usefulness extends across many areas of study, including optimization problems, recommendation systems, image processing, and natural language processing. RNNs play a vital role in addressing the hard challenges that algorithms face: processing sequential input, identifying temporal connections, and optimizing in high-dimensional environments. By providing dynamic and adaptive modeling, RNNs help algorithms navigate these obstacles. This research presents the Recurrent Neural Network-Based Optimization Algorithm (RNN-OA), a novel approach to algorithmic optimization that exploits the dynamic learning capability of RNNs. It incorporates attention mechanisms for selective data processing, regularization methods for model stability, and interpretability improvements for transparency, while transfer learning and fine-tuning accelerate optimization across many different problem types. RNN-OA is applicable to a wide variety of computer science fields: time-series analysis, language modeling, image recognition, and recommendation systems all benefit from its ability to handle sequential data. The efficacy, scalability, and robustness of the method are evaluated by simulating varied problem scenarios and data streams; the results illuminate the benefits and drawbacks of RNN-OA, paving the way toward its practical application in algorithmic optimization.
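The abstract describes RNN-OA only at a high level; the record contains no equations or code. As a minimal illustrative sketch (not the paper's published algorithm), the general "RNN-guided optimization" idea can be shown with a toy recurrent cell whose hidden state summarizes the recent gradient history and modulates a gradient-descent step size. All names below (`TinyRNNCell`, `rnn_guided_descent`) are hypothetical:

```python
import math

# Illustrative sketch only: the paper does not publish RNN-OA's update rules,
# so this toy "learned-optimizer" loop merely shows the idea of a recurrent
# state steering an optimization step.

class TinyRNNCell:
    """One-unit vanilla RNN cell: h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)."""
    def __init__(self, w_x=0.5, w_h=0.9, b=0.0):
        self.w_x, self.w_h, self.b = w_x, w_h, b

    def step(self, x, h):
        return math.tanh(self.w_x * x + self.w_h * h + self.b)

def rnn_guided_descent(grad_fn, x0, steps=50, lr=0.1):
    """Gradient descent where the recurrent state modulates the step size.

    A large, persistent gradient grows the hidden state and hence the
    effective step; as the gradient vanishes, the state decays back to zero.
    """
    cell = TinyRNNCell()
    x, h = x0, 0.0
    for _ in range(steps):
        g = grad_fn(x)
        h = cell.step(g, h)           # recurrent summary of gradient history
        x -= lr * (1.0 + abs(h)) * g  # state-modulated update
    return x

# Usage: minimise f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_min = rnn_guided_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

The recurrence is the only "RNN" ingredient here; the paper's actual method additionally involves attention, regularization, and transfer learning, none of which are specified in this record.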
Author Sabasti Prabu, S. Igni
Ronald Doni, A.
Franklin, Ramya G
Poornima, D.
Author_xml – sequence: 1
  givenname: Ramya G
  surname: Franklin
  fullname: Franklin, Ramya G
  email: ramyagfranklin123@gmail.com
  organization: Sathyabama Institute of Science & Technology, Department of Computer Science and Engineering, Chennai, India
– sequence: 2
  givenname: A.
  surname: Ronald Doni
  fullname: Ronald Doni, A.
  email: ronaldtony.a@gmail.com
  organization: Sathyabama Institute of Science & Technology, Department of Computer Science and Engineering, Chennai, India
– sequence: 3
  givenname: D.
  surname: Poornima
  fullname: Poornima, D.
  email: poorniramesh2011@gmail.com
  organization: Sathyabama Institute of Science & Technology, Department of Computer Science and Engineering, Chennai, India
– sequence: 4
  givenname: S. Igni
  surname: Sabasti Prabu
  fullname: Sabasti Prabu, S. Igni
  email: igni.prabu@sathyabama.ac.in
  organization: Sathyabama Institute of Science & Technology, Department of Computer Science and Engineering, Chennai, India
ContentType Conference Proceeding
DOI 10.1109/ICERCS57948.2023.10434013
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Xplore POP ALL
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
Discipline Computer Science
EISBN 9798350359763
EndPage 6
ExternalDocumentID 10434013
Genre orig-research
IsPeerReviewed false
IsScholarly false
Language English
LinkModel DirectLink
PageCount 6
PublicationCentury 2000
PublicationDate 2023-Dec.-7
PublicationDateYYYYMMDD 2023-12-07
PublicationDate_xml – month: 12
  year: 2023
  text: 2023-Dec.-7
  day: 07
PublicationDecade 2020
PublicationTitle 2023 International Conference on Emerging Research in Computational Science (ICERCS)
PublicationTitleAbbrev ICERCS
PublicationYear 2023
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 1
SubjectTerms Computational modeling
Computer science
Data models
Heuristic algorithms
High-Dimensional Environments
Optimization
Recommender systems
Recurrent neural networks
Title The Use of Recurrent Neural Networks in the Optimization of Computer Science Algorithms
URI https://ieeexplore.ieee.org/document/10434013