Rediscovering Reinforcement Learning.
Saved in:
| Title: | Rediscovering Reinforcement Learning. |
|---|---|
| Authors: | Barto, Andrew G. (AUTHOR) barto@cs.umass.edu |
| Source: | Communications of the ACM. Dec 2025, Vol. 68 Issue 12, p98-102. 5p. |
| Subject Terms: | *TECHNOLOGICAL progress, *GOVERNMENT aid to research, REINFORCEMENT learning |
| Corporate Body: | UNITED States. Air Force. Office of Scientific Research |
| Abstract: | The article argues that the evolution of Reinforcement Learning (RL) vividly demonstrates how federally supported basic research, guided by the exploration–exploitation principle, is essential for driving long-term innovation and technological progress. RL, once overshadowed by supervised learning, was revitalized through decades of sustained basic-research funding from the U.S. Air Force Office of Scientific Research and the National Science Foundation. According to the article, early exploratory work rooted in psychology, neuroscience, and cybernetics allowed researchers to investigate unconventional theories, develop foundational RL algorithms, and connect RL to broader mathematical frameworks such as stochastic control and dynamic programming. This exploratory foundation enabled breakthroughs such as temporal-difference learning and deep RL, which later powered transformative applications in robotics, healthcare, gaming, and autonomous decision-making systems. |
| Database: | Business Source Index |
| ISSN: | 00010782 |
| DOI: | 10.1145/3765908 |