Preference Learning for Move Prediction and Evaluation Function Approximation in Othello

Bibliographic Details
Published in: IEEE Transactions on Computational Intelligence and AI in Games, Volume 6, Issue 3, pp. 300-313
Main Authors: Runarsson, Thomas Philip; Lucas, Simon M.
Format: Journal Article
Language: English
Published: IEEE, 1 September 2014
ISSN: 1943-068X, 1943-0698
Description
Summary: This paper investigates the use of preference learning as an approach to move prediction and evaluation function approximation, using the game of Othello as a test domain. Using the same sets of features, we compare our approach with least squares temporal difference learning, direct classification, and with the Bradley-Terry model, fitted using minorization-maximization (MM). The results show that the exact way in which preference learning is applied is critical to achieving high performance. Best results were obtained using a combination of board inversion and pairwise preference learning. This combination significantly outperformed the others under test, both in terms of move prediction accuracy and in the level of play achieved when using the learned evaluation function as a move selector during game play.
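
The abstract names the winning combination (board inversion plus pairwise preference learning) without implementation detail. The sketch below is a hypothetical illustration of that idea, not the authors' code: it assumes a linear evaluation function over a toy feature map, trains it with a logistic, Bradley-Terry-style pairwise loss so that the chosen afterstate outscores the alternative afterstates, and applies board inversion so every afterstate is scored from the side of the player who just moved. The feature map, learning rate, and synthetic training positions are all placeholder assumptions.

import numpy as np

def invert(board):
    # Board inversion: flip piece ownership so every position is scored
    # from the perspective of the player to move (assumption: board holds
    # +1 for own discs, -1 for opponent discs, 0 for empty squares).
    return -board

def features(board):
    # Toy feature map: raw square values plus the disc-count difference.
    # The paper uses richer Othello feature sets; this is only a placeholder.
    return np.append(board.ravel(), board.sum())

def pairwise_update(w, preferred, alternatives, lr=0.01):
    # One stochastic step of pairwise preference learning with a logistic
    # (Bradley-Terry-style) loss: the chosen afterstate should outscore
    # every alternative afterstate reachable from the same position.
    fp = features(preferred)
    for alt in alternatives:
        diff = fp - features(alt)
        p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(preferred beats alternative)
        w = w + lr * (1.0 - p) * diff        # gradient ascent on log-likelihood
    return w

# Illustrative run on synthetic 8x8 positions (not legal Othello states);
# the first candidate stands in for the expert's chosen move.
rng = np.random.default_rng(0)
w = np.zeros(8 * 8 + 1)
for _ in range(200):
    candidates = [invert(rng.choice([-1, 0, 1], size=(8, 8))) for _ in range(4)]
    w = pairwise_update(w, candidates[0], candidates[1:])
print("learned weight vector norm:", np.linalg.norm(w))

In a real setting the candidate afterstates would be the legal moves from recorded expert games, and the learned weight vector would then rank moves during play, which is how the abstract describes using the evaluation function as a move selector.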
DOI: 10.1109/TCIAIG.2014.2307272