==Evaluation==
The game-independent [[Evaluation|evaluation]] is implemented as a [[Neural Networks|neural network]] for each side. The inputs to the network are features [[Board Representation|representing the board]], the number of [[Pieces|pieces]] of each type on the board, the type of piece just moved, the type of piece just captured (if any), and several features describing pieces and squares under attack. The neural network evaluator is trained by [[Temporal Difference Learning|temporal difference learning]] to estimate the outcome of the game, given the current position <ref>[http://satirist.org/learn-game/systems/sal.html SAL] from [http://satirist.org/learn-game/ Machine Learning in Games] by [[Jay Scott]]</ref> <ref>[[Marco Block-Berlitz|Marco Block]], Maro Bader, [http://page.mi.fu-berlin.de/tapia/ Ernesto Tapia], Marte Ramírez, Ketill Gunnarsson, Erik Cuevas, Daniel Zaldivar, [[Raúl Rojas]] ('''2008'''). ''Using Reinforcement Learning in Chess Engines''. Concibe Science 2008, [http://www.micai.org/rcs/ Research in Computing Science]: Special Issue in Electronics and Biomedical Engineering, Computer Science and Informatics, Vol. 35, [http://page.mi.fu-berlin.de/block/concibe2008.pdf pdf], 1.1 Related Work</ref>.
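
The general scheme can be sketched as follows. The Python code below is only an illustrative TD(0) example with an assumed single-hidden-layer network; the feature vector layout, network size and learning rate are placeholders and do not reproduce SAL's actual implementation. Each position's estimated outcome is nudged toward the estimate for the successor position, and the final position toward the actual game result.

<pre>
# Illustrative sketch of a TD(0)-trained neural network evaluator.
# Feature layout, hidden size and learning rate are assumptions,
# not SAL's actual parameters.
import numpy as np

class TDEvaluator:
    def __init__(self, n_features, n_hidden=32, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def value(self, x):
        """Estimated game outcome in (0, 1) for feature vector x."""
        h = np.tanh(x @ self.W1 + self.b1)          # hidden activations
        v = 1.0 / (1.0 + np.exp(-(h @ self.W2 + self.b2)))
        return v, h

    def td_update(self, x, target):
        """Move the estimate for x toward the TD target by one gradient step."""
        v, h = self.value(x)
        err = target - v                            # TD error
        grad_out = err * v * (1.0 - v)              # backprop through sigmoid
        self.W2 += self.lr * grad_out * h
        self.b2 += self.lr * grad_out
        grad_h = grad_out * self.W2 * (1.0 - h * h) # backprop through tanh
        self.W1 += self.lr * np.outer(x, grad_h)
        self.b1 += self.lr * grad_h
        return err

def train_on_game(net, positions, result):
    """TD(0) over one game: each position is updated toward the value of
    its successor, the last position toward the final result
    (1 = win, 0.5 = draw, 0 = loss)."""
    for t in range(len(positions)):
        if t + 1 < len(positions):
            target, _ = net.value(positions[t + 1])
        else:
            target = result
        net.td_update(positions[t], target)
</pre>

After each completed game, <code>train_on_game</code> would be called once per side with the feature vectors of the positions seen from that side's perspective, which is one simple way to realize the "one network per side" setup described above.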
==Results==
