Gosu

Gosu, a Chess Engine Communication Protocol compatible chess engine by Arkadiusz Paterek, originated as part of his master's thesis. In Korean, its name means "expert".

Description

In Arkadiusz Paterek's paper Modeling of an evaluation function in games [2], which refers to his thesis Modeling of an evaluation function in chess, the evaluation is described as a single-layer perceptron, a design inspired by Michael Buro's general linear evaluation model (GLEM) [3] in the domain of Othello. Gosu performs logistic regression to optimize the weights of the corresponding features, that is, it minimizes the mean squared error loss function by gradient descent over a set of 6.2 million quiet positions from master games. For each position, it takes the squared difference between an oracle score of 0.999 for a win, 0.5 for a draw and 0.0013 for a loss, and the dot product of the weight and feature vectors, squashed by a logistic function into the 0.0 to 1.0 range of a winning probability. To speed up matters after tuning, an evaluation cache is used along with lazy evaluation, which performed well in Gosu's MTD(f) framework.
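
The tuning loop described above can be sketched as follows; this is a minimal Python illustration with hypothetical names, using per-position (stochastic) updates rather than batch gradient descent, and is not Paterek's actual code. Each weight is moved along the gradient of the squared error between the logistic of the dot product and the oracle score.

  import math
  import random

  def sigmoid(x):
      # squash a linear evaluation score into a 0.0 to 1.0 winning probability
      return 1.0 / (1.0 + math.exp(-x))

  def tune_weights(positions, num_features, epochs=10, learning_rate=0.001):
      # positions: list of (features, oracle) pairs, where features is a vector of
      # num_features values and oracle is 0.999 (win), 0.5 (draw) or 0.0013 (loss)
      weights = [0.0] * num_features
      for _ in range(epochs):
          random.shuffle(positions)
          for features, oracle in positions:
              score = sum(w * f for w, f in zip(weights, features))  # dot product
              p = sigmoid(score)                   # predicted winning probability
              # gradient of the squared error (p - oracle)^2 with respect to each weight
              grad = 2.0 * (p - oracle) * p * (1.0 - p)
              for i, f in enumerate(features):
                  weights[i] -= learning_rate * grad * f
      return weights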

Tournament Play

Gosu played in four Polish Computer Chess Championships: after a strong debut at the PCCC 2004, it won the PCCC 2005, finished third at the PCCC 2006, and played the IOPCCC 2007, where it lost the final rounds to eventual winner Glaurung and runner-up WildCat. Gosu further performed at the CCT7, scoring 4½/8.

Publications

Forum Posts

2004

2005

2006 ...

External Links

Chess Engine

Misc

Gosu (programming language) from Wikipedia

References

  1. Pansori gosu from Wikipedia
  2. Arkadiusz Paterek (2004). Modelowanie funkcji oceniającej w grach. University of Warsaw, zipped ps: https://www.mimuw.edu.pl/~paterek/mfog.ps.gz (Polish, Modeling of an evaluation function in games)
  3. Michael Buro (1998). From Simple Features to Sophisticated Evaluation Functions. CG 1998, pdf: https://skatgame.net/mburo/ps/glem.pdf
