RuyTune




RuyTune is an open source framework for tuning evaluation function parameters, written by Álvaro Begué in C++ and released on Bitbucket in November 2016. RuyTune applies logistic regression using limited-memory BFGS (L-BFGS), a quasi-Newton method that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm using a limited amount of memory. It uses the libLBFGS library along with reverse-mode automatic differentiation. It requires that the evaluation function be converted to a C++ template function whose score type is a template parameter, and a database of quiescent positions with associated game results.
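The template requirement can be illustrated with a minimal sketch. The evaluation term and weights below are hypothetical, not RuyTune's actual code; the point is that the same function body can be instantiated with a plain numeric type for normal play, or with an automatic-differentiation type that records derivatives with respect to the tunable parameters.

```cpp
#include <cassert>

// Hypothetical material-only evaluation, templated on the score type.
// With Score = double this is an ordinary evaluation; with Score set to
// a dual-number or reverse-mode AD type, gradients of the score with
// respect to the weights come out automatically.
template <typename Score>
Score evaluate(int pawn_diff, int knight_diff,
               Score pawn_value, Score knight_value) {
    // White-minus-black piece-count differences times tunable weights.
    return pawn_value * Score(pawn_diff) + knight_value * Score(knight_diff);
}
```

For example, `evaluate<double>(1, 0, 100.0, 300.0)` returns `100.0` for a one-pawn advantage.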

=Method=
The function to minimize is the mean squared error of the prediction:

E = \frac{1}{N} \sum_{i=1}^{N} \left( Sigmoid(q_i) - R_i \right)^2, \qquad Sigmoid(s) = \tanh(0.0043\, s)

where:
 * N is the number of test positions.
 * R_i is the result of the game corresponding to position i: -1 for a black win, 0 for a draw, and +1 for a white win.
 * q_i is the value returned by the engine's evaluation function for position i. (Computing the gradient through the quiescence search is a waste of time: it is much faster to run the quiescence search once, saving the PV, and then compute the gradient using the evaluation function at the end-of-PV position, without worrying too much about the fact that tweaking the evaluation function could result in a different position being picked.)
 * Sigmoid is implemented with the hyperbolic tangent to convert centipawn scores into an expected result in [-1, 1].
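The objective above can be sketched directly. This is an illustrative implementation of the stated formula, not RuyTune's source; in the real framework the q_i come from the templated evaluation and the gradient is obtained by automatic differentiation rather than written by hand.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Map a centipawn score to an expected result in [-1, 1],
// using the scaling constant from the text.
double sigmoid(double s) { return std::tanh(0.0043 * s); }

// Mean squared error over N test positions: q holds the evaluation
// scores q_i in centipawns, r the game results R_i in {-1, 0, +1}.
double mse(const std::vector<double>& q, const std::vector<double>& r) {
    double e = 0.0;
    for (std::size_t i = 0; i < q.size(); ++i) {
        double d = sigmoid(q[i]) - r[i];
        e += d * d;
    }
    return e / static_cast<double>(q.size());
}
```

A perfectly calibrated drawish score gives zero error: `mse({0.0}, {0.0})` returns `0.0`, while a position scored 0 that was actually won by White contributes a full unit of squared error.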

=See also=
 * Arasan's Tuning
 * Eval Tuning in Deep Thought
 * RuyDos
 * Stockfish's Tuning Method
 * Texel's Tuning Method

=Forum Posts=
 * A database for learning evaluation functions by Álvaro Begué, CCC, October 28, 2016
 * C++ code for tuning evaluation function parameters by Álvaro Begué, CCC, November 10, 2016
 * Re: Texel tuning method question by Peter Österlund, CCC, June 07, 2017
 * Re: Texel tuning method question by Álvaro Begué, CCC, June 07, 2017

=External Links=
 * alonamaloh / ruy_tune — Bitbucket (Wayback Machine)
